| column | dtype | stats |
|---|---|---|
| url | stringlengths | 61 – 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75 – 75 |
| comments_url | stringlengths | 70 – 70 |
| events_url | stringlengths | 68 – 68 |
| html_url | stringlengths | 49 – 51 |
| id | int64 | 1.23B – 2.21B |
| node_id | stringlengths | 18 – 19 |
| number | int64 | 4.29k – 6.76k |
| title | stringlengths | 1 – 290 |
| user | dict | |
| labels | listlengths | 0 – 4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0 – 3 |
| milestone | dict | |
| comments | int64 | 0 – 48 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 2 – 33.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70 – 70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
| comments_body | sequencelengths | 0 – 30 |
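The table above summarizes the schema of this dump: one row per GitHub issue or pull request from `huggingface/datasets`, with per-column type and length statistics. As a sanity check, a minimal sketch like the following could load such a dump and verify a couple of the reported ranges, assuming it were published under a hypothetical Hub id `user/github-issues` (a placeholder, not a real repository):

```python
# Minimal sketch, assuming a hypothetical dataset id "user/github-issues".
from datasets import load_dataset

ds = load_dataset("user/github-issues", split="train")

# The features object mirrors the column/dtype table above.
print(ds.features)

# Spot-check the reported length range for the fixed-width "url" column (61 - 61).
lengths = [len(u) for u in ds["url"]]
print(min(lengths), max(lengths))
```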
https://api.github.com/repos/huggingface/datasets/issues/4391
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4391/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4391/comments
https://api.github.com/repos/huggingface/datasets/issues/4391/events
https://github.com/huggingface/datasets/pull/4391
1,244,839,185
PR_kwDODunzps44RpGv
4,391
Refactor column mappings for question answering datasets
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
"2022-05-23T09:13:14"
"2022-05-24T12:57:00"
"2022-05-24T12:48:48"
MEMBER
null
This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain. As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR. cc @sashavor
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4391/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4391", "html_url": "https://github.com/huggingface/datasets/pull/4391", "diff_url": "https://github.com/huggingface/datasets/pull/4391.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4391.patch", "merged_at": "2022-05-24T12:48:48" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks.\r\n> \r\n> I have no visibility about this, but if you say it is more useful for AutoTrain this way...\r\n\r\nThanks for the review @albertvillanova ! Yes, I need some way to reconstruct the original column names with a period because that's how they appear after we flatten the nested columns. In any case, we can adjust this later if needed :)", "Does that mean that we need to change the metadata?", "> Does that mean that we need to change the metadata?\r\n\r\nYes, but this PR takes care of it :)", "Oh good! thanks for the heads up!" ]
https://api.github.com/repos/huggingface/datasets/issues/4390
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4390/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4390/comments
https://api.github.com/repos/huggingface/datasets/issues/4390/events
https://github.com/huggingface/datasets/pull/4390
1,244,835,877
PR_kwDODunzps44RoXs
4,390
Fix metadata validation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-23T09:11:20"
"2022-06-01T09:27:52"
"2022-06-01T09:19:25"
MEMBER
null
Since Python 3.8, the typing module: - raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__` - provides the `get_args` function instead: `get_args(List)` This PR implements a fix for Python >=3.8 while maintaining backward compatibility.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4390/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4390", "html_url": "https://github.com/huggingface/datasets/pull/4390", "diff_url": "https://github.com/huggingface/datasets/pull/4390.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4390.patch", "merged_at": "2022-06-01T09:19:25" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4389
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4389/comments
https://api.github.com/repos/huggingface/datasets/issues/4389/events
https://github.com/huggingface/datasets/pull/4389
1,244,693,690
PR_kwDODunzps44RKMn
4,389
Fix bug in gem dataset for wiki_auto_asset_turk config
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-23T07:19:49"
"2022-05-23T10:38:26"
"2022-05-23T10:29:55"
MEMBER
null
This PR fixes some URLs. Fix #4386.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4389/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4389", "html_url": "https://github.com/huggingface/datasets/pull/4389", "diff_url": "https://github.com/huggingface/datasets/pull/4389.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4389.patch", "merged_at": "2022-05-23T10:29:55" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4388/comments
https://api.github.com/repos/huggingface/datasets/issues/4388/events
https://github.com/huggingface/datasets/pull/4388
1,244,645,158
PR_kwDODunzps44RAG1
4,388
Set builder name from module instead of class
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-23T06:26:35"
"2022-05-25T05:24:43"
"2022-05-25T05:16:15"
MEMBER
null
Now the builder name attribute is set from the builder class name. This PR sets the builder name attribute from the module name instead. Some motivating reasons: - The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory containing the dataset - The name of the module (i.e. the file containing the loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name - On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is a subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class and indeed it can have a name completely different from its module/directory/dataset_id. IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name. Fix #4381.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4388/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4388/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4388", "html_url": "https://github.com/huggingface/datasets/pull/4388", "diff_url": "https://github.com/huggingface/datasets/pull/4388.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4388.patch", "merged_at": "2022-05-25T05:16:15" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4387/comments
https://api.github.com/repos/huggingface/datasets/issues/4387/events
https://github.com/huggingface/datasets/issues/4387
1,244,147,817
I_kwDODunzps5KKDBp
4,387
device/google/accessory/adk2012 - Git at Google
{ "login": "Aeckard45", "id": 87345839, "node_id": "MDQ6VXNlcjg3MzQ1ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aeckard45", "html_url": "https://github.com/Aeckard45", "followers_url": "https://api.github.com/users/Aeckard45/followers", "following_url": "https://api.github.com/users/Aeckard45/following{/other_user}", "gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions", "organizations_url": "https://api.github.com/users/Aeckard45/orgs", "repos_url": "https://api.github.com/users/Aeckard45/repos", "events_url": "https://api.github.com/users/Aeckard45/events{/privacy}", "received_events_url": "https://api.github.com/users/Aeckard45/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-22T04:57:19"
"2022-05-23T06:36:27"
"2022-05-23T06:36:27"
NONE
null
"git clone https://android.googlesource.com/device/google/accessory/adk2012" https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4387/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4386/comments
https://api.github.com/repos/huggingface/datasets/issues/4386/events
https://github.com/huggingface/datasets/issues/4386
1,243,965,532
I_kwDODunzps5KJWhc
4,386
Bug for wiki_auto_asset_turk from GEM
{ "login": "StevenTang1998", "id": 37647985, "node_id": "MDQ6VXNlcjM3NjQ3OTg1", "avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StevenTang1998", "html_url": "https://github.com/StevenTang1998", "followers_url": "https://api.github.com/users/StevenTang1998/followers", "following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}", "gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions", "organizations_url": "https://api.github.com/users/StevenTang1998/orgs", "repos_url": "https://api.github.com/users/StevenTang1998/repos", "events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}", "received_events_url": "https://api.github.com/users/StevenTang1998/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
7
"2022-05-21T12:31:30"
"2022-05-24T05:55:52"
"2022-05-23T10:29:55"
NONE
null
## Describe the bug The script of wiki_auto_asset_turk for GEM may be out of date. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('gem', 'wiki_auto_asset_turk') ``` ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset builder_instance.download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare self._download_and_prepare( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators dl_dir = dl_manager.download_and_extract(_URLs[self.config.name]) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download downloaded_path_or_paths = map_nested( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested mapped = [ File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested return function(data_struct) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path output_path = get_from_cache( File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4386/timeline
null
completed
null
null
false
[ "Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ", "Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```", "Thanks for your reply!!\r\nAnd the totto dataset has the same problem. The url should be change to [https://storage.googleapis.com/totto-public/totto_data.zip](https://storage.googleapis.com/totto-public/totto_data.zip).", "Hi again @StevenTang1998,\r\n\r\nI don't see any problem when loading `totto` dataset:\r\n```python\r\nIn [4]: import datasets\r\n ...: ds = datasets.load_dataset(\"totto\")\r\nDownloading builder script: 5.58kB [00:00, 5.33MB/s] \r\nDownloading metadata: 2.78kB [00:00, 2.96MB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset totto/default (download: 179.03 MiB, generated: 706.59 MiB, post-processed: Unknown size, total: 885.62 MiB) to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 188M/188M [00:32<00:00, 5.77MB/s]\r\nDataset totto downloaded and prepared to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 147.95it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 120761\r\n })\r\n validation: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n test: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n})\r\n```", "Sorry, I didn't express it clearly. It's the totto dataset from gem.\r\ndatasets.load_dataset('gem', 'totto')\r\n", "@StevenTang1998 fixed in:\r\n- #4396", "Thanks!!" ]
https://api.github.com/repos/huggingface/datasets/issues/4385
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4385/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4385/comments
https://api.github.com/repos/huggingface/datasets/issues/4385/events
https://github.com/huggingface/datasets/pull/4385
1,243,921,287
PR_kwDODunzps44OwXF
4,385
Test dill
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2022-05-21T08:57:43"
"2022-05-25T08:30:13"
"2022-05-25T08:21:48"
MEMBER
null
Regression test for future releases of `dill`. Related to #4379.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4385/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4385", "html_url": "https://github.com/huggingface/datasets/pull/4385", "diff_url": "https://github.com/huggingface/datasets/pull/4385.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4385.patch", "merged_at": "2022-05-25T08:21:48" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.5 and ones that I will make in 0.3.6 will result in different pickles than the ones dill 0.3.4 was making. This should still be fine for caching.", "Just some comments @lhoestq:\r\n\r\nThe best practice for testing is to have a `test_<filename>.py` for each `<filename>.py`. Therefore in order to have the filenames aligned, I would propose:\r\n- either renaming `fingerprint.py` to `caching.py`\r\n- or renaming `test_caching.py` to `test_fingerprint.py`\r\n\r\nOn the other hand, my idea when implementing this test was not to test all the functionalities of the `Hasher`, but just to have a regression test that fails if dill version is > 0.3.4 and the pin in our `setup.py` is not present. Just recall that we had no failing test in our CI when the issue with dill was found on `transformers`.\r\n\r\nThe objective of this PR is just to have a regression test for that case: I tested and I got `AttributeError: module 'dill._dill' has no attribute 'stack'`\r\n\r\nFor this regression test, I took into account this comment by @gugarosa: https://github.com/huggingface/datasets/issues/4379#issuecomment-1133131825\r\n\r\nThere is no equivalent test in `test_caching.py` because our CI did not fail before pinning dill.", "Ok I see, renaming it to `test_fingerprint.py` sounds like a good idea :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4384
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4384/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4384/comments
https://api.github.com/repos/huggingface/datasets/issues/4384/events
https://github.com/huggingface/datasets/pull/4384
1,243,919,748
PR_kwDODunzps44OwFr
4,384
Refactor download
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2022-05-21T08:49:24"
"2022-05-25T10:52:02"
"2022-05-25T10:43:43"
MEMBER
null
This PR performs a refactoring of the download functionalities, by proposing a modular solution and moving them to their own package "download". Some motivating arguments: - understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities - abstraction: the level of abstraction of "download" (higher) is not the same as "utils" (lower); putting different levels of abstraction together makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower - architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities, not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture. Also note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making it more and more difficult to make enhancements. As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860 - After an extension, a circular import is found - Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction: ``` ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'. tests/conftest.py:12: in <module> import datasets ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module> from .arrow_dataset import Dataset, concatenate_datasets ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module> from . import config ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module> from .utils.logging import get_logger ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module> from .download_manager import DownloadConfig, DownloadManager, DownloadMode ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module> from .py_utils import NestedDataStructure, map_nested, size_str ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module> if config.DILL_VERSION < version.parse("0.3.5"): E AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION' ``` Imports: - datasets - Dataset: lower level than datasets - config: lower level than Dataset - logger: lower level than config - DownloadManager: !!! HIGHER level of abstraction than logger!! Why, when importing the logger, do we require importing DownloadManager?!? - Logically, it does not make sense - This is due to an error in the design/architecture of our library: - To import the logger, we need to import it from `.utils.logging` - To import `.utils.logging` we need to import `.utils` - The import of `.utils` requires the import of all its submodules defined in `utils.__init__.py`, among them: `.utils.download_manager`! When putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first: this is a strong coupling between modules, and moreover between modules at different levels of abstraction (to import a lower-level module, we must import a higher-level module). Additionally, it is clear that it makes no sense that in order to import `logging` we must import `download_manager` first.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4384/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4384", "html_url": "https://github.com/huggingface/datasets/pull/4384", "diff_url": "https://github.com/huggingface/datasets/pull/4384.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4384.patch", "merged_at": "2022-05-25T10:43:43" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?", "The might be only there to help type checkers, but I am not too familiar with the code base to know for sure. I think this might be useful:\n\nhttps://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING", "> This looks like a breaking change no ?\r\n> Also could you explain why it would be better this way ?\r\n\r\nSorry, @lhoestq, I naively thought it was obvious. I have tried to give some arguments in the motivation of this PR (see above). I can give additional arguments if needed. " ]
https://api.github.com/repos/huggingface/datasets/issues/4383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4383/comments
https://api.github.com/repos/huggingface/datasets/issues/4383/events
https://github.com/huggingface/datasets/issues/4383
1,243,856,981
I_kwDODunzps5KI8BV
4,383
L
{ "login": "AronCodes21", "id": 99847861, "node_id": "U_kgDOBfOOtQ", "avatar_url": "https://avatars.githubusercontent.com/u/99847861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AronCodes21", "html_url": "https://github.com/AronCodes21", "followers_url": "https://api.github.com/users/AronCodes21/followers", "following_url": "https://api.github.com/users/AronCodes21/following{/other_user}", "gists_url": "https://api.github.com/users/AronCodes21/gists{/gist_id}", "starred_url": "https://api.github.com/users/AronCodes21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AronCodes21/subscriptions", "organizations_url": "https://api.github.com/users/AronCodes21/orgs", "repos_url": "https://api.github.com/users/AronCodes21/repos", "events_url": "https://api.github.com/users/AronCodes21/events{/privacy}", "received_events_url": "https://api.github.com/users/AronCodes21/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
"2022-05-21T03:47:58"
"2022-05-21T19:20:13"
"2022-05-21T19:20:13"
NONE
null
## Describe the L L ## Expected L A clear and concise lmll Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4383/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4382/comments
https://api.github.com/repos/huggingface/datasets/issues/4382/events
https://github.com/huggingface/datasets/issues/4382
1,243,839,783
I_kwDODunzps5KI30n
4,382
First time trying
{ "login": "Aeckard45", "id": 87345839, "node_id": "MDQ6VXNlcjg3MzQ1ODM5", "avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aeckard45", "html_url": "https://github.com/Aeckard45", "followers_url": "https://api.github.com/users/Aeckard45/followers", "following_url": "https://api.github.com/users/Aeckard45/following{/other_user}", "gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions", "organizations_url": "https://api.github.com/users/Aeckard45/orgs", "repos_url": "https://api.github.com/users/Aeckard45/repos", "events_url": "https://api.github.com/users/Aeckard45/events{/privacy}", "received_events_url": "https://api.github.com/users/Aeckard45/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
0
"2022-05-21T02:15:18"
"2022-05-21T19:20:44"
"2022-05-21T19:20:44"
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4382/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4381/comments
https://api.github.com/repos/huggingface/datasets/issues/4381/events
https://github.com/huggingface/datasets/issues/4381
1,243,478,863
I_kwDODunzps5KHftP
4,381
Bug in caching 2 datasets both with the same builder class name
{ "login": "NouamaneTazi", "id": 29777165, "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NouamaneTazi", "html_url": "https://github.com/NouamaneTazi", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-05-20T18:18:03"
"2022-06-02T08:18:37"
"2022-05-25T05:16:15"
MEMBER
null
## Describe the bug The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent` then datasets will not load `mteb/mtop_domain`. If you delete this cache folder and flip the order in which you load the two datasets, you will get the opposite dataset loaded (the difference here is in terms of the label and label_text). ## Steps to reproduce the bug ```python import datasets dataset = datasets.load_dataset("mteb/mtop_intent", "en") print(dataset['train'][0]) dataset = datasets.load_dataset("mteb/mtop_domain", "en") print(dataset['train'][0]) ``` ## Expected results ``` Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'} ``` ## Actual results ``` Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35) 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s] {'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.1 - Platform: macOS-12.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4381/timeline
null
completed
null
null
false
[ "Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`", "Hi @NouamaneTazi, please note that after our fix:\r\n- #4388\r\n\r\nwe do not consider the class name anymore, but the name of the file where the loading builder class is implemented. " ]
https://api.github.com/repos/huggingface/datasets/issues/4380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4380/comments
https://api.github.com/repos/huggingface/datasets/issues/4380/events
https://github.com/huggingface/datasets/pull/4380
1,243,183,054
PR_kwDODunzps44MUz0
4,380
Pin dill
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-20T13:54:19"
"2022-06-13T10:03:52"
"2022-05-20T16:33:04"
MEMBER
null
Hotfix #4379. CC: @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4380/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4380/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4380", "html_url": "https://github.com/huggingface/datasets/pull/4380", "diff_url": "https://github.com/huggingface/datasets/pull/4380.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4380.patch", "merged_at": "2022-05-20T16:33:04" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4379/comments
https://api.github.com/repos/huggingface/datasets/issues/4379/events
https://github.com/huggingface/datasets/issues/4379
1,243,175,854
I_kwDODunzps5KGVuu
4,379
Latest dill release raises exception
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
8
"2022-05-20T13:48:36"
"2022-05-21T15:53:26"
"2022-05-20T17:06:27"
MEMBER
null
## Describe the bug As reported by @sgugger, latest dill release is breaking things with Datasets. ``` ______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________ self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None def get(self, timeout=None): self.wait(timeout) if not self.ready(): raise TimeoutError if self._success: return self._value else: > raise self._value E TypeError: '>' not supported between instances of 'NoneType' and 'float' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4379/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4379/timeline
null
completed
null
null
false
[ "Fixed by:\r\n- #4380 ", "Just an additional insight, the latest dill (either 0.3.5 or 0.3.5.1) also broke the hashing/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns the standard non-dillable error:\r\n```\r\nParameter 'function'=<function <lambda> at 0x7fe7d18c9560> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly....\r\n```", "@albertvillanova ExamplesTests.test_run_speech_recognition_seq2seq is in which file?", "Thanks a lot @gugarosa for the insight: we will incorporate it in our CI as regression testing for future dill releases.", "Hi @anivegesana, that test is in `transformers` library:\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/test_pytorch_examples.py#L449\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py ", "@albertvillanova\n\nI did a deep dive into @gugarosa's problem and found the issue and it might be related to the one @sgugger discovered. In dill 0.3.5(.1), I created a new `save_function` that fixes a bug in dill that prevented the pickling of recursive inner functions. It was a more complete solution to the problem that `dill._dill.stack` tried to solve in the internal API of dill. Since `dill._dill.stack` was no longer needed, I removed it. Since datasets copies the `save_function` directly from the dill API, it stops working with the new dill version since `dill._dill.stack` is no longer present and the `save_function` has been updated with new code.\r\n\r\nhttps://github.com/huggingface/datasets/blob/95193ae61e92aa537d0c65d37a1fd9d2393aae89/src/datasets/utils/py_utils.py#L607-L678\r\n\r\n~If the dill version is below 0.3.5, you should keep this function. If it is after, you would need to update your copy of `save_function` to use the code I introduced, or manually add a `stack` variable to `dill._dill` if it doesn't exist. Fortunately, in any version of Python 3.7+, dictionaries are always in insertion order and dill no longer supports Python 3.6 or older. So, any globals dictionary saved by dill 0.3.5+ will be deterministic given that the version of dill is held constant and this save_function is unnecessary for newer versions of dill.~\r\n\r\nAh. I see what is happening. I guess a different copy of the function code is needed that sorts the global variables by name.\r\n\r\n```py\r\nif dill.__version__.split('.') < ['0', '3', '5']:\r\n # current save_function code inside here\r\nelse:\r\n # new save_function code inside here with the following line inserted after creating the globals\r\n globs = {k: globs[k] for k in sorted(globs.keys())} \r\n```\r\n\r\nWill look into the test case @sgugger pointed out after that and verify if this is causing the problem.\r\n\r\nI am actually looking into rewriting the global variables code in uqfoundation/dill#466 and will keep this in mind and will try to create an easy way to modify the global variables in dill 0.3.6 (for example, sort them by key like datasets does).", "Thanks a lot for your investigation @anivegesana.\r\n\r\nYes, we copied-pasted the old `save_function` function from `dill`, just adding a line to make deterministic the order of global variables `globs`. 
\r\n\r\nHowever, this function has changed a lot from version 0.3.5, after your PR (thank you for the fix in recursiveness, indeed):\r\n- uqfoundation/dill#443\r\n\r\nWe have to address this change.\r\n\r\nIf finally your PR to sort global variables is merged into dill 0.3.6, that will make our life easier, as the tweak will no longer be necessary. ;)\r\n\r\nI have included a regression test so that we are sure future releases of dill do not break `datasets`:\r\n- #4385 ", "I should note that because Python 3.6 and older are now deprecated and Python 3.7 has insertion order dictionaries, the globals in dill will have a deterministic order, just not sorted. I would still keep it sorted like you have it to help with stability (for example, if someone reorders variables in a file, then sorting the globals would not invalidate the cache.)\n\nIt seems that the order is not quite deterministic in IPython. Huggingface datasets seems to do well in Jupyter regardless, so it is not a good idea to remove the sorting. uqfoundation/dill#19" ]
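A minimal sketch of the version gate discussed in the comments above, assuming `datasets` keeps its own copy of dill's `save_function` and that `packaging` is available; the helper name `sort_globals` is hypothetical:

```python
import dill
from packaging import version

def sort_globals(globs: dict) -> dict:
    # Sorting globals by name keeps the pickled bytes deterministic,
    # which the caching/fingerprinting relies on, even if variables
    # are reordered in the source file.
    return {k: globs[k] for k in sorted(globs)}

# dill < 0.3.5 still exposes `dill._dill.stack`, so the old copied
# save_function works; newer versions need the updated code path.
USE_LEGACY_SAVE_FUNCTION = version.parse(dill.__version__) < version.parse("0.3.5")
```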
https://api.github.com/repos/huggingface/datasets/issues/4378
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4378/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4378/comments
https://api.github.com/repos/huggingface/datasets/issues/4378/events
https://github.com/huggingface/datasets/pull/4378
1,242,935,373
PR_kwDODunzps44Lf2R
4,378
Tidy up license metadata for google_wellformed_query, newspop, sick
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-20T10:16:12"
"2022-05-24T13:50:23"
"2022-05-24T13:10:27"
CONTRIBUTOR
null
Amend three dataset licenses to fit the naming convention (lower case; CC licenses include the sub-version number). I think that's it - everything else on datasets looks great & super-searchable now!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4378/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4378", "html_url": "https://github.com/huggingface/datasets/pull/4378", "diff_url": "https://github.com/huggingface/datasets/pull/4378.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4378.patch", "merged_at": "2022-05-24T13:10:27" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "& thank you!" ]
https://api.github.com/repos/huggingface/datasets/issues/4377
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4377/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4377/comments
https://api.github.com/repos/huggingface/datasets/issues/4377/events
https://github.com/huggingface/datasets/pull/4377
1,242,746,186
PR_kwDODunzps44K4OY
4,377
Fix checksum and bug in irc_disentangle dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-20T07:29:28"
"2022-05-20T09:34:36"
"2022-05-20T09:26:32"
MEMBER
null
There was a bug in the filepath segment: - wrong: `jkkummerfeld-irc-disentanglement-fd379e9` - right: `jkkummerfeld-irc-disentanglement-35f0a40` There was also a bug in the checksum of the downloaded file. This PR fixes both issues. Partially fixes #4376.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4377/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4377", "html_url": "https://github.com/huggingface/datasets/pull/4377", "diff_url": "https://github.com/huggingface/datasets/pull/4377.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4377.patch", "merged_at": "2022-05-20T09:26:32" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4376/comments
https://api.github.com/repos/huggingface/datasets/issues/4376/events
https://github.com/huggingface/datasets/issues/4376
1,242,218,144
I_kwDODunzps5KCr6g
4,376
irc_disentagle viewer error
{ "login": "labouz", "id": 25671683, "node_id": "MDQ6VXNlcjI1NjcxNjgz", "avatar_url": "https://avatars.githubusercontent.com/u/25671683?v=4", "gravatar_id": "", "url": "https://api.github.com/users/labouz", "html_url": "https://github.com/labouz", "followers_url": "https://api.github.com/users/labouz/followers", "following_url": "https://api.github.com/users/labouz/following{/other_user}", "gists_url": "https://api.github.com/users/labouz/gists{/gist_id}", "starred_url": "https://api.github.com/users/labouz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/labouz/subscriptions", "organizations_url": "https://api.github.com/users/labouz/orgs", "repos_url": "https://api.github.com/users/labouz/repos", "events_url": "https://api.github.com/users/labouz/events{/privacy}", "received_events_url": "https://api.github.com/users/labouz/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
5
"2022-05-19T19:15:16"
"2023-01-12T16:56:13"
"2022-06-02T08:20:00"
NONE
null
The dataset viewer shows this message for the "ubuntu" config's "train", "test", and "validation" splits: ``` Server error Status code: 400 Exception: ValueError Message: Cannot seek streaming HTTP file ``` It appears to give the same message for the "channel_two" data as well. I also get a checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4376/timeline
null
completed
null
null
false
[ "DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\nI attempted to use the `ignore_verifications' as such:\r\n\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗", "Thanks for reporting, @labouz. I'm addressing it. ", "The issue with checksum and empty dataset has been fixed by:\r\n- #4377\r\n\r\nTo load the dataset, you should force the re-generation of the dataset from the downloaded file by passing `download_mode=\"reuse_cache_if_exists\"` to `load_dataset`.\r\n\r\nIn relation with the issue with the dataset viewer, first the dataset should be refactored to support streaming.", "parfait!\r\nit works now, thank you 🙏 ", "Hi there, \r\nI see this issue is closed, but I am wondering if there is any chance the source files have been moved since this fix? 
I am stumbling into the same NonMatchingChecksumError noted by labouz's second post once 118MB of data has been downloaded, and have tried the solutions noted in the various fix checksum posts linked here and in other posts regarding passing in \"reuse_cache_if_exists\" to download_mode. Any suggestions? Thank you!\r\n\r\n" ]
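Based on the fix and the comment above, reloading should work roughly like this (a sketch; the cached file is re-processed rather than re-downloaded):

```python
from datasets import load_dataset

# Force re-generation from the already-downloaded archive, picking up
# the corrected filepath segment and checksum from the fix.
ds = load_dataset("irc_disentangle", download_mode="reuse_cache_if_exists")
print(ds["train"].num_rows)  # should no longer be 0
```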
https://api.github.com/repos/huggingface/datasets/issues/4375
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4375/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4375/comments
https://api.github.com/repos/huggingface/datasets/issues/4375/events
https://github.com/huggingface/datasets/pull/4375
1,241,921,147
PR_kwDODunzps44IMCS
4,375
Support DataLoader with num_workers > 0 in streaming mode
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
7
"2022-05-19T15:00:31"
"2022-07-04T16:05:14"
"2022-06-10T20:47:27"
MEMBER
null
### Issue It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers: - the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950 - streaming extension is failing: https://github.com/huggingface/datasets/issues/3951 - `fsspec` doesn't work out of the box in subprocesses ### Solution in this PR I fixed these to enable passing an `IterableDataset` to a `torch.utils.data.DataLoader` with `num_workers > 0`. I also had to shard the `IterableDataset` to give each worker a shard, otherwise data would be duplicated. This is implemented in `TorchIterableDataset.__iter__` and uses the new `IterableDataset._iter_shard(shard_idx)` method. I also had to make a few changes to the patching that enables streaming in dataset scripts: - the patches are now always applied - not just for streaming mode. They're applied when a builder is instantiated - I improved it to also check for renamed modules or attributes (ex: pandas vs pd) - I grouped all the patches of pathlib.Path into a class `xPath`, so that `Path` outside of dataset scripts stays unchanged - otherwise I didn't change the content of the extended Path methods for streaming - I fixed a bug with the `pd.read_csv` patch: opening the file in "rb" mode was missing and caused some datasets to not work in streaming mode, and compression inference was also missing ### A few details regarding `fsspec` in multiprocessing From https://github.com/fsspec/filesystem_spec/pull/963#issuecomment-1131709948 : > Non-async instances might be safe in the forked child, if they hold no open files/sockets etc.; I'm not sure any implementations pass this test! > If any async instance has been created, the newly forked processes must: > 1. discard references to locks, threads and event loops and make new ones > 2. not use any async fsspec instances from the parent process > 3. clear all class instance caches Therefore, in a DataLoader's worker, I clear the reference to the loop and thread (1). We should be fine for 2 and 3 already, since we don't use fsspec class instances from the parent process. Fix https://github.com/huggingface/datasets/issues/3950 Fix https://github.com/huggingface/datasets/issues/3951 TODO: - [x] fix tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4375/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4375", "html_url": "https://github.com/huggingface/datasets/pull/4375", "diff_url": "https://github.com/huggingface/datasets/pull/4375.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4375.patch", "merged_at": "2022-06-10T20:47:26" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Alright this is finally ready for review ! It's quite long I'm sorry, but it's not easy to disentangle everything ^^'\r\n\r\nThe main additions are in\r\n- src/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py\r\n- src/datasets/iterable_dataset.py\r\n- src/datasets/utils/patching.py", "Added some comments and an error when lists have different lengths for sharding :)", "Let's resolve the merge conflict and the CI error (if it's related to the changes), and I can review the PR again.", "Feel free to review again :) The CI fail is unrelated to this PR and will be fixed by https://github.com/huggingface/datasets/pull/4472 (the hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos)", "CI failures are unrelated to this PR - merging :)\r\n\r\n(CI fails are a mix of pip install fails and Hub fails)", "@lhoestq you're our hero :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4374/comments
https://api.github.com/repos/huggingface/datasets/issues/4374/events
https://github.com/huggingface/datasets/issues/4374
1,241,860,535
I_kwDODunzps5KBUm3
4,374
extremely slow processing when using a custom dataset
{ "login": "StephennFernandes", "id": 32235549, "node_id": "MDQ6VXNlcjMyMjM1NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/StephennFernandes", "html_url": "https://github.com/StephennFernandes", "followers_url": "https://api.github.com/users/StephennFernandes/followers", "following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}", "gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}", "starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions", "organizations_url": "https://api.github.com/users/StephennFernandes/orgs", "repos_url": "https://api.github.com/users/StephennFernandes/repos", "events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}", "received_events_url": "https://api.github.com/users/StephennFernandes/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
2
"2022-05-19T14:18:05"
"2023-07-25T15:07:17"
"2023-07-25T15:07:16"
NONE
null
## Processing a custom dataset loaded from a .txt file is extremely slow compared to a dataset of similar volume from the hub I have a large 22 GB .txt file which I load into an HF dataset: `lang_dataset = datasets.load_dataset("text", data_files="hi.txt")` Further, I use a pre-processing function to clean the dataset: `lang_dataset["train"] = lang_dataset["train"].map(remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64)` This processing takes an astronomical amount of time while hogging all the RAM. A similar dataset of the same size available on the Hugging Face Hub, which runs the same processing function on the same amount of data, works completely fine: `lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)` The hours predicted to preprocess are as follows: huggingface hub dataset: 6.5 hrs custom loaded dataset: 7000 hrs Note: both datasets are actually almost the same, just provided by different sources with +/- some samples; only one is hosted on the HF Hub and the other is downloaded in text format. ## Steps to reproduce the bug ``` import datasets from fastcore.utils import listify import re import gc def remove_non_indic_sentences(example): tmp_ls = [] eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*' for e in listify(example['text']): matches = re.findall(eng_regex, e) for match in (str(match).strip() for match in matches if match not in [""," ", "  ", ",", " ,", ", ", " , "]): if len(list(match.split(" "))) > 2: e = re.sub(match," ",e,count=1) tmp_ls.append(e) gc.collect() example['clean_text'] = tmp_ls return example lang_dataset = datasets.load_dataset("text", data_files="hi.txt") lang_dataset["train"] = lang_dataset["train"].map(remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64) ## the same thing works much faster when loading a similar dataset from the hub lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True) lang_dataset["train"] = lang_dataset["train"].map(remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names, batch_size=64) ``` ## Actual results A similar dataset of the same size available on the Hugging Face Hub, which runs the same processing function on the same amount of data, works completely fine: `lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)` **The hours predicted to preprocess are as follows:** huggingface hub dataset: 6.5 hrs custom loaded dataset: 7000 hrs **I even tried the following:** - sharding the large 22 GB text file into smaller files and loading - saving the file to disk and then loading - using a smaller num_proc - using a smaller batch size - processing without batches, i.e. without `batched=True` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.2.dev0 - Platform: Ubuntu 20.04 LTS - Python version: 3.9.7 - PyArrow version: 8.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4374/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4374/timeline
null
completed
null
null
false
[ "Hi !\r\n\r\nMy guess is that some examples in your dataset are bigger than your RAM, and therefore loading them in RAM to pass them to `remove_non_indic_sentences` takes forever because it might use SWAP memory.\r\n\r\nMaybe several examples in your dataset are grouped together, can you check `len(lang_dataset[\"train\"])` and `lang_dataset[\"train\"].data.nbytes` of both datasets please ? It can also be helpful to check the distribution of lengths of each examples in your dataset.", "Closing due to inactivity" ]
https://api.github.com/repos/huggingface/datasets/issues/4373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4373/comments
https://api.github.com/repos/huggingface/datasets/issues/4373/events
https://github.com/huggingface/datasets/pull/4373
1,241,769,310
PR_kwDODunzps44HsaY
4,373
Remove links in docs to old dataset viewer
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-19T13:24:39"
"2022-05-20T15:24:28"
"2022-05-20T15:16:05"
CONTRIBUTOR
null
Remove the links in the docs to the no longer maintained dataset viewer.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4373/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4373", "html_url": "https://github.com/huggingface/datasets/pull/4373", "diff_url": "https://github.com/huggingface/datasets/pull/4373.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4373.patch", "merged_at": "2022-05-20T15:16:05" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4372
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4372/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4372/comments
https://api.github.com/repos/huggingface/datasets/issues/4372/events
https://github.com/huggingface/datasets/pull/4372
1,241,703,826
PR_kwDODunzps44HeYC
4,372
Check if dataset features match before push in `DatasetDict.push_to_hub`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-19T12:32:30"
"2022-05-20T15:23:36"
"2022-05-20T15:15:30"
CONTRIBUTOR
null
Fix #4211
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4372/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4372/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4372", "html_url": "https://github.com/huggingface/datasets/pull/4372", "diff_url": "https://github.com/huggingface/datasets/pull/4372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4372.patch", "merged_at": "2022-05-20T15:15:30" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4371
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4371/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4371/comments
https://api.github.com/repos/huggingface/datasets/issues/4371/events
https://github.com/huggingface/datasets/pull/4371
1,241,500,906
PR_kwDODunzps44GzSZ
4,371
Add missing language tags for udhr dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-19T09:34:10"
"2022-06-08T12:03:24"
"2022-05-20T09:43:10"
MEMBER
null
Related to #4362.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4371/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4371/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4371", "html_url": "https://github.com/huggingface/datasets/pull/4371", "diff_url": "https://github.com/huggingface/datasets/pull/4371.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4371.patch", "merged_at": "2022-05-20T09:43:10" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4369/comments
https://api.github.com/repos/huggingface/datasets/issues/4369/events
https://github.com/huggingface/datasets/pull/4369
1,240,245,642
PR_kwDODunzps44CpCe
4,369
Add redirect to dataset script in the repo structure page
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-18T17:05:33"
"2022-05-19T08:19:01"
"2022-05-19T08:10:51"
MEMBER
null
Following https://github.com/huggingface/hub-docs/pull/146, I added a redirect to the dataset scripts documentation in the repository structure page.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4369/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4369", "html_url": "https://github.com/huggingface/datasets/pull/4369", "diff_url": "https://github.com/huggingface/datasets/pull/4369.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4369.patch", "merged_at": "2022-05-19T08:10:51" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4368/comments
https://api.github.com/repos/huggingface/datasets/issues/4368/events
https://github.com/huggingface/datasets/pull/4368
1,240,064,860
PR_kwDODunzps44CDFk
4,368
Add long answer candidates to natural questions dataset
{ "login": "seirasto", "id": 4257308, "node_id": "MDQ6VXNlcjQyNTczMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/4257308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/seirasto", "html_url": "https://github.com/seirasto", "followers_url": "https://api.github.com/users/seirasto/followers", "following_url": "https://api.github.com/users/seirasto/following{/other_user}", "gists_url": "https://api.github.com/users/seirasto/gists{/gist_id}", "starred_url": "https://api.github.com/users/seirasto/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seirasto/subscriptions", "organizations_url": "https://api.github.com/users/seirasto/orgs", "repos_url": "https://api.github.com/users/seirasto/repos", "events_url": "https://api.github.com/users/seirasto/events{/privacy}", "received_events_url": "https://api.github.com/users/seirasto/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
18
"2022-05-18T14:35:42"
"2022-07-26T20:30:41"
"2022-07-26T20:18:42"
CONTRIBUTOR
null
This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does not disturb the rest of the format. @lhoestq @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4368/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4368", "html_url": "https://github.com/huggingface/datasets/pull/4368", "diff_url": "https://github.com/huggingface/datasets/pull/4368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4368.patch", "merged_at": "2022-07-26T20:18:42" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Once we have added `long_answer_candidates` maybe it would be worth to also add the missing `candidate_index` (inside `long_answer`). What do you think, @seirasto ?", "Also note the \"Data Fields\" section in the README is missing the `long_answer` field.\r\n\r\nMoreover, there is no instance example in \"Data Instances\" section.", "We could either make these fixes in this PR or in a subsequent PR.", "@albertvillanova I've added the missing fields and updated the README to include a data instance and some other things. ", "Great! I've made the updates to align the README. Please let me know if I missed anything.", "As there were many minor little fixes, I thought it would be easier to fix them directly.", "I think the loading script is OK now. If it is also validated by another datasets maintainer, I could run the generation of the pre-processed data and then merge this PR into master (once all the tests are green).\r\n\r\nCC: @lhoestq ", "It looks good to me, thanks @seirasto !", "I have merged the master branch, so that we include all the fixes on Apache Beam + Google Dataflow.", "Pre-processing is running!\r\n\r\nAlready finished for \"dev\" config:\r\n```python\r\nIn [2]: ds = load_dataset(\"datasets/natural_questions\", \"dev\")\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n validation: Dataset({\r\n features: ['id', 'document', 'question', 'long_answer_candidates', 'annotations'],\r\n num_rows: 7830\r\n })\r\n})\r\n```", "There is an issue while running the preprocessing for the \"default\" (train+dev) config. Train data files are larger than than dev ones and workers run out of memory.\r\n\r\nI'm opening a separate issue to handle this problem: #4525", "@seirasto is proposing uploading their preprocessed data files to our Datasets bucket.\r\n\r\nI think @lhoestq can give a more informed answer about authentication requirements.", "Now that the data fiels are uploaded, can you merge the `main` branch into yours to re-trigger the CI @seirasto please ? :) Then I think we can merge if it's good for you @albertvillanova ", "Merge is done! I think someone needs to approve the CI to run :) ", "Can you run `make style` to fix the code formatting required by the CI please ?", "Thanks @albertvillanova! I've committed all your suggestions.", "The CI is green. I'm merging this PR." ]
https://api.github.com/repos/huggingface/datasets/issues/4367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4367/comments
https://api.github.com/repos/huggingface/datasets/issues/4367/events
https://github.com/huggingface/datasets/pull/4367
1,240,011,602
PR_kwDODunzps44B340
4,367
Remove config names as yaml keys
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2022-05-18T13:59:24"
"2022-05-20T09:35:26"
"2022-05-20T09:27:19"
MEMBER
null
Many datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards, since we can't have dots in YAML keys. To fix this, I removed the per-config-name tag separation completely and now have a single flat YAML for all configurations. Dataset search doesn't use this info anyway. I removed all the config names used as YAML keys and moved them under a new `config:` key. This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946). Also, removing the dots in the YAML keys would allow us to do as in https://github.com/huggingface/datasets/pull/4302, which removes a hack that replaces all the dots by underscores in the YAML tags. I also added a CI test that checks all the YAML tags to make sure that: - they can be parsed using a YAML parser - they contain only valid YAML tags like languages or task_ids
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4367/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4367", "html_url": "https://github.com/huggingface/datasets/pull/4367", "diff_url": "https://github.com/huggingface/datasets/pull/4367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4367.patch", "merged_at": "2022-05-20T09:27:19" }
true
[ "I included the change from https://github.com/huggingface/datasets/pull/4302 directly in this PR, this way the datasets will be updated right away in the CI (the CI is only triggered when a dataset card is changed)", "_The documentation is not available anymore as the PR was closed or merged._", "Alright it's ready now :)\r\n\r\nHere is an example for the `ade_corpus_v2` dataset card. Notice the new `configs` key:\r\n\r\nhttps://github.com/huggingface/datasets/blob/76d9a141740a03f6836feb251f6059894b8d8046/datasets/ade_corpus_v2/README.md#L1-L78\r\n\r\nCI failures are only related to dataset cards missing some content." ]
https://api.github.com/repos/huggingface/datasets/issues/4366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4366/comments
https://api.github.com/repos/huggingface/datasets/issues/4366/events
https://github.com/huggingface/datasets/issues/4366
1,239,534,165
I_kwDODunzps5J4cpV
4,366
TypeError: __init__() missing 1 required positional argument: 'scheme'
{ "login": "jffgitt", "id": 99231535, "node_id": "U_kgDOBeonLw", "avatar_url": "https://avatars.githubusercontent.com/u/99231535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jffgitt", "html_url": "https://github.com/jffgitt", "followers_url": "https://api.github.com/users/jffgitt/followers", "following_url": "https://api.github.com/users/jffgitt/following{/other_user}", "gists_url": "https://api.github.com/users/jffgitt/gists{/gist_id}", "starred_url": "https://api.github.com/users/jffgitt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jffgitt/subscriptions", "organizations_url": "https://api.github.com/users/jffgitt/orgs", "repos_url": "https://api.github.com/users/jffgitt/repos", "events_url": "https://api.github.com/users/jffgitt/events{/privacy}", "received_events_url": "https://api.github.com/users/jffgitt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
closed
false
null
[]
null
1
"2022-05-18T07:17:29"
"2022-05-18T16:36:22"
"2022-05-18T16:36:21"
NONE
null
"name" : "node-1", "cluster_name" : "elasticsearch", "cluster_uuid" : "", "version" : { "number" : "7.5.0", "build_flavor" : "default", "build_type" : "tar", "build_hash" : "", "build_date" : "2019-11-26T01:06:52.518245Z", "build_snapshot" : false, "lucene_version" : "8.3.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" when I run the order: nohup python3 custom_service.pyc > service.log 2>&1& the log: nohup: 忽略输入 Traceback (most recent call last): File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module> File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize File "custom_impl.py", line 286, in custom_setup File "custom_impl.py", line 127, in create_es_index File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__ ssl_show_warn=ssl_show_warn, File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs node_configs = hosts_to_node_configs(hosts) File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs node_configs.append(host_mapping_to_node_config(host)) File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config return NodeConfig(**options) # type: ignore TypeError: __init__() missing 1 required positional argument: 'scheme' [1]+ 退出 1 nohup python3 custom_service.pyc > service.log 2>&1 custom_service_pyc can't running
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4366/timeline
null
completed
null
null
false
[ "Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py" ]
https://api.github.com/repos/huggingface/datasets/issues/4365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4365/comments
https://api.github.com/repos/huggingface/datasets/issues/4365/events
https://github.com/huggingface/datasets/pull/4365
1,239,109,943
PR_kwDODunzps43-4fC
4,365
Remove dots in config names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-17T20:12:57"
"2023-09-24T10:02:53"
"2022-05-18T13:59:41"
MEMBER
null
20+ datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards, since we can't have dots in YAML keys. This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946). Also, removing the dots in the config names would allow us to merge https://github.com/huggingface/datasets/pull/4302, which removes a hack that replaces all the dots by underscores in the YAML tags. I also added a CI test that checks all the YAML tags to make sure that: - they can be parsed using a YAML parser - they contain only valid YAML tags like `languages` or `task_ids` - they contain valid config names (no invalid characters `<>:/\|?*.`)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4365/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4365", "html_url": "https://github.com/huggingface/datasets/pull/4365", "diff_url": "https://github.com/huggingface/datasets/pull/4365.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4365.patch", "merged_at": null }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Closing in favor of https://github.com/huggingface/datasets/pull/4367" ]
https://api.github.com/repos/huggingface/datasets/issues/4364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4364/comments
https://api.github.com/repos/huggingface/datasets/issues/4364/events
https://github.com/huggingface/datasets/pull/4364
1,238,976,106
PR_kwDODunzps43-bmq
4,364
Support complex feature types as `features` in packaged loaders
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-17T17:53:23"
"2022-05-31T12:26:23"
"2022-05-31T12:16:32"
CONTRIBUTOR
null
This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range. Fix https://github.com/huggingface/datasets/issues/4210 This PR is also a solution for these (popular) discussions: https://discuss.huggingface.co/t/converting-string-label-to-int/2816 and https://discuss.huggingface.co/t/class-labels-for-custom-datasets/15130/2 TODO: * [x] tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4364/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4364/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4364", "html_url": "https://github.com/huggingface/datasets/pull/4364", "diff_url": "https://github.com/huggingface/datasets/pull/4364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4364.patch", "merged_at": "2022-05-31T12:16:31" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4363/comments
https://api.github.com/repos/huggingface/datasets/issues/4363/events
https://github.com/huggingface/datasets/issues/4363
1,238,897,652
I_kwDODunzps5J2BP0
4,363
The dataset preview is not available for this split.
{ "login": "roholazandie", "id": 7584674, "node_id": "MDQ6VXNlcjc1ODQ2NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roholazandie", "html_url": "https://github.com/roholazandie", "followers_url": "https://api.github.com/users/roholazandie/followers", "following_url": "https://api.github.com/users/roholazandie/following{/other_user}", "gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}", "starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions", "organizations_url": "https://api.github.com/users/roholazandie/orgs", "repos_url": "https://api.github.com/users/roholazandie/repos", "events_url": "https://api.github.com/users/roholazandie/events{/privacy}", "received_events_url": "https://api.github.com/users/roholazandie/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
7
"2022-05-17T16:34:43"
"2022-06-08T12:32:10"
"2022-06-08T09:26:56"
NONE
null
I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read the companion paper, accepted at Interspeech 2021, [here](https://arxiv.org/abs/2106.08468). The dataset works fine, but I can't make the dataset preview work. It gives me the following error that I don't understand. Can you help me begin debugging it? ``` Status code: 400 Exception: AttributeError Message: 'NoneType' object has no attribute 'split' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4363/timeline
null
completed
null
null
false
[ "Hi! A dataset has to be streamable to work with the viewer. I did a quick test, and yours is, so this might be a bug in the viewer. cc @severo \r\n", "Looking at it. The message is now:\r\n\r\n```\r\nMessage: cannot cache function '__shear_dense': no locator available for file '/src/services/worker/.venv/lib/python3.9/site-packages/librosa/util/utils.py'\r\n```\r\n\r\nso possibly it's related to the libraries versions?\r\n", "Maybe this SO thread can help: https://stackoverflow.com/questions/59290386/runtimeerror-at-cannot-cache-function-shear-dense-no-locator-available-fo", "Same error for https://huggingface.co/datasets/LIUM/tedlium/viewer/release1/test. cc @sanchit-gandhi . I'm on it", "Fixed in the datasets viewer, by setting the `NUMBA_CACHE_DIR` env var to a writable directory.", "https://huggingface.co/datasets/Roh/ryanspeech/viewer/male/train\r\n\r\n<img width=\"1538\" alt=\"Capture d’écran 2022-06-08 à 11 30 08\" src=\"https://user-images.githubusercontent.com/1676121/172583285-4cd49a0f-5715-423b-95dd-5f6ace3b2416.png\">\r\n", "https://huggingface.co/datasets/LIUM/tedlium/viewer/\r\n\r\n<img width=\"1538\" alt=\"Capture d’écran 2022-06-08 à 14 31 52\" src=\"https://user-images.githubusercontent.com/1676121/172616897-fbcb7df7-0308-4d09-a17d-48826bc91374.png\">\r\n" ]
https://api.github.com/repos/huggingface/datasets/issues/4362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4362/comments
https://api.github.com/repos/huggingface/datasets/issues/4362/events
https://github.com/huggingface/datasets/pull/4362
1,238,680,112
PR_kwDODunzps439bkf
4,362
Update dataset_infos for UDHN/udhr dataset
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
5
"2022-05-17T13:52:59"
"2022-06-08T19:20:11"
"2022-06-08T19:11:21"
CONTRIBUTOR
null
Checksum update to `udhr` for issue #4361
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4362/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4362", "html_url": "https://github.com/huggingface/datasets/pull/4362", "diff_url": "https://github.com/huggingface/datasets/pull/4362.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4362.patch", "merged_at": "2022-06-08T19:11:20" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for contributing @leondz.\r\n\r\nThe checksums of the files have changed because more languages have been added:\r\n- the new language codes need to be added to the dataset card (README file)\r\n- I think the dataset version number should also be increased, so that users who had previously cached it, get a new dataset download (with the additional languages)", "Yep! All done (also fixed the language tags in the README which were iso639-3 instead of the expected bcp47)", "I guess the language code CI failure is due to languages.json being a subset of bcp47 (see issue #4304), happy to contribute a solution here, e.g. autogeneration of the lang list from the relevant isos and the ietf bcp47 subtag register or full code for validation", "> Thanks again for your contribution, @leondz.\r\n> \r\n> Yes, I think it is OK to set version 1.0.0 (as previous was 0.0.0).\r\n> \r\n> One of the CI failures is related to dummy data: once you have updated the dataset version, the dummy_data ZIP file should be moved from \"dummy/0.0.0/dummy_data.zip\" to \"dummy/1.0.0/dummy_data.zip\".\r\n\r\nOh, thanks, I missed that one\r\n\r\n\r\n> Other CI failure is related to missing languages in our resources file. This has been addressed in this PR:\r\n> \r\n> * #4371\r\n> \r\n> You should merge master branch into your feature branch to incorporate that fix.\r\n\r\nYeah, I saw this :) I already have the merge, thanks. I'm talking about the longer-term picture: every time another language code comes up (e.g. da-bornholm or es-VE), the json will need updating, because the current approach is non-exhaustive manual whitelisting instead of relying on the established bcp standard." ]
https://api.github.com/repos/huggingface/datasets/issues/4361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4361/comments
https://api.github.com/repos/huggingface/datasets/issues/4361/events
https://github.com/huggingface/datasets/issues/4361
1,238,671,931
I_kwDODunzps5J1KI7
4,361
`udhr` doesn't load, dataset checksum mismatch
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
"2022-05-17T13:47:09"
"2022-06-08T19:11:21"
"2022-06-08T19:11:21"
CONTRIBUTOR
null
## Describe the bug Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed: size + checksum in datasets repo: ``` (hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json { "https://unicode.org/udhr/assemblies/udhr_xml.zip": { "num_bytes": 2273633, "checksum": "0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee" }, "https://unicode.org/udhr/assemblies/udhr_txt.zip": { "num_bytes": 2107471, "checksum": "087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5" } } ``` size + checksum regenerated from current source files: ``` (hfdev) leon@blade:~/datasets/datasets/udhr$ rm dataset_infos.json (hfdev) leon@blade:~/datasets/datasets/udhr$ datasets-cli test --save_infos udhr.py Using custom data configuration default Testing builder 'default' (1/1) Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66... Dataset udhn downloaded and prepared to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data. 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 686.69it/s] Dataset Infos file saved at dataset_infos.json Test successful. (hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json { "https://unicode.org/udhr/assemblies/udhr_xml.zip": { "num_bytes": 2389690, "checksum": "a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438" }, "https://unicode.org/udhr/assemblies/udhr_txt.zip": { "num_bytes": 2215441, "checksum": "cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe" } } (hfdev) leon@blade:~/datasets/datasets/udhr$ ``` --- is unicode.org a sustainable hosting solution for this dataset? ## Steps to reproduce the bug ```python from datasets import load_dataset udhr = load_dataset("udhr") ``` ## Expected results That a Dataset object containing the UDHR data will be returned. ## Actual results ``` >>> d = load_dataset('udhr') Using custom data configuration default Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/leon/.local/lib/python3.9/site-packages/datasets/load.py", line 1731, in load_dataset builder_instance.download_and_prepare( File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 613, in download_and_prepare self._download_and_prepare( File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 1117, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 684, in _download_and_prepare verify_checksums( File "/home/leon/.local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://unicode.org/udhr/assemblies/udhr_xml.zip', 'https://unicode.org/udhr/assemblies/udhr_txt.zip'] >>> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.1 commit/4110fb6034f79c5fb470cf1043ff52180e9c63b7 - Platform: Linux Ubuntu 20.04 - Python version: 3.9.12 - PyArrow version: 8.0.0
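For reference, the mismatch reported above can also be verified without `datasets-cli`, by downloading the two source files and recomputing the byte size and SHA-256 digest that `dataset_infos.json` stores. A small standard-library sketch (the URLs come from the issue itself):

```python
import hashlib
import urllib.request

URLS = [
    "https://unicode.org/udhr/assemblies/udhr_xml.zip",
    "https://unicode.org/udhr/assemblies/udhr_txt.zip",
]

for url in URLS:
    data = urllib.request.urlopen(url).read()
    # `datasets` records num_bytes and a SHA-256 checksum per source file
    print(url, len(data), hashlib.sha256(data).hexdigest())
```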
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4361/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4360/comments
https://api.github.com/repos/huggingface/datasets/issues/4360/events
https://github.com/huggingface/datasets/pull/4360
1,237,239,096
PR_kwDODunzps434izs
4,360
Fix example in opus_ubuntu, Add license info
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-16T14:22:28"
"2022-06-01T13:06:07"
"2022-06-01T12:57:09"
CONTRIBUTOR
null
This PR * fixes a typo in the example for the `opus_ubuntu` dataset, where it's mistakenly referred to as `ubuntu` * adds the declared license info for this corpus's origin * adds an example instance * updates the data origin type
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4360/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4360", "html_url": "https://github.com/huggingface/datasets/pull/4360", "diff_url": "https://github.com/huggingface/datasets/pull/4360.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4360.patch", "merged_at": "2022-06-01T12:57:09" }
true
[ "CI seems to fail due to languages incorrectly being flagged as invalid, I guess that's related to the currently-broken bcp47 validation (see #4304)", "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4359/comments
https://api.github.com/repos/huggingface/datasets/issues/4359/events
https://github.com/huggingface/datasets/pull/4359
1,237,149,578
PR_kwDODunzps434Pb6
4,359
Fix Version equality
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-16T13:19:26"
"2022-05-24T16:25:37"
"2022-05-24T16:17:14"
MEMBER
null
I think `Version` equality should align with other similar cases in Python, like: ```python In [1]: "a" == 5, "a" == None Out[1]: (False, False) In [2]: "a" != 5, "a" != None Out[2]: (True, True) ``` With this PR, we will get: ```python In [3]: Version("1.0.0") == 5, Version("1.0.0") == None Out[3]: (False, False) In [4]: Version("1.0.0") != 5, Version("1.0.0") != None Out[4]: (True, True) ``` Note I found this issue when `doc-builder` tried to compare: ```python if param.default != inspect._empty ``` where `param.default` is an instance of `Version`.
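For readers, the behaviour this PR aligns with can be captured by a toy class whose `__eq__` degrades gracefully on foreign types instead of raising. This is only an illustrative sketch, not the actual `datasets.Version` implementation:

```python
class Version:
    """Toy semantic version used to illustrate lenient equality."""

    def __init__(self, version_str: str):
        self.major, self.minor, self.patch = (int(p) for p in version_str.split("."))

    def _key(self):
        return (self.major, self.minor, self.patch)

    def __eq__(self, other):
        if not isinstance(other, Version):
            return False  # mirror `"a" == 5` -> False, rather than raising
        return self._key() == other._key()
    # Python derives `!=` by negating `__eq__`, so `Version("1.0.0") != None` is True


assert (Version("1.0.0") == 5) is False
assert Version("1.0.0") != None  # noqa: E711
assert Version("1.0.0") == Version("1.0.0")
```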
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4359/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4359", "html_url": "https://github.com/huggingface/datasets/pull/4359", "diff_url": "https://github.com/huggingface/datasets/pull/4359.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4359.patch", "merged_at": "2022-05-24T16:17:14" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4358/comments
https://api.github.com/repos/huggingface/datasets/issues/4358/events
https://github.com/huggingface/datasets/issues/4358
1,237,147,692
I_kwDODunzps5JvWAs
4,358
Missing dataset tags and sections in some dataset cards
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
2
"2022-05-16T13:18:16"
"2022-05-30T15:36:52"
null
NONE
null
Summary of CircleCI errors for different dataset metadata: - **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **CoNLLpp**: Expected some content in section `Citation Information` but it is empty. - **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags - **CoNLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids' - **Hate_speech18**: Expected some content in section `Data Instances` but it is empty; expected some content in section `Data Splits` but it is empty - **Jigsaw_toxicity_pred**: Expected some content in section `Citation Information` but it is empty. - **LIAR**: `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty. - **MSRA NER**: `Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty. - **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **sms_spam**: `Data Instances` and `Data Splits` are empty. - **Quora**: Expected some content in section `Citation Information` but it is empty; missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids' - **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4358/timeline
null
null
null
null
false
[ "@lhoestq I can take this issue. Please can you point out to me where I can find the other positional arguments?", "Hi @RohitRathore1 :)\r\n\r\nYou can find all the YAML tags in the tagging app here: https://hf.co/spaces/huggingface/datasets-tagging). They're all passed as arguments to a DatasetMetadata object used to validate the tags." ]
https://api.github.com/repos/huggingface/datasets/issues/4357
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4357/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4357/comments
https://api.github.com/repos/huggingface/datasets/issues/4357/events
https://github.com/huggingface/datasets/pull/4357
1,237,037,069
PR_kwDODunzps4333b9
4,357
Fix warning in push_to_hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-16T11:50:17"
"2022-05-16T15:18:49"
"2022-05-16T15:10:41"
MEMBER
null
Fix warning: ``` FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0. ```
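On the caller side, the warning goes away once the renamed keyword is used. A hedged example — the dataset and repository id below are placeholders, not from this PR:

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
# `shard_size` was renamed to `max_shard_size` in datasets 2.1.1
ds.push_to_hub("my-username/my-dataset", max_shard_size="500MB")  # hypothetical repo id
```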
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4357/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4357", "html_url": "https://github.com/huggingface/datasets/pull/4357", "diff_url": "https://github.com/huggingface/datasets/pull/4357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4357.patch", "merged_at": "2022-05-16T15:10:41" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4356
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4356/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4356/comments
https://api.github.com/repos/huggingface/datasets/issues/4356/events
https://github.com/huggingface/datasets/pull/4356
1,236,846,308
PR_kwDODunzps433OsB
4,356
Fix dataset builder default version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-16T09:05:10"
"2022-05-30T13:56:58"
"2022-05-30T13:47:54"
MEMBER
null
Currently, when using a custom config (subclass of `BuilderConfig`), the default version set at the builder level is ignored: we must set the default version in the custom config class. However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and "0.0.0" is used instead: ```python ds = load_dataset("wikipedia", language="co", date="20220501", beam_runner="DirectRunner") ``` generates the following config: ```python WikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.') ``` with version "0.0.0" instead of "2.0.0". See, as a counter-example, what happens when the config is present in `BUILDER_CONFIGS`: ```python ds = load_dataset("wikipedia", "20220301.fr", beam_runner="DirectRunner") ``` generates the following config: ```python WikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.') ``` with the correct version "2.0.0", as set in the custom config class. The reason for this is that `DatasetBuilder` has a default VERSION ("0.0.0") that overwrites the default version set in the custom config class. This PR: - Removes the default VERSION at `DatasetBuilder` (sets it to None, so that the class attribute exists but does not override the custom config default version). - Note that the `BuilderConfig` class already sets a default version = "0.0.0"; no need to pass this from the builder.
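To illustrate the intended behaviour, a custom config can declare its own default version, which — after this fix — should no longer be clobbered by a builder-level default when the config is built from `config_kwargs`. This is only a sketch; the class names are made up:

```python
import datasets


class MyConfig(datasets.BuilderConfig):
    def __init__(self, language="en", **kwargs):
        # declare the default version at the *config* level, as WikipediaConfig does
        kwargs.setdefault("version", datasets.Version("2.0.0"))
        super().__init__(**kwargs)
        self.language = language


class MyBuilder(datasets.GeneratorBasedBuilder):
    # no class-level VERSION here: with this PR, a config created from
    # config_kwargs (e.g. load_dataset(..., language="co")) keeps the
    # "2.0.0" default declared in MyConfig instead of falling back to "0.0.0"
    BUILDER_CONFIG_CLASS = MyConfig
```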
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4356/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4356/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4356", "html_url": "https://github.com/huggingface/datasets/pull/4356", "diff_url": "https://github.com/huggingface/datasets/pull/4356.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4356.patch", "merged_at": "2022-05-30T13:47:54" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "This PR requires one of these other PRs being merged first:\r\n- #4359 \r\n- huggingface/doc-builder#211" ]
https://api.github.com/repos/huggingface/datasets/issues/4355
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4355/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4355/comments
https://api.github.com/repos/huggingface/datasets/issues/4355/events
https://github.com/huggingface/datasets/pull/4355
1,236,797,490
PR_kwDODunzps433EgP
4,355
Fix warning in upload_file
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-16T08:21:31"
"2022-05-16T11:28:02"
"2022-05-16T11:19:57"
MEMBER
null
Fix warning: ``` FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error ```
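The corresponding change in calling code is simply to pass everything by keyword. A hedged sketch against `huggingface_hub`'s `HfApi.upload_file` (the file path and repo id are placeholders):

```python
from huggingface_hub import HfApi

api = HfApi()
# positional arguments to upload_file were deprecated; keywords avoid the FutureWarning
api.upload_file(
    path_or_fileobj="data/train.parquet",  # hypothetical local file
    path_in_repo="data/train.parquet",
    repo_id="my-username/my-dataset",      # hypothetical repo
    repo_type="dataset",
)
```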
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4355/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4355", "html_url": "https://github.com/huggingface/datasets/pull/4355", "diff_url": "https://github.com/huggingface/datasets/pull/4355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4355.patch", "merged_at": "2022-05-16T11:19:57" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4354/comments
https://api.github.com/repos/huggingface/datasets/issues/4354/events
https://github.com/huggingface/datasets/issues/4354
1,236,404,383
I_kwDODunzps5Jsgif
4,354
Problems with WMT dataset
{ "login": "eldarkurtic", "id": 8884008, "node_id": "MDQ6VXNlcjg4ODQwMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/8884008?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eldarkurtic", "html_url": "https://github.com/eldarkurtic", "followers_url": "https://api.github.com/users/eldarkurtic/followers", "following_url": "https://api.github.com/users/eldarkurtic/following{/other_user}", "gists_url": "https://api.github.com/users/eldarkurtic/gists{/gist_id}", "starred_url": "https://api.github.com/users/eldarkurtic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eldarkurtic/subscriptions", "organizations_url": "https://api.github.com/users/eldarkurtic/orgs", "repos_url": "https://api.github.com/users/eldarkurtic/repos", "events_url": "https://api.github.com/users/eldarkurtic/events{/privacy}", "received_events_url": "https://api.github.com/users/eldarkurtic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
6
"2022-05-15T20:58:26"
"2022-07-11T14:54:02"
"2022-07-11T14:54:01"
NONE
null
## Describe the bug I am trying to load the WMT15 dataset and to define which data sources to use for the train/validation/test splits, but unfortunately it seems that the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)) doesn't work anymore. ## Steps to reproduce the bug ```python >>> import datasets >>> a = datasets.translate.wmt.WmtConfig() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'datasets' has no attribute 'translate' >>> a = datasets.wmt.WmtConfig() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: module 'datasets' has no attribute 'wmt' ``` ## Expected results To load WMT15 with the given data sources. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4354/timeline
null
completed
null
null
false
[ "Hi! Yes, the docs are outdated. Expect this to be fixed soon. \r\n\r\nIn the meantime, you can try to fix the issue yourself.\r\n\r\nThese are the configs/language pairs supported by `wmt15` from which you can choose:\r\n* `cs-en` (Czech - English)\r\n* `de-en` (German - English)\r\n* `fi-en` (Finnish- English)\r\n* `fr-en` (French - English)\r\n* `ru-en` (Russian - English)\r\n\r\nAnd the current implementation always uses all the subsets available for a language, so to define custom subsets, you'll have to clone the repo from the Hub and replace the line https://huggingface.co/datasets/wmt15/blob/main/wmt_utils.py#L688 with:\r\n`for split, ss_names in (self._subsets if self.config.subsets is None else self.config.subsets).items()`\r\n\r\nThen, you can load the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/local/wmt15_folder\", \"<one of 5 available configs>\", subsets=...)", "@mariosasko thanks a lot for the suggested fix! ", "Hi @mariosasko \r\n\r\nAre the docs updated? If not, I would like to get on it. I am new around here, would we helpful, if you can guide.\r\n\r\nThanks", "Hi @khushmeeet! The docs haven't been updated, so feel free to work on this issue. This is a tricky issue, so I'll give the steps you can follow to fix this:\r\n\r\nFirst, this code:\r\nhttps://github.com/huggingface/datasets/blob/7cff5b9726a223509dbd6224de3f5f452c8d924f/src/datasets/load.py#L113-L118\r\n\r\nneeds to be replaced with (makes the dataset builder search more robust and allows us to remove the ABC stuff from `wmt_utils.py`):\r\n```python\r\n for name, obj in module.__dict__.items():\r\n if inspect.isclass(obj) and issubclass(obj, main_cls_type):\r\n if inspect.isabstract(obj):\r\n continue\r\n module_main_cls = obj\r\n obj_module = inspect.getmodule(obj)\r\n if obj_module is not None and module == obj_module:\r\n break\r\n```\r\n\r\nThen, all the `wmt_utils.py` scripts need to be updated as follows (these are the diffs with the requiered changes):\r\n````diff\r\n import os\r\n import re\r\n import xml.etree.cElementTree as ElementTree\r\n-from abc import ABC, abstractmethod\r\n\r\n import datasets\r\n````\r\n\r\n````diff\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n _DESCRIPTION = \"\"\"\\\r\n-Translate dataset based on the data from statmt.org.\r\n+Translation dataset based on the data from statmt.org.\r\n\r\n-Versions exists for the different years using a combination of multiple data\r\n-sources. The base `wmt_translate` allows you to create your own config to choose\r\n-your own data/language pair by creating a custom `datasets.translate.wmt.WmtConfig`.\r\n+Versions exist for different years using a combination of data\r\n+sources. The base `wmt` allows you to create a custom dataset by choosing\r\n+your own data/language pair. 
This can be done as follows:\r\n\r\n ```\r\n-config = datasets.wmt.WmtConfig(\r\n- version=\"0.0.1\",\r\n+from datasets import inspect_dataset, load_dataset_builder\r\n+\r\n+inspect_dataset(\"<insert the dataset name\", \"path/to/scripts\")\r\n+builder = load_dataset_builder(\r\n+ \"path/to/scripts/wmt_utils.py\",\r\n language_pair=(\"fr\", \"de\"),\r\n subsets={\r\n datasets.Split.TRAIN: [\"commoncrawl_frde\"],\r\n datasets.Split.VALIDATION: [\"euelections_dev2019\"],\r\n },\r\n )\r\n-builder = datasets.builder(\"wmt_translate\", config=config)\r\n-```\r\n\r\n+# Standard version\r\n+builder.download_and_prepare()\r\n+ds = builder.as_dataset()\r\n+\r\n+# Streamable version\r\n+ds = builder.as_streaming_dataset()\r\n+```\r\n \"\"\"\r\n````\r\n\r\n````diff\r\n+class Wmt(datasets.GeneratorBasedBuilder):\r\n \"\"\"WMT translation dataset.\"\"\"\r\n+\r\n+ BUILDER_CONFIG_CLASS = WmtConfig\r\n\r\n def __init__(self, *args, **kwargs):\r\n- if type(self) == Wmt and \"config\" not in kwargs: # pylint: disable=unidiomatic-typecheck\r\n- raise ValueError(\r\n- \"The raw `wmt_translate` can only be instantiated with the config \"\r\n- \"kwargs. You may want to use one of the `wmtYY_translate` \"\r\n- \"implementation instead to get the WMT dataset for a specific year.\"\r\n- )\r\n super(Wmt, self).__init__(*args, **kwargs)\r\n\r\n @property\r\n- @abstractmethod\r\n def _subsets(self):\r\n \"\"\"Subsets that make up each split of the dataset.\"\"\"\r\n````\r\n```diff\r\n \"\"\"Subsets that make up each split of the dataset for the language pair.\"\"\"\r\n source, target = self.config.language_pair\r\n filtered_subsets = {}\r\n- for split, ss_names in self._subsets.items():\r\n+ subsets = self._subsets if self.config.subsets is None else self.config.subsets\r\n+ for split, ss_names in subsets.items():\r\n filtered_subsets[split] = []\r\n for ss_name in ss_names:\r\n dataset = DATASET_MAP[ss_name]\r\n```\r\n\r\n`wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t` have this script, so all of them need to be updated. Also, the dataset summaries from the READMEs of these datasets need to be updated to match the new `_DESCRIPTION` string. And that's it! Let me know if you need additional help.", "Hi @mariosasko ,\r\n\r\nI have made the changes as suggested by you and have opened a PR #4537.\r\n\r\nThanks", "Resolved via #4554 " ]
https://api.github.com/repos/huggingface/datasets/issues/4353
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4353/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4353/comments
https://api.github.com/repos/huggingface/datasets/issues/4353/events
https://github.com/huggingface/datasets/pull/4353
1,236,092,176
PR_kwDODunzps43016x
4,353
Don't strip proceeding hyphen
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-14T18:25:29"
"2022-05-16T18:51:38"
"2022-05-16T13:52:11"
CONTRIBUTOR
null
Closes #4320.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4353/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4353", "html_url": "https://github.com/huggingface/datasets/pull/4353", "diff_url": "https://github.com/huggingface/datasets/pull/4353.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4353.patch", "merged_at": "2022-05-16T13:52:10" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4352/comments
https://api.github.com/repos/huggingface/datasets/issues/4352/events
https://github.com/huggingface/datasets/issues/4352
1,236,086,170
I_kwDODunzps5JrS2a
4,352
When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way
{ "login": "plamb-viso", "id": 99206017, "node_id": "U_kgDOBenDgQ", "avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/plamb-viso", "html_url": "https://github.com/plamb-viso", "followers_url": "https://api.github.com/users/plamb-viso/followers", "following_url": "https://api.github.com/users/plamb-viso/following{/other_user}", "gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}", "starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions", "organizations_url": "https://api.github.com/users/plamb-viso/orgs", "repos_url": "https://api.github.com/users/plamb-viso/repos", "events_url": "https://api.github.com/users/plamb-viso/events{/privacy}", "received_events_url": "https://api.github.com/users/plamb-viso/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
"2022-05-14T17:55:15"
"2022-05-16T15:09:17"
null
NONE
null
## Describe the bug Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not match the types that came back. Because of this, I ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short, I ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and the loop over cols that I figured out what was going on. It seems like `.map()` could check, for at least one instance from the dataset, that the returned data's types match the types provided by the `features` param, and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature, but it feels more like a bug to me. ## Steps to reproduce the bug I don't have explicit code to repro the bug, but I'll show an example. Code prior to the fix: ```python def preprocess_data(examples): # returns an encoded data dict with keys that match the features, but the types do not match ... def get_encoded_data(data): dataset = Dataset.from_pandas(data) unique_labels = data['audit_type'].unique().tolist() features = Features({ 'image': Array3D(dtype="uint8", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'token_type_ids': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names) ``` The Features set that fixed it: ```python features = Features({ 'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))), 'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))), 'attention_mask': Sequence(Sequence(Value(dtype='int64'))), 'token_type_ids': Sequence(Sequence(Value(dtype='int64'))), 'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) ``` The difference between my original code (which was based on documentation) and the working code is the addition of `Sequence(...)` wrappers to 4 of the 5 features, as I am working with paginated data and the doc examples are not. ## Expected results Dataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they do not match. ## Actual results Depending on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though the error messages don't make this obvious. Example errors: ``` OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB. (offset overflow while concatenating arrays) ``` ``` zsh: killed python doc_classification.py UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> datasets version: 2.1.0 Platform: macOS-12.2.1-arm64-arm-64bit Python version: 3.9.12 PyArrow version: 6.0.1 Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4352/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4352/timeline
null
null
null
null
false
[ "Hi ! Thanks for reporting :) `datasets` usually returns a `pa.lib.ArrowInvalid` error if the feature types don't match.\r\n\r\nIt would be awesome if we had a way to reproduce the `OverflowError` in this case, to better understand what happened and be able to provide the best error message" ]
https://api.github.com/repos/huggingface/datasets/issues/4351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4351/comments
https://api.github.com/repos/huggingface/datasets/issues/4351/events
https://github.com/huggingface/datasets/issues/4351
1,235,950,209
I_kwDODunzps5JqxqB
4,351
Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems
{ "login": "Rexhaif", "id": 5154447, "node_id": "MDQ6VXNlcjUxNTQ0NDc=", "avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rexhaif", "html_url": "https://github.com/Rexhaif", "followers_url": "https://api.github.com/users/Rexhaif/followers", "following_url": "https://api.github.com/users/Rexhaif/following{/other_user}", "gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions", "organizations_url": "https://api.github.com/users/Rexhaif/orgs", "repos_url": "https://api.github.com/users/Rexhaif/repos", "events_url": "https://api.github.com/users/Rexhaif/events{/privacy}", "received_events_url": "https://api.github.com/users/Rexhaif/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
1
"2022-05-14T11:30:42"
"2022-12-14T18:22:59"
"2022-12-14T18:22:59"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** When working with large datasets stored on remote filesystems (such as S3), the process of uploading a dataset can take a really long time. For instance: I was uploading a re-processed version of wmt17 en-ru to my s3 bucket and it took about 35 minutes (and that's given that I have a fiber optic connection). The only output during that process was a progress bar for flattening indices, followed by ~35 minutes of complete silence. **Describe the solution you'd like** I want to be able to enable a progress bar when calling .save_to_disk(..) and .load_from_disk(..); it would track either the number of bytes sent/received or the number of records written/loaded, and would give some ETA. Basically just tqdm. **Describe alternatives you've considered** - Save the dataset to a tmp folder on disk and then upload it using a custom wrapper over botocore, which works with a progress bar, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/).
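In the meantime, the alternative described above can be sketched with `boto3`'s upload callback, which reports transferred bytes chunk by chunk. The bucket, key, and paths below are all placeholders:

```python
import os

import boto3
from tqdm import tqdm

def upload_with_progress(local_path: str, bucket: str, key: str) -> None:
    size = os.path.getsize(local_path)
    s3 = boto3.client("s3")
    with tqdm(total=size, unit="B", unit_scale=True, desc=key) as pbar:
        # boto3 invokes the callback with the byte count of each transferred chunk
        s3.upload_file(local_path, bucket, key, Callback=pbar.update)

# upload_with_progress("wmt17_enru/data.arrow", "my-bucket", "datasets/wmt17_enru.arrow")  # hypothetical
```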
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4351/timeline
null
completed
null
null
false
[ "Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/datasets/issues/4196)." ]
https://api.github.com/repos/huggingface/datasets/issues/4350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4350/comments
https://api.github.com/repos/huggingface/datasets/issues/4350/events
https://github.com/huggingface/datasets/pull/4350
1,235,505,104
PR_kwDODunzps43zKIV
4,350
Add a new metric: CTC_Consistency
{ "login": "YEdenZ", "id": 92551194, "node_id": "U_kgDOBYQ4Gg", "avatar_url": "https://avatars.githubusercontent.com/u/92551194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YEdenZ", "html_url": "https://github.com/YEdenZ", "followers_url": "https://api.github.com/users/YEdenZ/followers", "following_url": "https://api.github.com/users/YEdenZ/following{/other_user}", "gists_url": "https://api.github.com/users/YEdenZ/gists{/gist_id}", "starred_url": "https://api.github.com/users/YEdenZ/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YEdenZ/subscriptions", "organizations_url": "https://api.github.com/users/YEdenZ/orgs", "repos_url": "https://api.github.com/users/YEdenZ/repos", "events_url": "https://api.github.com/users/YEdenZ/events{/privacy}", "received_events_url": "https://api.github.com/users/YEdenZ/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-13T17:31:19"
"2022-05-19T10:23:04"
"2022-05-19T10:23:03"
NONE
null
Add the CTC_Consistency metric. Do I also need to modify the `test_metric_common.py` file to make it run in the tests?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4350/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4350", "html_url": "https://github.com/huggingface/datasets/pull/4350", "diff_url": "https://github.com/huggingface/datasets/pull/4350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4350.patch", "merged_at": null }
true
[ "Thanks for your contribution, @YEdenZ.\r\n\r\nPlease note that our old `metrics` module is in the process of being incorporated to a separate library called `evaluate`: https://github.com/huggingface/evaluate\r\n\r\nTherefore, I would ask you to transfer your PR to that repository. Thank you." ]
https://api.github.com/repos/huggingface/datasets/issues/4349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4349/comments
https://api.github.com/repos/huggingface/datasets/issues/4349/events
https://github.com/huggingface/datasets/issues/4349
1,235,474,765
I_kwDODunzps5Jo9lN
4,349
Dataset.map()'s fails at any value of parameter writer_batch_size
{ "login": "plamb-viso", "id": 99206017, "node_id": "U_kgDOBenDgQ", "avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/plamb-viso", "html_url": "https://github.com/plamb-viso", "followers_url": "https://api.github.com/users/plamb-viso/followers", "following_url": "https://api.github.com/users/plamb-viso/following{/other_user}", "gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}", "starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions", "organizations_url": "https://api.github.com/users/plamb-viso/orgs", "repos_url": "https://api.github.com/users/plamb-viso/repos", "events_url": "https://api.github.com/users/plamb-viso/events{/privacy}", "received_events_url": "https://api.github.com/users/plamb-viso/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
6
"2022-05-13T16:55:12"
"2022-06-02T12:51:11"
"2022-05-14T15:08:08"
NONE
null
## Describe the bug If the the value of `writer_batch_size` is less than the total number of instances in the dataset it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance. Context: I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor), the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model which allows you to use your own OCR results (in my case, Amazon Textract) so you have to provide the image, words and bounding boxes yourself. I am using this second option which might be good context for the bug. I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages. Code I am using is provided below ## Steps to reproduce the bug I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large pdf documents. ```python def get_encoded_data(data): dataset = Dataset.from_pandas(data) unique_labels = data['label'].unique() features = Features({ 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'token_type_ids': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels), }) encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1) encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME) encoded_dataset.set_format(type="torch") return encoded_dataset ``` ```python PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False) def preprocess_data(examples): directory = os.path.join(FILES_PATH, examples['file_location']) images_dir = os.path.join(directory, PDF_IMAGE_DIR) textract_response_path = os.path.join(directory, 'textract.json') doc_meta_path = os.path.join(directory, 'doc_meta.json') textract_document = get_textract_document(textract_response_path, doc_meta_path) images, words, bboxes = get_doc_training_data(images_dir, textract_document) encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True) # https://github.com/NielsRogge/Transformers-Tutorials/issues/36 encoded_inputs["image"] = np.array(encoded_inputs["image"]) encoded_inputs["label"] = examples['label_id'] return encoded_inputs ``` ## Expected results My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly. ## Actual results If writer_batch_size is set to a value less than the number of rows, I get either: ``` OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB. 
(offset overflow while concatenating arrays) ``` or simply ``` zsh: killed python doc_classification.py UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown ``` If it is greater than the number of rows, I get the `zsh: killed` error above ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.1.0 - Platform: macOS-12.2.1-arm64-arm-64bit - Python version: 3.9.12 - PyArrow version: 6.0.1 - Pandas version: 1.4.2
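The comments below trace this failure to the declared `Features` not matching what the map function actually returns. As an illustrative sketch only (toy data, not the reporter's pipeline), keeping the produced dtypes in lockstep with the schema looks like this:

```python
import numpy as np
from datasets import Array2D, Dataset, Features, Value

# Toy stand-in for the document encodings in the report above.
data = Dataset.from_dict({"label_id": [0, 1, 0]})

features = Features(
    {
        "bbox": Array2D(dtype="int64", shape=(4, 4)),
        "label": Value("int64"),
    }
)

def preprocess(example):
    # Return arrays carrying exactly the dtype declared in `features`;
    # mismatched dtypes can blow up only when the Arrow writer flushes
    # a batch, which makes the error look like a writer_batch_size bug.
    return {
        "bbox": np.zeros((4, 4), dtype=np.int64),
        "label": example["label_id"],
    }

encoded = data.map(
    preprocess,
    features=features,
    remove_columns=data.column_names,
    writer_batch_size=2,
)
```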
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4349/timeline
null
completed
null
null
false
[ "Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\r\n processor = LayoutLMv2Processor(feature_extractor, tokenizer)\r\n encoded_inputs = processor(images, padding=\"max_length\", truncation=True)\r\n encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n encoded_inputs[\"label\"] = examples['label_id']\r\n```", "Wanted to make sure anyone that finds this also finds my other report: https://github.com/huggingface/datasets/issues/4352", "Did you close it because you found that it was due to the incorrect Feature types ?", "Yeah-- my analysis of the issue was wrong in this one so I just closed it while linking to the new issue", "I met with the same problem when doing some experiments about layoutlm. I tried to set the writer_batch_size to 1, and the error still exists. Is there any solutions to this problem?", "The problem lies in how your Features are defined. It's erroring out when it actually goes to write them to disk" ]
https://api.github.com/repos/huggingface/datasets/issues/4348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4348/comments
https://api.github.com/repos/huggingface/datasets/issues/4348/events
https://github.com/huggingface/datasets/issues/4348
1,235,432,976
I_kwDODunzps5JozYQ
4,348
`inspect` functions can't fetch dataset script from the Hub
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-05-13T16:08:26"
"2022-06-09T10:26:06"
"2022-06-09T10:26:06"
MEMBER
null
The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`: ```py >>> from datasets import inspect_dataset >>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder') FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory. ```
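Until the fix lands, a possible workaround, sketched here under the assumption that the script sits at the root of the dataset repository, is to fetch the file directly with `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

# Fetch the loading script straight from the Hub; the return value is
# the path of the locally cached file.
local_script = hf_hub_download(
    repo_id="rotten_tomatoes",
    filename="rotten_tomatoes.py",
    repo_type="dataset",
)
print(local_script)
```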
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4348/timeline
null
completed
null
null
false
[ "Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://github.com/huggingface/datasets/blob/cfae0545b2ba05452e16136cacc7d370b4b186a1/src/datasets/inspect.py#L89-L91\r\n\r\ncc @lhoestq: `force_local_path` is only used in `inspect_dataset` and `inspect_metric`. Is it OK if we revert the behavior to match the old one?", "Good catch ! Yea I think it's fine :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4347/comments
https://api.github.com/repos/huggingface/datasets/issues/4347/events
https://github.com/huggingface/datasets/pull/4347
1,235,318,064
PR_kwDODunzps43yihq
4,347
Support remote cache_dir
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
"2022-05-13T14:26:35"
"2022-05-25T16:35:23"
"2022-05-25T16:27:03"
MEMBER
null
This PR implements complete support for a remote `cache_dir`. Before, support was only partial. This is useful for creating datasets with the Apache Beam (parallel data processing) builder while keeping `cache_dir` in a remote bucket, e.g., for the Wikipedia dataset.
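For context, a sketch of the kind of call this unlocks; the bucket path is a hypothetical placeholder, not a tested value:

```python
from datasets import load_dataset

# With full remote cache_dir support, a Beam-based builder such as
# wikipedia can keep its processing cache in an object store rather
# than on local disk. The bucket below is hypothetical.
wiki = load_dataset(
    "wikipedia",
    "20220301.en",
    beam_runner="DirectRunner",
    cache_dir="gs://my-bucket/datasets-cache",
)
```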
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4347/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4347/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4347", "html_url": "https://github.com/huggingface/datasets/pull/4347", "diff_url": "https://github.com/huggingface/datasets/pull/4347.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4347.patch", "merged_at": "2022-05-25T16:27:03" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq thanks for your review.\r\n\r\nPlease note that `xjoin` cannot be used in this context, as it always returns a POSIX path string and this is not suitable on Windows machines.", "<s>`xjoin` returns windows paths (not posix) on windows, since it just extends`os.path.join` </s>\r\n\r\nActually you are right.\r\n\r\nhttps://github.com/huggingface/datasets/blob/08ec04ccb59630a3029b2ecd8a14d327bddd0c4a/src/datasets/utils/streaming_download_manager.py#L104-L105\r\n\r\nThough this is not an issue because posix paths (as returned by Path().as_posix()) work on windows. That's why we can replace `os.path.join` with `xjoin` in streaming mode. They look like `c:/Program Files/` or something (can't confirm right now, I don't have a windows with me)", "Until now, we have always replaced \"/\" in paths with `os.path.join` (`os.sep`,...) in order to support Windows paths (that contain r\"\\\\\").\r\n\r\nNow, you suggest ignoring this and work with POSIX strings (with \"/\").\r\n\r\nAs an example, when passing `cache_dir=r\"C:\\Users\\Username\\.mycache\"`:\r\n- Until now, it results in `self._cache_downloaded_dir = r\"C:\\Users\\Username\\.mycache\\downloads\"`\r\n- If we use `xjoin`, it will give `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`\r\n\r\nYou say this is OK and we don't care if we work with POSIX strings on Windows machines.\r\n\r\nI'm incorporating your suggested changes then...", "Also note that using `xjoin`, if we pass `cache_dir=\"C:\\\\Users\\\\Username\\\\.mycache\"`, we get:\r\n- `self._cache_dir_root = \"C:\\\\Users\\\\Username\\\\.mycache\"`\r\n- `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`", "It looks like it broke the CI on windows :/ maybe this was not a good idea, sorry" ]
https://api.github.com/repos/huggingface/datasets/issues/4346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4346/comments
https://api.github.com/repos/huggingface/datasets/issues/4346/events
https://github.com/huggingface/datasets/issues/4346
1,235,067,062
I_kwDODunzps5JnaC2
4,346
GH Action to build documentation never ends
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
"2022-05-13T10:44:44"
"2022-05-13T11:22:00"
"2022-05-13T11:22:00"
MEMBER
null
## Describe the bug See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true I finally forced the cancellation of the workflow.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4346/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4345/comments
https://api.github.com/repos/huggingface/datasets/issues/4345/events
https://github.com/huggingface/datasets/pull/4345
1,235,062,787
PR_kwDODunzps43xrky
4,345
Fix never ending GH Action to build documentation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-13T10:40:10"
"2022-05-13T11:29:43"
"2022-05-13T11:22:00"
MEMBER
null
There was an unclosed code block introduced by: - #4313 https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538 This causes the "Make documentation" step in the "Build documentation" workflow to never finish. - I think this issue should also be addressed in the `doc-builder` lib. Fix #4346.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4345/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4345", "html_url": "https://github.com/huggingface/datasets/pull/4345", "diff_url": "https://github.com/huggingface/datasets/pull/4345.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4345.patch", "merged_at": "2022-05-13T11:22:00" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4344/comments
https://api.github.com/repos/huggingface/datasets/issues/4344/events
https://github.com/huggingface/datasets/pull/4344
1,234,882,542
PR_kwDODunzps43xFEn
4,344
Fix docstring in DatasetDict::shuffle
{ "login": "felixdivo", "id": 4403130, "node_id": "MDQ6VXNlcjQ0MDMxMzA=", "avatar_url": "https://avatars.githubusercontent.com/u/4403130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/felixdivo", "html_url": "https://github.com/felixdivo", "followers_url": "https://api.github.com/users/felixdivo/followers", "following_url": "https://api.github.com/users/felixdivo/following{/other_user}", "gists_url": "https://api.github.com/users/felixdivo/gists{/gist_id}", "starred_url": "https://api.github.com/users/felixdivo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/felixdivo/subscriptions", "organizations_url": "https://api.github.com/users/felixdivo/orgs", "repos_url": "https://api.github.com/users/felixdivo/repos", "events_url": "https://api.github.com/users/felixdivo/events{/privacy}", "received_events_url": "https://api.github.com/users/felixdivo/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-13T08:06:00"
"2022-05-25T09:23:43"
"2022-05-24T15:35:21"
CONTRIBUTOR
null
I think that, due to #1626, the docstring has contained this error ever since `seed` was added.
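For reference, a minimal usage sketch of the method whose docstring is fixed here (the dataset name is illustrative):

```python
from datasets import load_dataset

dsets = load_dataset("rotten_tomatoes")  # a DatasetDict with train/validation/test
# A single `seed` makes the shuffle of every split reproducible.
shuffled = dsets.shuffle(seed=42)
print(shuffled["train"][0])
```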
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4344/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4344", "html_url": "https://github.com/huggingface/datasets/pull/4344", "diff_url": "https://github.com/huggingface/datasets/pull/4344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4344.patch", "merged_at": "2022-05-24T15:35:21" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4343/comments
https://api.github.com/repos/huggingface/datasets/issues/4343/events
https://github.com/huggingface/datasets/issues/4343
1,234,864,168
I_kwDODunzps5Jmogo
4,343
Metrics documentation is not accessible in the datasets doc UI
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400959, "node_id": "MDU6TGFiZWwyMDY3NDAwOTU5", "url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion", "name": "Metric discussion", "color": "d722e8", "default": false, "description": "Discussions on the metrics" } ]
closed
false
null
[]
null
1
"2022-05-13T07:46:30"
"2022-06-03T08:50:25"
"2022-06-03T08:50:25"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Searching for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index. One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what the metric expects as input; for example, for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function docstring but not in the `README.md`, and one needs to dig into the code to understand what the metric expects. **Describe the solution you'd like** Have the documentation for metrics also appear in the docs UI, e.g. this: https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63 I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
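To make the `id` expectation concrete, a usage sketch based on the metric's own docstring (values are illustrative):

```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "56e10a3be3433e1400422b22", "prediction_text": "1976"}]
references = [
    {
        "id": "56e10a3be3433e1400422b22",
        "answers": {"text": ["1976"], "answer_start": [97]},
    }
]
# Each prediction/reference pair is matched on the `id` key noted above.
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```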
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4343/timeline
null
completed
null
null
false
[ "Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https://github.com/huggingface/evaluate) repository cc @lvwerra @sashavor " ]
https://api.github.com/repos/huggingface/datasets/issues/4342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4342/comments
https://api.github.com/repos/huggingface/datasets/issues/4342/events
https://github.com/huggingface/datasets/pull/4342
1,234,743,765
PR_kwDODunzps43woHm
4,342
Fix failing CI on Windows for sari and wiki_split metrics
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-13T05:03:38"
"2022-05-13T05:47:42"
"2022-05-13T05:47:42"
MEMBER
null
This PR adds `sacremoses` as an explicit test dependency (required by the sari and wiki_split metrics). Before, this library was pulled in indirectly as a third-party dependency, but this is no longer the case on Windows. Fix #4341.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4342/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4342", "html_url": "https://github.com/huggingface/datasets/pull/4342", "diff_url": "https://github.com/huggingface/datasets/pull/4342.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4342.patch", "merged_at": "2022-05-13T05:47:41" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4341/comments
https://api.github.com/repos/huggingface/datasets/issues/4341/events
https://github.com/huggingface/datasets/issues/4341
1,234,739,703
I_kwDODunzps5JmKH3
4,341
Failing CI on Windows for sari and wiki_split metrics
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2022-05-13T04:55:17"
"2022-05-13T05:47:41"
"2022-05-13T05:47:41"
MEMBER
null
## Describe the bug Our CI has been failing since yesterday on Windows for the sari and wiki_split metrics: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split ``` See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4341/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4340/comments
https://api.github.com/repos/huggingface/datasets/issues/4340/events
https://github.com/huggingface/datasets/pull/4340
1,234,671,025
PR_kwDODunzps43wY1U
4,340
Fix irc_disentangle dataset script
{ "login": "i-am-pad", "id": 32005017, "node_id": "MDQ6VXNlcjMyMDA1MDE3", "avatar_url": "https://avatars.githubusercontent.com/u/32005017?v=4", "gravatar_id": "", "url": "https://api.github.com/users/i-am-pad", "html_url": "https://github.com/i-am-pad", "followers_url": "https://api.github.com/users/i-am-pad/followers", "following_url": "https://api.github.com/users/i-am-pad/following{/other_user}", "gists_url": "https://api.github.com/users/i-am-pad/gists{/gist_id}", "starred_url": "https://api.github.com/users/i-am-pad/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/i-am-pad/subscriptions", "organizations_url": "https://api.github.com/users/i-am-pad/orgs", "repos_url": "https://api.github.com/users/i-am-pad/repos", "events_url": "https://api.github.com/users/i-am-pad/events{/privacy}", "received_events_url": "https://api.github.com/users/i-am-pad/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-13T02:37:57"
"2022-05-24T15:37:30"
"2022-05-24T15:37:29"
NONE
null
Updated the extracted dataset repo's latest commit hash (included in the tarball's name), and updated the related data_infos.json.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4340/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4340", "html_url": "https://github.com/huggingface/datasets/pull/4340", "diff_url": "https://github.com/huggingface/datasets/pull/4340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4340.patch", "merged_at": null }
true
[ "Thanks ! This has been fixed in https://github.com/huggingface/datasets/pull/4377, we can close this PR" ]
https://api.github.com/repos/huggingface/datasets/issues/4339
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4339/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4339/comments
https://api.github.com/repos/huggingface/datasets/issues/4339/events
https://github.com/huggingface/datasets/pull/4339
1,234,496,289
PR_kwDODunzps43v0WT
4,339
Dataset loader for the MSLR2022 shared task
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
9
"2022-05-12T21:23:41"
"2022-07-18T17:19:27"
"2022-07-18T16:58:34"
CONTRIBUTOR
null
This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader: ```python from datasets import load_dataset ms2 = load_dataset("mslr2022", "ms2") cochrane = load_dataset("mslr2022", "cochrane") ``` Usage looks like: ```python >>> ms2 = load_dataset("mslr2022", "ms2", split="validation") >>> ms2.keys() dict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info']) >>> ms2[0].target 'Conclusions SC therapy is effective for PAH in pre clinical studies .\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .' ``` I have tested that this works with the following command: ```bash datasets-cli test datasets/mslr2022 --save_infos --all_configs ``` However, I am having a little trouble generating the dummy data: ```bash datasets-cli dummy_data datasets/mslr2022 --auto_generate ``` errors out with the following stack trace: ``` Couldn't generate dummy file 'datasets/mslr2022/dummy/ms2/1.0.0/dummy_data/mslr_data.tar.gz/mslr_data/ms2/convert_to_cochrane.py'. Ignore that if this file is not useful for dummy data. Traceback (most recent call last): File "/Users/johngiorgi/.pyenv/versions/datasets/bin/datasets-cli", line 11, in <module> load_entry_point('datasets', 'console_scripts', 'datasets-cli')() File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 319, in run keep_uncompressed=self._keep_uncompressed, File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data dataset_builder._prepare_split(split_generator, check_duplicate_keys=False) File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/builder.py", line 1146, in _prepare_split desc=f"Generating {split_info.name} split", File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/Users/johngiorgi/.cache/huggingface/modules/datasets_modules/datasets/mslr2022/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea/mslr2022.py", line 149, in _generate_examples reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper return func(*args, **kwargs) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv return _read(filepath_or_buffer, kwds) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read return parser.read(nrows) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read index, columns, col_dict = self._engine.read(nrows) File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 224, in read chunks = self._reader.read_low_memory(nrows) File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows File 
"pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2 ``` I think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains: ``` The file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS). It is recommended to remove them from the file. This can be configured via `editor.unusualLineTerminators`. ``` Tagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4339/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4339/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4339", "html_url": "https://github.com/huggingface/datasets/pull/4339", "diff_url": "https://github.com/huggingface/datasets/pull/4339.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4339.patch", "merged_at": null }
true
[ "I think the underlying issue is in https://github.com/huggingface/datasets/blob/c0ed6fdc29675b3565b01b77fde5ab5d9d8b60ec/src/datasets/commands/dummy_data.py#L124 - where `CSV`s are considered to be in the same class of file as text, jsonl, and tsv.\r\n\r\nI think this is an error because CSVs can have newlines within the rows of a file. I'm happy to make a PR to change how this handling works, or make the change within this PR. \r\n\r\nWe should figure out:\r\n1. Does this dummy data need to be generated more than once? (It looks like no)\r\n2. Should this be fixed generally? (needs a HF person to weigh in here)\r\n3. What is the right way for such a fix to exist permanently here; the [Contributing document](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) doesn't provide guidance on any tests. Writing a test is several times more effort than fixing the underlying issue. (again needs a HF person)", "Would someone from HF mind taking a look at this PR? (@lhoestq)", "Hi ! Sorry for the delay in responding :)\r\n\r\nI don't think there's a big need to fix this in the general case for now, feel free to just generate the dummy data for this specific dataset :)\r\n\r\nThe `datasets-cli dummy_data datasets/mslr2022` command should tell you what dummy files to generate. In each dummy file you just need to include enough data to generate 4 or 5 examples", "_The documentation is not available anymore as the PR was closed or merged._", "Awesome! Generated the dummy data and the tests now pass. @jayded thanks for your help! If you and @lucylw are happy with this I think it's ready to be merged. @lhoestq this is ready for another look :)", "Hi @lhoestq, is there anything blocking this from being merged that I can address?", "Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n\r\nI think this dataset can be under the AllenAI page here: https://huggingface.co/allenai What do you think ?\r\nFeel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n\r\nOnce the dataset is under the AllenAI org, we can close this PR\r\n", "> Hi @JohnGiorgi ! Thanks for the changes, it looks all good now :)\r\n> \r\n> I think this dataset can be under the AllenAI page here: https://huggingface.co/allenai What do you think ? Feel free to create a new dataset repository on huggingface.co and upload your files (dataset script, readme, etc.)\r\n> \r\n> Once the dataset is under the AllenAI org, we can close this PR\r\n\r\nSweet! It is uploaded here: https://huggingface.co/datasets/allenai/mslr2022", "Nice ! Thanks :)\r\n\r\nI think we can close this PR then.\r\n\r\nI noticed that the dataset preview is not available on this dataset, this is because we require datasets to work in streaming mode to show a preview. However TAR archives don't work well in streaming mode (you can't know in advance what files are inside a TAR archive without reading it completely). This can be fixed by using a ZIP archive instead.\r\n\r\nLet me know if you have questions or if I can help." ]
https://api.github.com/repos/huggingface/datasets/issues/4338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4338/comments
https://api.github.com/repos/huggingface/datasets/issues/4338/events
https://github.com/huggingface/datasets/pull/4338
1,234,478,851
PR_kwDODunzps43vwsm
4,338
Eval metadata Batch 4: Tweet Eval, Tweets Hate Speech Detection, VCTK, Weibo NER, Wisesight Sentiment, XSum, Yahoo Answers Topics, Yelp Polarity, Yelp Review Full
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-12T21:02:08"
"2022-05-16T15:51:02"
"2022-05-16T15:42:59"
NONE
null
Adding evaluation metadata for: - Tweet Eval - Tweets Hate Speech Detection - VCTK - Weibo NER - Wisesight Sentiment - XSum - Yahoo Answers Topics - Yelp Polarity - Yelp Review Full
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4338/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4338", "html_url": "https://github.com/huggingface/datasets/pull/4338", "diff_url": "https://github.com/huggingface/datasets/pull/4338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4338.patch", "merged_at": "2022-05-16T15:42:59" }
true
[ "Summary of CircleCI errors:\r\n\r\n- **XSum**: missing 6 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', and 'source_datasets'\r\n- **Yelp_polarity**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'", "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4337/comments
https://api.github.com/repos/huggingface/datasets/issues/4337/events
https://github.com/huggingface/datasets/pull/4337
1,234,470,083
PR_kwDODunzps43vuzF
4,337
Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-12T20:52:02"
"2022-05-16T16:26:19"
"2022-05-16T16:18:30"
NONE
null
Adding evaluation metadata for: - Reddit - Rotten Tomatoes - SemEval 2010 - Sentiment 140 - SMS Spam - Snips - SQuAD - SQuAD v2 - Timit ASR
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4337/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4337", "html_url": "https://github.com/huggingface/datasets/pull/4337", "diff_url": "https://github.com/huggingface/datasets/pull/4337.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4337.patch", "merged_at": "2022-05-16T16:18:30" }
true
[ "Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n\r\nThere are also some timeout errors, I don't really understand the source though :confused: ", "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4336/comments
https://api.github.com/repos/huggingface/datasets/issues/4336/events
https://github.com/huggingface/datasets/pull/4336
1,234,446,174
PR_kwDODunzps43vpqG
4,336
Eval metadata batch 2 : Health Fact, Jigsaw Toxicity, LIAR, LJ Speech, MSRA NER, Multi News, NCBI Disease, Poem Sentiment
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2022-05-12T20:24:45"
"2022-05-16T16:25:00"
"2022-05-16T16:24:59"
NONE
null
Adding evaluation metadata for: - Health Fact - Jigsaw Toxicity - LIAR - LJ Speech - MSRA NER - Multi News - NCBI Disease - Poem Sentiment
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4336/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4336", "html_url": "https://github.com/huggingface/datasets/pull/4336", "diff_url": "https://github.com/huggingface/datasets/pull/4336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4336.patch", "merged_at": "2022-05-16T16:24:59" }
true
[ "Summary of CircleCI errors:\r\n- **Jjigsaw_toxicity_pred**: `Citation Information` but it is empty.\r\n- **LIAR** : `Data Instances`,`Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n- **MSRA NER** : Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.\r\n", "The CI errors about missing content in the dataset cards can be ignored in this PR btw", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4336). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/4335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4335/comments
https://api.github.com/repos/huggingface/datasets/issues/4335/events
https://github.com/huggingface/datasets/pull/4335
1,234,157,123
PR_kwDODunzps43usJP
4,335
Eval metadata batch 1: BillSum, CoNLL2003, CoNLLPP, CUAD, Emotion, GigaWord, GLUE, Hate Speech 18, Hate Speech
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2022-05-12T15:28:16"
"2022-05-16T16:31:10"
"2022-05-16T16:23:09"
NONE
null
Adding evaluation metadata for: - BillSum - CoNLL2003 - CoNLLPP - CUAD - Emotion - GigaWord - GLUE - Hate Speech 18 - Hate Speech Offensive
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4335/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4335", "html_url": "https://github.com/huggingface/datasets/pull/4335", "diff_url": "https://github.com/huggingface/datasets/pull/4335.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4335.patch", "merged_at": "2022-05-16T16:23:08" }
true
[ "Summary of CircleCI errors:\r\n- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **Conllpp**: expected some content in section `Citation Information` but it is empty.\r\n- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets' :['unknown'] are not registered tags\r\n- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'\r\n- **Hate_speech18:** Expected some content in section `Data Instances` but it is empty, Expected some content in section `Data Splits` but it is empty", "And yes we can ignore all the CI errors related to missing content in the dataset cards, these issues can be fixed in other PRs", "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4334/comments
https://api.github.com/repos/huggingface/datasets/issues/4334/events
https://github.com/huggingface/datasets/pull/4334
1,234,103,477
PR_kwDODunzps43uguB
4,334
Adding eval metadata for billsum
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-12T14:49:08"
"2023-09-24T10:02:46"
"2022-05-12T14:49:24"
NONE
null
Adding eval metadata for billsum
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4334/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4334", "html_url": "https://github.com/huggingface/datasets/pull/4334", "diff_url": "https://github.com/huggingface/datasets/pull/4334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4334.patch", "merged_at": null }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4333/comments
https://api.github.com/repos/huggingface/datasets/issues/4333/events
https://github.com/huggingface/datasets/pull/4333
1,234,038,705
PR_kwDODunzps43uSuj
4,333
Adding eval metadata for Banking 77
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-12T14:05:05"
"2022-05-12T21:03:32"
"2022-05-12T21:03:31"
NONE
null
Adding eval metadata for Banking 77
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4333/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4333", "html_url": "https://github.com/huggingface/datasets/pull/4333", "diff_url": "https://github.com/huggingface/datasets/pull/4333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4333.patch", "merged_at": "2022-05-12T21:03:31" }
true
[ "@lhoestq , Circle CI is giving me an error, saying that ['extended'] is a key that shouldn't be in the dataset metadata, but it was there before my modification (so I don't want to remove it)" ]
https://api.github.com/repos/huggingface/datasets/issues/4332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4332/comments
https://api.github.com/repos/huggingface/datasets/issues/4332/events
https://github.com/huggingface/datasets/pull/4332
1,234,021,188
PR_kwDODunzps43uO8S
4,332
Adding eval metadata for arabic speech corpus
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-12T13:51:38"
"2022-05-12T21:03:21"
"2022-05-12T21:03:20"
NONE
null
Adding eval metadata for arabic speech corpus
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4332/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4332", "html_url": "https://github.com/huggingface/datasets/pull/4332", "diff_url": "https://github.com/huggingface/datasets/pull/4332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4332.patch", "merged_at": "2022-05-12T21:03:20" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4331/comments
https://api.github.com/repos/huggingface/datasets/issues/4331/events
https://github.com/huggingface/datasets/pull/4331
1,234,016,110
PR_kwDODunzps43uN2R
4,331
Adding eval metadata to Amazon Polarity
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-12T13:47:59"
"2022-05-12T21:03:14"
"2022-05-12T21:03:13"
NONE
null
Adding eval metadata to Amazon Polarity
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4331/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4331", "html_url": "https://github.com/huggingface/datasets/pull/4331", "diff_url": "https://github.com/huggingface/datasets/pull/4331.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4331.patch", "merged_at": "2022-05-12T21:03:13" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4330/comments
https://api.github.com/repos/huggingface/datasets/issues/4330/events
https://github.com/huggingface/datasets/pull/4330
1,233,992,681
PR_kwDODunzps43uIwm
4,330
Adding eval metadata to Allociné dataset
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-12T13:31:39"
"2022-05-12T21:03:05"
"2022-05-12T21:03:05"
NONE
null
Adding eval metadata to Allociné dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4330/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4330", "html_url": "https://github.com/huggingface/datasets/pull/4330", "diff_url": "https://github.com/huggingface/datasets/pull/4330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4330.patch", "merged_at": "2022-05-12T21:03:05" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4329/comments
https://api.github.com/repos/huggingface/datasets/issues/4329/events
https://github.com/huggingface/datasets/pull/4329
1,233,991,207
PR_kwDODunzps43uIcF
4,329
Adding eval metadata for AG News
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2022-05-12T13:30:32"
"2022-05-12T21:02:41"
"2022-05-12T21:02:40"
NONE
null
Adding eval metadata for AG News
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4329/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4329", "html_url": "https://github.com/huggingface/datasets/pull/4329", "diff_url": "https://github.com/huggingface/datasets/pull/4329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4329.patch", "merged_at": "2022-05-12T21:02:40" }
true
[]
https://api.github.com/repos/huggingface/datasets/issues/4328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4328/comments
https://api.github.com/repos/huggingface/datasets/issues/4328/events
https://github.com/huggingface/datasets/pull/4328
1,233,856,690
PR_kwDODunzps43trrd
4,328
Fix and clean Apache Beam functionality
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-12T11:41:07"
"2022-05-24T13:43:11"
"2022-05-24T13:34:32"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4328/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4328", "html_url": "https://github.com/huggingface/datasets/pull/4328", "diff_url": "https://github.com/huggingface/datasets/pull/4328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4328.patch", "merged_at": "2022-05-24T13:34:32" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4327/comments
https://api.github.com/repos/huggingface/datasets/issues/4327/events
https://github.com/huggingface/datasets/issues/4327
1,233,840,020
I_kwDODunzps5JiueU
4,327
`wikipedia` pre-processed datasets
{ "login": "vpj", "id": 81152, "node_id": "MDQ6VXNlcjgxMTUy", "avatar_url": "https://avatars.githubusercontent.com/u/81152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vpj", "html_url": "https://github.com/vpj", "followers_url": "https://api.github.com/users/vpj/followers", "following_url": "https://api.github.com/users/vpj/following{/other_user}", "gists_url": "https://api.github.com/users/vpj/gists{/gist_id}", "starred_url": "https://api.github.com/users/vpj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vpj/subscriptions", "organizations_url": "https://api.github.com/users/vpj/orgs", "repos_url": "https://api.github.com/users/vpj/repos", "events_url": "https://api.github.com/users/vpj/events{/privacy}", "received_events_url": "https://api.github.com/users/vpj/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2022-05-12T11:25:42"
"2022-08-31T08:26:57"
"2022-08-31T08:26:57"
NONE
null
## Describe the bug The [Wikipedia](https://huggingface.co/datasets/wikipedia) dataset readme says that certain subsets are preprocessed. However, it seems like they are not available. When I try to load them, it takes a really long time and it seems like it's processing them. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("wikipedia", "20220301.en") ``` ## Expected results To load the dataset ## Actual results It takes a very long time to load (after downloading). After `Downloading data files: 100%`, it takes hours and gets killed. Tried `wikipedia.simple` and it got processed after ~30 mins.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4327/timeline
null
completed
null
null
false
[ "Hi @vpj, thanks for reporting.\r\n\r\nI'm sorry, but I can't reproduce your bug: I load \"20220301.simple\"in 9 seconds:\r\n```shell\r\ntime python -c \"from datasets import load_dataset; load_dataset('wikipedia', '20220301.simple')\"\r\n\r\nDownloading and preparing dataset wikipedia/20220301.simple (download: 228.58 MiB, generated: 224.18 MiB, post-processed: Unknown size, total: 452.76 MiB) to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...\r\nDownloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.66k/1.66k [00:00<00:00, 1.02MB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 235M/235M [00:02<00:00, 82.8MB/s]\r\nDataset wikipedia downloaded and prepared to .../.cache/huggingface/datasets/wikipedia/20220301.simple/2.0.0/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 290.75it/s]\r\n\r\nreal\t0m9.693s\r\nuser\t0m6.002s\r\nsys\t0m3.260s\r\n```\r\n\r\nCould you please check your environment info, as requested when opening this issue?\r\n```\r\n## Environment info\r\n<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->\r\n- `datasets` version:\r\n- Platform:\r\n- Python version:\r\n- PyArrow version:\r\n```\r\nMaybe you are using an old version of `datasets`...", "Downloading and processing `wikipedia simple` dataset completed in under 11sec on M1 Mac. Could you please check `dataset` version as mentioned by @albertvillanova? Also check system specs, if system is under load processing could take some time I guess." ]
https://api.github.com/repos/huggingface/datasets/issues/4326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4326/comments
https://api.github.com/repos/huggingface/datasets/issues/4326/events
https://github.com/huggingface/datasets/pull/4326
1,233,818,489
PR_kwDODunzps43tjWy
4,326
Fix type hint and documentation for `new_fingerprint`
{ "login": "fxmarty", "id": 9808326, "node_id": "MDQ6VXNlcjk4MDgzMjY=", "avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fxmarty", "html_url": "https://github.com/fxmarty", "followers_url": "https://api.github.com/users/fxmarty/followers", "following_url": "https://api.github.com/users/fxmarty/following{/other_user}", "gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}", "starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions", "organizations_url": "https://api.github.com/users/fxmarty/orgs", "repos_url": "https://api.github.com/users/fxmarty/repos", "events_url": "https://api.github.com/users/fxmarty/events{/privacy}", "received_events_url": "https://api.github.com/users/fxmarty/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-12T11:05:08"
"2022-06-01T13:04:45"
"2022-06-01T12:56:18"
CONTRIBUTOR
null
Currently, there are no type hints nor `Optional` for the argument `new_fingerprint` in several methods of `datasets.arrow_dataset.Dataset`. There was some documentation missing as well. Note that pylance is happy with the type hints, but pyright does not detect that `new_fingerprint` is set within the decorator. The modifications in this PR are fine since here https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/src/datasets/fingerprint.py#L446-L454 for the non-inplace case we make sure to auto-generate a new fingerprint (as indicated in the doc).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4326/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4326/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4326", "html_url": "https://github.com/huggingface/datasets/pull/4326", "diff_url": "https://github.com/huggingface/datasets/pull/4326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4326.patch", "merged_at": "2022-06-01T12:56:18" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4325/comments
https://api.github.com/repos/huggingface/datasets/issues/4325/events
https://github.com/huggingface/datasets/issues/4325
1,233,812,191
I_kwDODunzps5Jinrf
4,325
Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
4
"2022-05-12T10:59:08"
"2022-05-13T10:57:15"
"2022-05-13T10:57:02"
CONTRIBUTOR
null
### Link https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train ### Description The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in the viewer. Maybe it needs a bit more time. * https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train * https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train While offenseval_2020 is gated with a prompt, the other gated previews I have run fine in the viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj, so I'm a bit stumped! ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4325/timeline
null
completed
null
null
false
[ "Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n", "Yes, it's related. The backend behind the dataset viewer is currently under too much load, and these datasets are still in the jobs queue. We're actively working on this issue, and we expect to fix the issue permanently soon. Thanks for your patience 🙏  ", "Thanks @severo and no worries! - a suggestion for a UI usability thing maybe is to indicate that the dataset processing is in the job queue (rather than no data?)", "Thanks, these are working great now (including @domenicrosati 's, afaics!)" ]
https://api.github.com/repos/huggingface/datasets/issues/4324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4324/comments
https://api.github.com/repos/huggingface/datasets/issues/4324/events
https://github.com/huggingface/datasets/issues/4324
1,233,780,870
I_kwDODunzps5JigCG
4,324
Support >1 PWC dataset per dataset card
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2022-05-12T10:29:07"
"2022-05-13T11:25:29"
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Some datasets cover more than one dataset on PapersWithCode. For example, the OffensEval 2020 challenge involved five languages, and there's one dataset reader to cover all five datasets, [`strombergnlp/offenseval_2020`](https://huggingface.co/datasets/strombergnlp/offenseval_2020). However, the YAML `paperswithcode_id:` dataset card entry only supports one value; when multiple are added, the PWC link disappears from the dataset page. Because the link from a PapersWithCode dataset to a Hugging Face Hub entry can't be entered manually and seems to be scraped, end users don't have a way of getting a dataset reader link to appear on all the PWC datasets supported by one HF Hub dataset reader. It's not unusual for papers to introduce multiple parallel variants of a dataset, and it would be handy to reflect this, so that e.g. dataset maintainers can stay DRY and dataset users can keep what they're doing simple. **Describe the solution you'd like** I'd like `paperswithcode_id:` to support lists and be able to connect with multiple PWC datasets. **Describe alternatives you've considered** De-normalising the datasets on HF Hub to create multiple readers for each variation on a task, i.e. instead of a single `offenseval_2020`, having `offenseval_2020_ar`, `offenseval_2020_da`, `offenseval_2020_gr`, ... **Additional context** Hope that's enough **Priority** Low
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4324/timeline
null
null
null
null
false
[ "Hi @leondz, I agree it would be nice. We'll see what we can do ;)" ]
https://api.github.com/repos/huggingface/datasets/issues/4323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4323/comments
https://api.github.com/repos/huggingface/datasets/issues/4323/events
https://github.com/huggingface/datasets/issues/4323
1,233,634,928
I_kwDODunzps5Jh8Zw
4,323
Audio can not find value["bytes"]
{ "login": "YooSungHyun", "id": 34292279, "node_id": "MDQ6VXNlcjM0MjkyMjc5", "avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4", "gravatar_id": "", "url": "https://api.github.com/users/YooSungHyun", "html_url": "https://github.com/YooSungHyun", "followers_url": "https://api.github.com/users/YooSungHyun/followers", "following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}", "gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}", "starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions", "organizations_url": "https://api.github.com/users/YooSungHyun/orgs", "repos_url": "https://api.github.com/users/YooSungHyun/repos", "events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}", "received_events_url": "https://api.github.com/users/YooSungHyun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
9
"2022-05-12T08:31:58"
"2022-07-07T13:16:08"
"2022-07-07T13:16:08"
CONTRIBUTOR
null
## Describe the bug I wrote down _generate_examples like: ![image](https://user-images.githubusercontent.com/34292279/168027186-2fe8b255-2cd8-4b9b-ab1e-8d5a7182979b.png) but where are the bytes? ![image](https://user-images.githubusercontent.com/34292279/168027330-f2496dd0-1d99-464c-b15c-bc57eee0415a.png) ## Expected results value["bytes"] is not None, so I can build datasets with bytes, not paths ## bytes looks like: blah blah~~ \xfe\x03\x00\xfb\x06\x1c\x0bo\x074\x03\xaf\x01\x13\x04\xbc\x06\x8c\x05y\x05,\t7\x08\xaf\x03\xc0\xfe\xe8\xfc\x94\xfe\xb7\xfd\xea\xfa\xd5\xf9$\xf9>\xf9\x1f\xf8\r\xf5F\xf49\xf4\xda\xf5-\xf8\n\xf8k\xf8\x07\xfb\x18\xfd\xd9\xfdv\xfd"\xfe\xcc\x01\x1c\x04\x08\x04@\x04{\x06^\tf\t\x1e\x07\x8b\x06\x02\x08\x13\t\x07\x08 \x06g\x06"\x06\xa0\x03\xc6\x002\xff \xff\x1d\xff\x19\xfd?\xfb\xdb\xfa\xfc\xfa$\xfb}\xf9\xe5\xf7\xf9\xf7\xce\xf8.\xf9b\xf9\xc5\xf9\xc0\xfb\xfa\xfcP\xfc\xba\xfbQ\xfc1\xfe\x9f\xff\x12\x00\xa2\x00\x18\x02Z\x03\x02\x04\xb1\x03\xc5\x03W\x04\x82\x04\x8f\x04U\x04\xb6\x04\x10\x05{\x04\x83\x02\x17\x01\x1d\x00\xa0\xff\xec\xfe\x03\xfe#\xfe\xc2\xfe2\xff\xe6\xfe\x9a\xfe~\x01\x91\x08\xb3\tU\x05\x10\x024\x02\xe4\x05\xa8\x07\xa7\x053\x07I\n\x91\x07v\x02\x95\xfd\xbb\xfd\x96\xff\x01\xfe\x1e\xfb\xbb\xf9S\xf8!\xf8\xf4\xf5\xd6\xf3\xf7\xf3l\xf4d\xf6l\xf7d\xf6b\xf7\xc1\xfa(\xfd\xcf\xfd*\xfdq\xfe\xe9\x01\xa8\x03t\x03\x17\x04B\x07\xce\t\t\t\xeb\x06\x0c\x07\x95\x08\x92\t\xbc\x07O\x06\xfb\x06\xd2\x06U\x04\x00\x02\x92\x00\xdc\x00\x84\x00 \xfeT\xfc\xf1\xfb\x82\xfc\x97\xfb}\xf9\x00\xf8_\xf8\x0b\xf9\xe5\xf8\xe2\xf7\xaa\xf8\xb2\xfa\x10\xfbl\xfa\xf5\xf9Y\xfb\xc0\xfd\xe8\xfe\xec\xfe1\x00\xad\x01\xec\x02E\x03\x13\x03\x9b\x03o\x04\xce\x04\xa8\x04\xb2\x04\x1b\x05\xc0\x05\xd2\x04\xe8\x02z\x01\xbe\x00\xae\x00\x07\x00$\xff|\xff\x8e\x00\x13\x00\x10\xff\x98\xff0\x05{\x0b\x05\t\xaa\x03\x82\x01n\x03 blah blah~~ so that function does not return None ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.1 - Platform: Ubuntu 18.04 - Python version: 3.6.9 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4323/timeline
null
completed
null
null
false
[ "![image](https://user-images.githubusercontent.com/34292279/168063684-fff5c12a-8b1e-4c65-b18b-36100ab8a1af.png)\r\n\r\nthat is reason my bytes`s empty\r\nbut i have some confused why path prior is higher than bytes?\r\n\r\nif you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\nbecause we have path and bytes already", "> but i have some confused why path prior is higher than bytes?\r\n\r\nIf the audio file is already available locally, we don't need to store the bytes again.\r\n\r\nIf you don't specify a \"path\" to a local file, then the bytes are stored. You can set \"path\" to None for example.\r\n\r\n> if you can make bytes in _generate_examples , you don`t have to make bytes to path?\r\n> because we have path and bytes already\r\n\r\nIt's useful to pass both \"path\" and \"bytes\" in `_generate_examples`:\r\n- when the dataset has been downloaded, then the \"path\" to the audio files are stored and we can ignore \"bytes\" in order to save disk space.\r\n- when the dataset is loaded in streaming mode, the audio files are not available on your disk and therefore we use the \"bytes\" ", "@lhoestq \r\nFirst of all, thx for reply\r\n\r\nbut, if i put in \"bytes\" and \"path\"\r\nex) {\"bytes\":\"blah blah~\", \"path\":\"blah blah~\"}\r\n\r\nthat source working that my bytes to empty first,\r\nand then, re-calculate my bytes!\r\n![image](https://user-images.githubusercontent.com/34292279/168534687-1fb60d8c-d369-47d2-a4bb-db68f95194b4.png)\r\n\r\nif you have some pcm file, pcm is can read bytes.\r\nso, i put in bytes and paths.\r\nbut bytes is been None why encode_example func make None\r\nand then, on decode_example func, we no have bytes. so, calculate bytes to path.\r\npcm is not support librosa or soundfile, error occured!\r\n\r\nthe most important thing is not announced anywhere this situation can be reproduced\r\n\r\nis that truly right process flow?", "I don't think we support PCM files, feel free to convert your data to WAV for now.\r\n\r\nIt would be awesome to support PCM files though, let me know if you'd like to contribute this feature, I'd be happy to help", "@lhoestq oh, how can i contribute?", "You can clone the repository (see the guide on [how to contribute](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-create-a-pull-request)) and see how we can make the `Image.encode_example` method work with PCM data.\r\n\r\nThere might be other ways to approach this problem, but here is what I think is a reasonable one:\r\n\r\nI think `Image.encode_example` should be able to take PCM bytes as input and the sampling rate, and return the WAV bytes (built by combining the PCM bytes and the sampling rate info), so that `Image.decode_example` can read it.\r\n\r\nTo check if the input bytes are PCM data, you can just check if the extension of the `path` is \".pcm\".\r\n", "maybe i can start to contribute on this sunday!\r\n@lhoestq ", "@lhoestq plz check my pr #4409 \r\n\r\nam i wrong somting?", "Thanks, I reviewed your PR :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4322/comments
https://api.github.com/repos/huggingface/datasets/issues/4322/events
https://github.com/huggingface/datasets/pull/4322
1,233,596,947
PR_kwDODunzps43s1wy
4,322
Added stratify option to train_test_split function.
{ "login": "nandwalritik", "id": 48522685, "node_id": "MDQ6VXNlcjQ4NTIyNjg1", "avatar_url": "https://avatars.githubusercontent.com/u/48522685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nandwalritik", "html_url": "https://github.com/nandwalritik", "followers_url": "https://api.github.com/users/nandwalritik/followers", "following_url": "https://api.github.com/users/nandwalritik/following{/other_user}", "gists_url": "https://api.github.com/users/nandwalritik/gists{/gist_id}", "starred_url": "https://api.github.com/users/nandwalritik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nandwalritik/subscriptions", "organizations_url": "https://api.github.com/users/nandwalritik/orgs", "repos_url": "https://api.github.com/users/nandwalritik/repos", "events_url": "https://api.github.com/users/nandwalritik/events{/privacy}", "received_events_url": "https://api.github.com/users/nandwalritik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
9
"2022-05-12T08:00:31"
"2022-11-22T14:53:55"
"2022-05-25T20:43:51"
CONTRIBUTOR
null
This PR adds a `stratify` option to the `train_test_split` method. I used scikit-learn's `StratifiedShuffleSplit` class as a reference for implementing the stratified split and integrated the changes suggested by @lhoestq. It fixes #3452. @lhoestq Please review and let me know if any changes are required.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4322/reactions", "total_count": 5, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4322/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4322", "html_url": "https://github.com/huggingface/datasets/pull/4322", "diff_url": "https://github.com/huggingface/datasets/pull/4322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4322.patch", "merged_at": "2022-05-25T20:43:51" }
true
[ "> Nice thank you ! This will be super useful :)\r\n> \r\n> Could you also add some tests in test_arrow_dataset.py and add an example of usage in the `Example:` section of the `train_test_split` docstring ?\r\n\r\nI will try to do it, is there any documentation for adding test cases? I have never done it before.", "Thanks for the changes !\r\n\r\n> I will try to do it, is there any documentation for adding test cases? I have never done it before.\r\n\r\nYou can just add a function `test_train_test_split_startify` in `test_arrow_dataset.py`.\r\n\r\nIn this function you can define a dataset and make sure that `train_test_split` with the `stratify` argument works as expected.\r\n\r\nYou can do `pytest tests/test_arrow_dataset.py::test_train_test_split_startify` to run your test.\r\n\r\nFeel free to get some inspiration from other tests like `test_interleave_datasets` for example", "I have added tests for stratified train_test_split in `test_arrow_dataset.py` file inside `test_train_test_split_startify` function. I have also added example usage with `stratify` arg in `Example:` section of the `train_test_split` docstring.\r\nResults of tests:\r\n```\r\n(data) nandwalritik@hp:~/datasets$ pytest tests/test_arrow_dataset.py::test_train_test_split_startify -W ignore\r\n============================================================================ test session starts ============================================================================\r\nplatform linux -- Python 3.9.5, pytest-7.1.2, pluggy-1.0.0\r\nrootdir: /home/nandwalritik/datasets\r\nplugins: datadir-1.3.1, forked-1.4.0, xdist-2.5.0\r\ncollected 1 item \r\n\r\ntests/test_arrow_dataset.py . [100%]\r\n\r\n============================================================================= 1 passed in 0.12s =============================================================================\r\n\r\n```", "Thanks a lot !\r\n\r\n`utils/stratify.py` sounds good yes :)\r\n\r\nAlso feel free to merge `master` into your branch to fix the CI ;)", "Added all the changes as were suggested and rebased with `main`.", "_The documentation is not available anymore as the PR was closed or merged._", "Hi, I encounter an error when I try to specify the stratify_by_column. However, I have a columns which specific the label of the row as a string. But an error showed when I try to do it. \"ValueError: Stratifying by column is only supported for ClassLabel column, and column code is Value.\".", "Hi @Damon03 , you can change the type of your column to ClassLabel using\r\n```python\r\nds = ds.class_encode_column(column_name)\r\n```\r\nthen you'll be free to use `stratify` :)", "Thank you so much. It worked." ]
https://api.github.com/repos/huggingface/datasets/issues/4321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4321/comments
https://api.github.com/repos/huggingface/datasets/issues/4321/events
https://github.com/huggingface/datasets/pull/4321
1,233,273,351
PR_kwDODunzps43ryW7
4,321
Adding dataset enwik8
{ "login": "HallerPatrick", "id": 22773355, "node_id": "MDQ6VXNlcjIyNzczMzU1", "avatar_url": "https://avatars.githubusercontent.com/u/22773355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HallerPatrick", "html_url": "https://github.com/HallerPatrick", "followers_url": "https://api.github.com/users/HallerPatrick/followers", "following_url": "https://api.github.com/users/HallerPatrick/following{/other_user}", "gists_url": "https://api.github.com/users/HallerPatrick/gists{/gist_id}", "starred_url": "https://api.github.com/users/HallerPatrick/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HallerPatrick/subscriptions", "organizations_url": "https://api.github.com/users/HallerPatrick/orgs", "repos_url": "https://api.github.com/users/HallerPatrick/repos", "events_url": "https://api.github.com/users/HallerPatrick/events{/privacy}", "received_events_url": "https://api.github.com/users/HallerPatrick/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-11T23:25:02"
"2022-06-01T14:27:30"
"2022-06-01T14:04:06"
CONTRIBUTOR
null
Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4321/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4321/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4321", "html_url": "https://github.com/huggingface/datasets/pull/4321", "diff_url": "https://github.com/huggingface/datasets/pull/4321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4321.patch", "merged_at": "2022-06-01T14:04:06" }
true
[ "@lhoestq Thank you for the great feedback! Looks like all tests are passing now :)", "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4320/comments
https://api.github.com/repos/huggingface/datasets/issues/4320/events
https://github.com/huggingface/datasets/issues/4320
1,233,208,864
I_kwDODunzps5JgUYg
4,320
Multi-news dataset loader attempts to strip wrong character from beginning of summaries
{ "login": "JohnGiorgi", "id": 8917831, "node_id": "MDQ6VXNlcjg5MTc4MzE=", "avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JohnGiorgi", "html_url": "https://github.com/JohnGiorgi", "followers_url": "https://api.github.com/users/JohnGiorgi/followers", "following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}", "gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}", "starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions", "organizations_url": "https://api.github.com/users/JohnGiorgi/orgs", "repos_url": "https://api.github.com/users/JohnGiorgi/repos", "events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}", "received_events_url": "https://api.github.com/users/JohnGiorgi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
2
"2022-05-11T21:36:41"
"2022-05-16T13:52:10"
"2022-05-16T13:52:10"
CONTRIBUTOR
null
## Describe the bug The `multi_news.py` data loader has [a line which attempts to strip `"- "` from the beginning of summaries](https://github.com/huggingface/datasets/blob/aa743886221d76afb409d263e1b136e7a71fe2b4/datasets/multi_news/multi_news.py#L97). The actual character in the multi-news dataset, however, is `"– "`, which is different, e.g. `"– " != "- "`. I would have just opened a PR to fix the mistake, but I am wondering what the motivation for stripping this character is? AFAICT most approaches just leave it in, e.g. the current SOTA on this dataset, [PRIMERA](https://huggingface.co/allenai/PRIMERA-multinews) (you can see it in the generated summaries of the model in their [example notebook](https://github.com/allenai/PRIMER/blob/main/Evaluation_Example.ipynb)). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4320/timeline
null
completed
null
null
false
[ "Hi ! Thanks for reporting :)\r\n\r\nThis dataset was simply converted from [tensorflow datasets](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/summarization/multi_news.py)\r\n\r\nI think we can just remove the `.strip(\"- \")` and keep this character", "Cool! I made a PR." ]
https://api.github.com/repos/huggingface/datasets/issues/4319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4319/comments
https://api.github.com/repos/huggingface/datasets/issues/4319/events
https://github.com/huggingface/datasets/pull/4319
1,232,982,023
PR_kwDODunzps43q0UY
4,319
Adding eval metadata for ade v2
{ "login": "sashavor", "id": 14205986, "node_id": "MDQ6VXNlcjE0MjA1OTg2", "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sashavor", "html_url": "https://github.com/sashavor", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "organizations_url": "https://api.github.com/users/sashavor/orgs", "repos_url": "https://api.github.com/users/sashavor/repos", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "received_events_url": "https://api.github.com/users/sashavor/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-11T17:36:20"
"2022-05-12T13:29:51"
"2022-05-12T13:22:19"
NONE
null
Adding metadata to allow evaluation
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4319/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4319", "html_url": "https://github.com/huggingface/datasets/pull/4319", "diff_url": "https://github.com/huggingface/datasets/pull/4319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4319.patch", "merged_at": "2022-05-12T13:22:19" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4318/comments
https://api.github.com/repos/huggingface/datasets/issues/4318/events
https://github.com/huggingface/datasets/pull/4318
1,232,905,488
PR_kwDODunzps43qkkQ
4,318
Don't check f.loc in _get_extraction_protocol_with_magic_number
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-11T16:27:09"
"2022-05-11T16:57:02"
"2022-05-11T16:46:31"
MEMBER
null
`f.loc` doesn't always exist for file-like objects in Python. I removed it since it was not necessary anyway (we always seek the file back to 0 after reading the magic number). Fix https://github.com/huggingface/datasets/issues/4310
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4318/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4318", "html_url": "https://github.com/huggingface/datasets/pull/4318", "diff_url": "https://github.com/huggingface/datasets/pull/4318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4318.patch", "merged_at": "2022-05-11T16:46:31" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4317/comments
https://api.github.com/repos/huggingface/datasets/issues/4317/events
https://github.com/huggingface/datasets/pull/4317
1,232,737,401
PR_kwDODunzps43qBzh
4,317
Fix cnn_dailymail (dm stories were ignored)
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-11T14:25:25"
"2022-05-11T16:00:09"
"2022-05-11T15:52:37"
MEMBER
null
https://github.com/huggingface/datasets/pull/4188 introduced a bug in `datasets` 2.2.0: DailyMail stories are ignored when generating the dataset. I fixed that, and removed the Google Drive link (it has annoying quota limitation issues). We can do a patch release after this is merged.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4317/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4317/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4317", "html_url": "https://github.com/huggingface/datasets/pull/4317", "diff_url": "https://github.com/huggingface/datasets/pull/4317.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4317.patch", "merged_at": "2022-05-11T15:52:37" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4316/comments
https://api.github.com/repos/huggingface/datasets/issues/4316/events
https://github.com/huggingface/datasets/pull/4316
1,232,681,207
PR_kwDODunzps43p1Za
4,316
Support passing config_kwargs to CLI run_beam
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-11T13:53:37"
"2022-05-11T14:36:49"
"2022-05-11T14:28:31"
MEMBER
null
This PR supports passing `config_kwargs` to the CLI run_beam command, so that, for example, for the "wikipedia" dataset we can pass: ``` --date 20220501 --language ca ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4316/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4316/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4316", "html_url": "https://github.com/huggingface/datasets/pull/4316", "diff_url": "https://github.com/huggingface/datasets/pull/4316.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4316.patch", "merged_at": "2022-05-11T14:28:31" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4315/comments
https://api.github.com/repos/huggingface/datasets/issues/4315/events
https://github.com/huggingface/datasets/pull/4315
1,232,549,330
PR_kwDODunzps43pZ6p
4,315
Fix CLI run_beam namespace
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-11T12:21:00"
"2022-05-11T13:13:00"
"2022-05-11T13:05:08"
MEMBER
null
Currently, the `run_beam` CLI command raises a TypeError: ``` TypeError: __init__() got an unexpected keyword argument 'namespace' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4315/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4315", "html_url": "https://github.com/huggingface/datasets/pull/4315", "diff_url": "https://github.com/huggingface/datasets/pull/4315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4315.patch", "merged_at": "2022-05-11T13:05:08" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4314/comments
https://api.github.com/repos/huggingface/datasets/issues/4314/events
https://github.com/huggingface/datasets/pull/4314
1,232,326,726
PR_kwDODunzps43oqXD
4,314
Catch pull error when mirroring
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-11T09:38:35"
"2022-05-11T12:54:07"
"2022-05-11T12:46:42"
MEMBER
null
Catch pull errors when mirroring so that the script continues to update the other datasets. Any error is still printed at the end of the job. In that case the job also fails and asks the user to manually update the datasets that failed.
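A minimal sketch of the collect-and-report pattern described above; `dataset_names` and `mirror_dataset` are hypothetical stand-ins for the actual mirroring script's internals:

```python
# Hedged sketch: keep mirroring after a failure, then report and fail at the end.
errors = []
for name in dataset_names:  # hypothetical list of datasets to mirror
    try:
        mirror_dataset(name)  # hypothetical helper that pulls and pushes one repo
    except Exception as err:
        errors.append((name, err))

if errors:
    for name, err in errors:
        print(f"Failed to mirror {name}: {err}")
    raise RuntimeError("Some datasets failed to mirror; please update them manually.")
```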
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4314/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4314", "html_url": "https://github.com/huggingface/datasets/pull/4314", "diff_url": "https://github.com/huggingface/datasets/pull/4314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4314.patch", "merged_at": "2022-05-11T12:46:42" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4313/comments
https://api.github.com/repos/huggingface/datasets/issues/4313/events
https://github.com/huggingface/datasets/pull/4313
1,231,764,100
PR_kwDODunzps43m4qB
4,313
Add API code examples for Builder classes
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
1
"2022-05-10T22:22:32"
"2022-05-12T17:02:43"
"2022-05-12T12:36:57"
MEMBER
null
This PR adds API code examples for the Builder classes.
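For context, the added examples look roughly like the following sketch; the dataset name is illustrative:

```python
from datasets import load_dataset_builder

# Inspect a dataset's metadata via its builder without downloading the data;
# "rotten_tomatoes" is just an example dataset name.
builder = load_dataset_builder("rotten_tomatoes")
print(builder.info.description)
print(builder.info.features)
```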
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4313/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4313", "html_url": "https://github.com/huggingface/datasets/pull/4313", "diff_url": "https://github.com/huggingface/datasets/pull/4313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4313.patch", "merged_at": "2022-05-12T12:36:57" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4312/comments
https://api.github.com/repos/huggingface/datasets/issues/4312/events
https://github.com/huggingface/datasets/pull/4312
1,231,662,775
PR_kwDODunzps43mlug
4,312
added TR-News dataset
{ "login": "batubayk", "id": 25901065, "node_id": "MDQ6VXNlcjI1OTAxMDY1", "avatar_url": "https://avatars.githubusercontent.com/u/25901065?v=4", "gravatar_id": "", "url": "https://api.github.com/users/batubayk", "html_url": "https://github.com/batubayk", "followers_url": "https://api.github.com/users/batubayk/followers", "following_url": "https://api.github.com/users/batubayk/following{/other_user}", "gists_url": "https://api.github.com/users/batubayk/gists{/gist_id}", "starred_url": "https://api.github.com/users/batubayk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/batubayk/subscriptions", "organizations_url": "https://api.github.com/users/batubayk/orgs", "repos_url": "https://api.github.com/users/batubayk/repos", "events_url": "https://api.github.com/users/batubayk/events{/privacy}", "received_events_url": "https://api.github.com/users/batubayk/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
null
[]
null
1
"2022-05-10T20:33:00"
"2022-10-03T09:36:45"
"2022-10-03T09:36:45"
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4312/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4312", "html_url": "https://github.com/huggingface/datasets/pull/4312", "diff_url": "https://github.com/huggingface/datasets/pull/4312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4312.patch", "merged_at": null }
true
[ "Thanks for your contribution, @batubayk.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nI would suggest you create this dataset there. Please, feel free to tell us if you need some help." ]
https://api.github.com/repos/huggingface/datasets/issues/4311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4311/comments
https://api.github.com/repos/huggingface/datasets/issues/4311/events
https://github.com/huggingface/datasets/pull/4311
1,231,369,438
PR_kwDODunzps43ln8-
4,311
[Imagefolder] Docs + Don't infer labels from file names when there are metadata + Error messages when metadata and images aren't linked correctly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-10T15:52:15"
"2022-05-10T17:19:42"
"2022-05-10T17:11:47"
MEMBER
null
I updated the `docs/source/image_process.mdx` documentation and added an example for image captioning and object detection using `ImageFolder`. While doing so I also improved a few aspects: - we don't need to infer labels from file names when metadata are provided - the labels can simply live in the metadata if necessary - raise informative error messages when metadata and images aren't linked correctly: - when an image is missing a metadata entry - when a metadata entry is missing an image I added some tests for these changes as well, as shown in the sketch below. cc @mariosasko
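For reference, a hedged sketch of the layout these checks validate; the directory and file names are illustrative, while the `metadata.jsonl` / `file_name` convention is the one `ImageFolder` uses:

```python
from datasets import load_dataset

# Illustrative layout (names are made up):
#   folder/train/metadata.jsonl   <- one JSON line per image, keyed by "file_name"
#   folder/train/0001.png
#   folder/train/0002.png
# with metadata.jsonl lines such as:
#   {"file_name": "0001.png", "text": "a caption for the first image"}
# After this PR, an image without a metadata entry (or a metadata entry
# without an image) raises an informative error instead of failing silently.
dataset = load_dataset("imagefolder", data_dir="folder")
```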
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4311/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4311", "html_url": "https://github.com/huggingface/datasets/pull/4311", "diff_url": "https://github.com/huggingface/datasets/pull/4311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4311.patch", "merged_at": "2022-05-10T17:11:47" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Merging this one since mario is off, I took care of adding some tests to make sure everything is fine. Will do the release after it" ]
https://api.github.com/repos/huggingface/datasets/issues/4310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4310/comments
https://api.github.com/repos/huggingface/datasets/issues/4310/events
https://github.com/huggingface/datasets/issues/4310
1,231,319,815
I_kwDODunzps5JZHMH
4,310
Loading dataset with streaming: '_io.BufferedReader' object has no attribute 'loc'
{ "login": "milmin", "id": 72745467, "node_id": "MDQ6VXNlcjcyNzQ1NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/72745467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/milmin", "html_url": "https://github.com/milmin", "followers_url": "https://api.github.com/users/milmin/followers", "following_url": "https://api.github.com/users/milmin/following{/other_user}", "gists_url": "https://api.github.com/users/milmin/gists{/gist_id}", "starred_url": "https://api.github.com/users/milmin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/milmin/subscriptions", "organizations_url": "https://api.github.com/users/milmin/orgs", "repos_url": "https://api.github.com/users/milmin/repos", "events_url": "https://api.github.com/users/milmin/events{/privacy}", "received_events_url": "https://api.github.com/users/milmin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
0
"2022-05-10T15:12:53"
"2022-05-11T16:46:31"
"2022-05-11T16:46:31"
NONE
null
## Describe the bug Loading a datasets with `load_dataset` and `streaming=True` returns `AttributeError: '_io.BufferedReader' object has no attribute 'loc'`. Notice that loading with `streaming=False` works fine. In the following steps we load parquet files but the same happens with pickle files. The problem seems to come from `fsspec` lib, I put in the environment info also `s3fs` and `fsspec` versions since I'm loading from an s3 bucket. ## Steps to reproduce the bug ```python from datasets import load_dataset # path is the path to parquet files data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} dataset = load_dataset("parquet", data_files=data_files, streaming=True) ``` ## Expected results A dataset object `datasets.dataset_dict.DatasetDict` ## Actual results ``` AttributeError Traceback (most recent call last) <command-562086> in <module> 11 12 data_files = {"train": path + "meta_train.parquet.gzip", "test": path + "meta_test.parquet.gzip"} ---> 13 dataset = load_dataset("parquet", data_files=data_files, streaming=True) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1679 if streaming: 1680 extend_dataset_builder_for_streaming(builder_instance, use_auth_token=use_auth_token) -> 1681 return builder_instance.as_streaming_dataset( 1682 split=split, 1683 use_auth_token=use_auth_token, /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/builder.py in as_streaming_dataset(self, split, base_path, use_auth_token) 904 ) 905 self._check_manual_download(dl_manager) --> 906 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 907 # By default, return all splits 908 if split is None: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py in _split_generators(self, dl_manager) 30 if not self.config.data_files: 31 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") ---> 32 data_files = dl_manager.download_and_extract(self.config.data_files) 33 if isinstance(data_files, (str, list, tuple)): 34 files = data_files /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in download_and_extract(self, url_or_urls) 798 799 def download_and_extract(self, url_or_urls): --> 800 return self.extract(self.download(url_or_urls)) 801 802 def iter_archive(self, urlpath_or_buf: Union[str, io.BufferedReader]) -> Iterable[Tuple]: /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in extract(self, path_or_paths) 776 777 def extract(self, path_or_paths): --> 778 urlpaths = map_nested(self._extract, path_or_paths, map_tuple=True) 779 return urlpaths 780 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 312 num_proc = 1 313 if num_proc <= 1 or 
len(iterable) <= num_proc: --> 314 mapped = [ 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 313 if num_proc <= 1 or len(iterable) <= num_proc: 314 mapped = [ --> 315 _single_map_nested((function, obj, types, None, True, None)) 316 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc) 317 ] /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0) 267 return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} 268 else: --> 269 mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] 270 if isinstance(data_struct, list): 271 return mapped /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args) 249 # Singleton first to spare some computation 250 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 251 return function(data_struct) 252 253 # Reduce logging to keep things readable in multiprocessing with tqdm /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _extract(self, urlpath) 781 def _extract(self, urlpath: str) -> str: 782 urlpath = str(urlpath) --> 783 protocol = _get_extraction_protocol(urlpath, use_auth_token=self.download_config.use_auth_token) 784 if protocol is None: 785 # no extraction /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol(urlpath, use_auth_token) 371 urlpath, kwargs = urlpath, {} 372 with fsspec.open(urlpath, **kwargs) as f: --> 373 return _get_extraction_protocol_with_magic_number(f) 374 375 /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/datasets/utils/streaming_download_manager.py in _get_extraction_protocol_with_magic_number(f) 335 def _get_extraction_protocol_with_magic_number(f) -> Optional[str]: 336 """read the magic number from a file-like object and return the compression protocol""" --> 337 prev_loc = f.loc 338 magic_number = f.read(MAGIC_NUMBER_MAX_LENGTH) 339 f.seek(prev_loc) /local_disk0/.ephemeral_nfs/envs/pythonEnv-a7e72260-221c-472b-85f4-bec801aee66d/lib/python3.8/site-packages/fsspec/implementations/local.py in __getattr__(self, item) 337 338 def __getattr__(self, item): --> 339 return getattr(self.f, item) 340 341 def __enter__(self): AttributeError: '_io.BufferedReader' object has no attribute 'loc' ``` ## Environment info - `datasets` version: 2.1.0 - Platform: Linux-5.4.0-1071-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 - `fsspec` version: 2021.08.1 - `s3fs` 
version: 2021.08.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4310/timeline
null
completed
null
null
false
[]
https://api.github.com/repos/huggingface/datasets/issues/4309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4309/comments
https://api.github.com/repos/huggingface/datasets/issues/4309/events
https://github.com/huggingface/datasets/pull/4309
1,231,232,935
PR_kwDODunzps43lKpm
4,309
[WIP] Add TEDLIUM dataset
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
null
[]
null
11
"2022-05-10T14:12:47"
"2022-06-17T12:54:40"
"2022-06-17T11:44:01"
CONTRIBUTOR
null
Adds the TED-LIUM dataset https://www.tensorflow.org/datasets/catalog/tedlium#tedliumrelease3 TODO: - [x] Port `tedlium.py` from TF datasets using the `convert_dataset.sh` script - [x] Make `load_dataset` work - [ ] ~~Run `datasets-cli` command to generate `dataset_infos.json`~~ - [ ] ~~Create dummy data for continuous testing~~ - [ ] ~~Dummy data tests~~ - [ ] ~~Real data tests~~ - [ ] Create the metadata JSON - [ ] Close PR and add directly to the Hub under the LIUM org
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4309/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4309", "html_url": "https://github.com/huggingface/datasets/pull/4309", "diff_url": "https://github.com/huggingface/datasets/pull/4309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4309.patch", "merged_at": null }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\n```\r\nDownloading and preparing dataset tedlium/release1 to /home/sanchitgandhi/cache/tedlium/release1/1.0.1/5a9fcb97b4b52d5a1c9dc7bde4b1d5994cd89c4a3425ea36c789bf6096fee4f0...\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/load.py\", line 1703, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/sanchit_huggingface_co/datasets/src/datasets/builder.py\", line 1240, in _download_and_prepare\r\n raise MissingBeamOptions(\r\ndatasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/\r\nIf you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). \r\nExample of usage: \r\n `load_dataset('tedlium', 'release1', beam_runner='DirectRunner')`\r\n```\r\nSpecifying the `beam_runner='DirectRunner'` works:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache', beam_runner='DirectRunner')\r\n```", "Extra Python imports/Linux packages:\r\n```\r\npip install pydub\r\nsudo apt install ffmpeg\r\n```", "Script heavily inspired by the TF datasets script at: https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/tedlium.py\r\n\r\nThe TF datasets script uses the module AudioSegment from the package `pydub` (https://github.com/jiaaro/pydub), which is used to to open the audio files (stored in .sph format):\r\nhttps://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L167-L170\r\nThis package requires the pip install of `pydub` and the system installation of `ffmpeg`: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nThe TF datasets script also uses `_build_pcollection`:\r\nhttps://github.com/huggingface/datasets/blob/8afbbb6fe66b40d05574e2e72e65e974c72ae769/datasets/tedlium/tedlium.py#L200-L206\r\nHowever, I was advised against using `beam` logic. Thus, I have reverted to generating the examples file-by-file: https://github.com/huggingface/datasets/blob/61bf6123634bf6e7c7287cd6097909eb26118c58/datasets/tedlium/tedlium.py#L112-L138\r\n\r\nI am now able to generate examples by running the `load_dataset` command:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```\r\n\r\nHere, generating examples is **extremely** slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). 
Is there a way of paralleling this to make it faster?", "> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\nIt's ok, windows users will have have a bad time but I'm not sure we can do much about it.\r\n\r\n> Here, generating examples is extremely slow: it takes ~1 second per example, so ~60k seconds for the train set (~16 hours). Is there a way of paralleling this to make it faster?\r\n\r\nNot at the moment. For such cases we advise hosting the dataset ourselves in a processed format. The license doesn't allow this since the license is \"NoDerivatives\". Currently the only way to parallelize it is by keeping is as a beam dataset and let users pay Google Dataflow to process it (or use spark or whatever).", "Thanks for your super speedy reply @lhoestq!\r\n\r\nI’ve uploaded the script and README.md to the org here: https://huggingface.co/datasets/LIUM/tedlium\r\nIs any modification of the script required to be able to use it from the Hub? When I run:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ntedlium = load_dataset(\"LIUM/tedlium\", \"release1\") # for Release 1\r\n```\r\nI get the following error:\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\nInput In [2], in <cell line: 1>()\r\n----> 1 load_dataset(\"LIUM/tedlium\", \"release1\")\r\n\r\nFile ~/datasets/src/datasets/load.py:1676, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)\r\n 1673 ignore_verifications = ignore_verifications or save_infos\r\n 1675 # Create a dataset builder\r\n-> 1676 builder_instance = load_dataset_builder(\r\n 1677 path=path,\r\n 1678 name=name,\r\n 1679 data_dir=data_dir,\r\n 1680 data_files=data_files,\r\n 1681 cache_dir=cache_dir,\r\n 1682 features=features,\r\n 1683 download_config=download_config,\r\n 1684 download_mode=download_mode,\r\n 1685 revision=revision,\r\n 1686 use_auth_token=use_auth_token,\r\n 1687 **config_kwargs,\r\n 1688 )\r\n 1690 # Return iterable dataset in case of streaming\r\n 1691 if streaming:\r\n\r\nFile ~/datasets/src/datasets/load.py:1502, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)\r\n 1500 download_config = download_config.copy() if download_config else DownloadConfig()\r\n 1501 download_config.use_auth_token = use_auth_token\r\n-> 1502 dataset_module = dataset_module_factory(\r\n 1503 path,\r\n 1504 revision=revision,\r\n 1505 download_config=download_config,\r\n 1506 download_mode=download_mode,\r\n 1507 data_dir=data_dir,\r\n 1508 data_files=data_files,\r\n 1509 )\r\n 1511 # Get dataset builder class from the processing script\r\n 1512 builder_cls = import_main_class(dataset_module.module_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:1254, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1249 if isinstance(e1, FileNotFoundError):\r\n 1250 raise FileNotFoundError(\r\n 1251 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. 
\"\r\n 1252 f\"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}\"\r\n 1253 ) from None\r\n-> 1254 raise e1 from None\r\n 1255 else:\r\n 1256 raise FileNotFoundError(\r\n 1257 f\"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory.\"\r\n 1258 )\r\n\r\nFile ~/datasets/src/datasets/load.py:1227, in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs)\r\n 1225 raise e\r\n 1226 if filename in [sibling.rfilename for sibling in dataset_info.siblings]:\r\n-> 1227 return HubDatasetModuleFactoryWithScript(\r\n 1228 path,\r\n 1229 revision=revision,\r\n 1230 download_config=download_config,\r\n 1231 download_mode=download_mode,\r\n 1232 dynamic_modules_path=dynamic_modules_path,\r\n 1233 ).get_module()\r\n 1234 else:\r\n 1235 return HubDatasetModuleFactoryWithoutScript(\r\n 1236 path,\r\n 1237 revision=revision,\r\n (...)\r\n 1241 download_mode=download_mode,\r\n 1242 ).get_module()\r\n\r\nFile ~/datasets/src/datasets/load.py:940, in HubDatasetModuleFactoryWithScript.get_module(self)\r\n 938 def get_module(self) -> DatasetModule:\r\n 939 # get script and other files\r\n--> 940 local_path = self.download_loading_script()\r\n 941 dataset_infos_path = self.download_dataset_infos_file()\r\n 942 imports = get_imports(local_path)\r\n\r\nFile ~/datasets/src/datasets/load.py:918, in HubDatasetModuleFactoryWithScript.download_loading_script(self)\r\n 917 def download_loading_script(self) -> str:\r\n--> 918 file_path = hf_hub_url(path=self.name, name=self.name.split(\"/\")[1] + \".py\", revision=self.revision)\r\n 919 download_config = self.download_config.copy()\r\n 920 if download_config.download_desc is None:\r\n\r\nTypeError: hf_hub_url() got an unexpected keyword argument 'name'\r\n```\r\n\r\nNote that I am able to load the dataset from the `datasets` repo with the following lines of code:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('./datasets/tedlium', 'release1', cache_dir='/home/sanchitgandhi/cache')\r\n```", "What version of `datasets` do you have ?\r\nUpdating `datasets` should fix the error ;)\r\n", "> This package requires the pip install of pydub and the system installation of ffmpeg: https://github.com/jiaaro/pydub#installation\r\nIs it ok to use these packages? Or do we tend to avoid introducing additional dependencies?\r\n\r\n`soundfile`, which is a required audio dependency, should also work with `.sph` files, no?", "> `soundfile`, which is a required audio dependency, should also work with `.sph` files, no?\r\n\r\nAwesome, thanks for the pointer @mariosasko! Switched `pydub` to `soundfile`, and having specifying the `dtype` argument in `soundfile.read` as `np.int16`, the arrays match with those from `pydub` ✅\r\n\r\nI also did some heavy optimising of the script with the processing of the `.stm` and `.sph` files - it now runs 2000x faster than before, so there probably isn't a need to upload the data to the Hub @lhoestq. The total processing time is just ~2mins now 🚀\r\n", "TEDLIUM completed and uploaded to the HF Hub: https://huggingface.co/datasets/LIUM/tedlium", "Awesome !" ]
https://api.github.com/repos/huggingface/datasets/issues/4308
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4308/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4308/comments
https://api.github.com/repos/huggingface/datasets/issues/4308/events
https://github.com/huggingface/datasets/pull/4308
1,231,217,783
PR_kwDODunzps43lHdP
4,308
Remove unused multiprocessing args from test CLI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-10T14:02:15"
"2022-05-11T12:58:25"
"2022-05-11T12:50:43"
MEMBER
null
Multiprocessing is not used in the test CLI.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4308/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4308/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4308", "html_url": "https://github.com/huggingface/datasets/pull/4308", "diff_url": "https://github.com/huggingface/datasets/pull/4308.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4308.patch", "merged_at": "2022-05-11T12:50:42" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4307
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4307/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4307/comments
https://api.github.com/repos/huggingface/datasets/issues/4307/events
https://github.com/huggingface/datasets/pull/4307
1,231,175,639
PR_kwDODunzps43k-Wo
4,307
Add packaged builder configs to the documentation
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-10T13:34:19"
"2022-05-10T14:03:50"
"2022-05-10T13:55:54"
MEMBER
null
Adding the packaged builders' configurations to the docs reference is useful: it shows the full list of parameters one can use when loading data in formats such as CSV, JSON, etc.
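A hedged usage sketch of what those documented parameters enable; the file path and values are illustrative:

```python
from datasets import load_dataset

# `sep` and `column_names` are parameters of the packaged CSV builder
# configuration; they are forwarded to the underlying CSV reader.
dataset = load_dataset(
    "csv",
    data_files={"train": "my_file.csv"},
    sep=";",
    column_names=["text", "label"],
)
```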
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4307/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4307/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4307", "html_url": "https://github.com/huggingface/datasets/pull/4307", "diff_url": "https://github.com/huggingface/datasets/pull/4307.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4307.patch", "merged_at": "2022-05-10T13:55:54" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4306/comments
https://api.github.com/repos/huggingface/datasets/issues/4306/events
https://github.com/huggingface/datasets/issues/4306
1,231,137,204
I_kwDODunzps5JYam0
4,306
`load_dataset` does not work with certain filenames.
{ "login": "whatever60", "id": 57242693, "node_id": "MDQ6VXNlcjU3MjQyNjkz", "avatar_url": "https://avatars.githubusercontent.com/u/57242693?v=4", "gravatar_id": "", "url": "https://api.github.com/users/whatever60", "html_url": "https://github.com/whatever60", "followers_url": "https://api.github.com/users/whatever60/followers", "following_url": "https://api.github.com/users/whatever60/following{/other_user}", "gists_url": "https://api.github.com/users/whatever60/gists{/gist_id}", "starred_url": "https://api.github.com/users/whatever60/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/whatever60/subscriptions", "organizations_url": "https://api.github.com/users/whatever60/orgs", "repos_url": "https://api.github.com/users/whatever60/repos", "events_url": "https://api.github.com/users/whatever60/events{/privacy}", "received_events_url": "https://api.github.com/users/whatever60/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
1
"2022-05-10T13:14:04"
"2022-05-10T18:58:36"
"2022-05-10T18:58:09"
NONE
null
## Describe the bug This is a weird bug that took me some time to find out. I have a JSON dataset that I want to load with `load_dataset` like this: ``` data_files = dict(train="train.json.zip", val="val.json.zip") dataset = load_dataset("json", data_files=data_files, field="data") ``` ## Expected results No error. ## Actual results The val file is loaded as expected, but the train file throws JSON decoding error: ``` ╭──────────────────────────── Traceback (most recent call last) ────────────────────────────╮ │ <ipython-input-74-97947e92c100>:5 in <module> │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/load.py:1687 in │ │ load_dataset │ │ │ │ 1684 │ try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES │ │ 1685 │ │ │ 1686 │ # Download and prepare data │ │ ❱ 1687 │ builder_instance.download_and_prepare( │ │ 1688 │ │ download_config=download_config, │ │ 1689 │ │ download_mode=download_mode, │ │ 1690 │ │ ignore_verifications=ignore_verifications, │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:605 in │ │ download_and_prepare │ │ │ │ 602 │ │ │ │ │ │ except ConnectionError: │ │ 603 │ │ │ │ │ │ │ logger.warning("HF google storage unreachable. Downloa │ │ 604 │ │ │ │ │ if not downloaded_from_gcs: │ │ ❱ 605 │ │ │ │ │ │ self._download_and_prepare( │ │ 606 │ │ │ │ │ │ │ dl_manager=dl_manager, verify_infos=verify_infos, **do │ │ 607 │ │ │ │ │ │ ) │ │ 608 │ │ │ │ │ # Sync info │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:694 in │ │ _download_and_prepare │ │ │ │ 691 │ │ │ │ │ 692 │ │ │ try: │ │ 693 │ │ │ │ # Prepare split will record examples associated to the split │ │ ❱ 694 │ │ │ │ self._prepare_split(split_generator, **prepare_split_kwargs) │ │ 695 │ │ │ except OSError as e: │ │ 696 │ │ │ │ raise OSError( │ │ 697 │ │ │ │ │ "Cannot find data file. " │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/builder.py:1151 in │ │ _prepare_split │ │ │ │ 1148 │ │ │ │ 1149 │ │ generator = self._generate_tables(**split_generator.gen_kwargs) │ │ 1150 │ │ with ArrowWriter(features=self.info.features, path=fpath) as writer: │ │ ❱ 1151 │ │ │ for key, table in logging.tqdm( │ │ 1152 │ │ │ │ generator, unit=" tables", leave=False, disable=True # not loggin │ │ 1153 │ │ │ ): │ │ 1154 │ │ │ │ writer.write_table(table) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/notebook.py:257 in │ │ __iter__ │ │ │ │ 254 │ │ │ 255 │ def __iter__(self): │ │ 256 │ │ try: │ │ ❱ 257 │ │ │ for obj in super(tqdm_notebook, self).__iter__(): │ │ 258 │ │ │ │ # return super(tqdm...) will not catch exception │ │ 259 │ │ │ │ yield obj │ │ 260 │ │ # NB: except ... [ as ...] 
breaks IPython async KeyboardInterrupt │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/tqdm/std.py:1183 in │ │ __iter__ │ │ │ │ 1180 │ │ # If the bar is disabled, then just walk the iterable │ │ 1181 │ │ # (note: keep this check outside the loop for performance) │ │ 1182 │ │ if self.disable: │ │ ❱ 1183 │ │ │ for obj in iterable: │ │ 1184 │ │ │ │ yield obj │ │ 1185 │ │ │ return │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/site-packages/datasets/packaged_modules/j │ │ son/json.py:90 in _generate_tables │ │ │ │ 87 │ │ │ # If the file is one json object and if we need to look at the list of │ │ 88 │ │ │ if self.config.field is not None: │ │ 89 │ │ │ │ with open(file, encoding="utf-8") as f: │ │ ❱ 90 │ │ │ │ │ dataset = json.load(f) │ │ 91 │ │ │ │ │ │ 92 │ │ │ │ # We keep only the field we are interested in │ │ 93 │ │ │ │ dataset = dataset[self.config.field] │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:293 in load │ │ │ │ 290 │ To use a custom ``JSONDecoder`` subclass, specify it with the ``cls`` │ │ 291 │ kwarg; otherwise ``JSONDecoder`` is used. │ │ 292 │ """ │ │ ❱ 293 │ return loads(fp.read(), │ │ 294 │ │ cls=cls, object_hook=object_hook, │ │ 295 │ │ parse_float=parse_float, parse_int=parse_int, │ │ 296 │ │ parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/__init__.py:357 in loads │ │ │ │ 354 │ if (cls is None and object_hook is None and │ │ 355 │ │ │ parse_int is None and parse_float is None and │ │ 356 │ │ │ parse_constant is None and object_pairs_hook is None and not kw): │ │ ❱ 357 │ │ return _default_decoder.decode(s) │ │ 358 │ if cls is None: │ │ 359 │ │ cls = JSONDecoder │ │ 360 │ if object_hook is not None: │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:337 in decode │ │ │ │ 334 │ │ containing a JSON document). │ │ 335 │ │ │ │ 336 │ │ """ │ │ ❱ 337 │ │ obj, end = self.raw_decode(s, idx=_w(s, 0).end()) │ │ 338 │ │ end = _w(s, end).end() │ │ 339 │ │ if end != len(s): │ │ 340 │ │ │ raise JSONDecodeError("Extra data", s, end) │ │ │ │ /home/tiankang/software/anaconda3/lib/python3.8/json/decoder.py:353 in raw_decode │ │ │ │ 350 │ │ │ │ 351 │ │ """ │ │ 352 │ │ try: │ │ ❱ 353 │ │ │ obj, end = self.scan_once(s, idx) │ │ 354 │ │ except StopIteration as err: │ │ 355 │ │ │ raise JSONDecodeError("Expecting value", s, err.value) from None │ │ 356 │ │ return obj, end │ ╰───────────────────────────────────────────────────────────────────────────────────────────╯ JSONDecodeError: Unterminated string starting at: line 85 column 20 (char 60051) ``` However, when I rename the `train.json.zip` to other names (like `training.json.zip`, or even to `train.json`), everything works fine; when I unzip the file to `train.json`, it works as well. ## Environment info ``` - `datasets` version: 2.1.0 - Platform: Linux-4.4.0-131-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4306/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4306/timeline
null
completed
null
null
false
[ "Never mind. It is because of the caching of datasets..." ]
https://api.github.com/repos/huggingface/datasets/issues/4305
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4305/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4305/comments
https://api.github.com/repos/huggingface/datasets/issues/4305/events
https://github.com/huggingface/datasets/pull/4305
1,231,099,934
PR_kwDODunzps43kt4P
4,305
Fixes FrugalScore
{ "login": "moussaKam", "id": 28675016, "node_id": "MDQ6VXNlcjI4Njc1MDE2", "avatar_url": "https://avatars.githubusercontent.com/u/28675016?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moussaKam", "html_url": "https://github.com/moussaKam", "followers_url": "https://api.github.com/users/moussaKam/followers", "following_url": "https://api.github.com/users/moussaKam/following{/other_user}", "gists_url": "https://api.github.com/users/moussaKam/gists{/gist_id}", "starred_url": "https://api.github.com/users/moussaKam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/moussaKam/subscriptions", "organizations_url": "https://api.github.com/users/moussaKam/orgs", "repos_url": "https://api.github.com/users/moussaKam/repos", "events_url": "https://api.github.com/users/moussaKam/events{/privacy}", "received_events_url": "https://api.github.com/users/moussaKam/received_events", "type": "User", "site_admin": false }
[ { "id": 4190228726, "node_id": "LA_kwDODunzps75wdD2", "url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate", "name": "transfer-to-evaluate", "color": "E3165C", "default": false, "description": "" } ]
open
false
null
[]
null
2
"2022-05-10T12:44:06"
"2022-09-22T16:42:06"
null
CONTRIBUTOR
null
There are two minor modifications in this PR: 1) `predictions` and `references` are swapped. FrugalScore is basically commutative, but tiny differences can occur when the references and predictions are swapped; I swapped them to reproduce the exact results reported in the paper. 2) I switched to the dynamic padding that was used during training; forcing the padding to `max_length` introduces errors for a reason I haven't identified. @lhoestq
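A hedged sketch of the dynamic-padding change; the checkpoint name is one of the public FrugalScore models and the sentence pairs are illustrative:

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

# Dynamic padding pads each batch to its longest member instead of a fixed
# `max_length`; the model name and inputs below are illustrative.
tokenizer = AutoTokenizer.from_pretrained("moussaKam/frugalscore_tiny_bert-base_bert-score")
collator = DataCollatorWithPadding(tokenizer=tokenizer)

predictions = ["hello there", "general kenobi"]
references = ["hello there!", "general kenobi."]
features = [tokenizer(p, r) for p, r in zip(predictions, references)]
batch = collator(features)  # tensors padded per batch, not to max_length
```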
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4305/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4305/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4305", "html_url": "https://github.com/huggingface/datasets/pull/4305", "diff_url": "https://github.com/huggingface/datasets/pull/4305.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4305.patch", "merged_at": null }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4305). All of your documentation changes will be reflected on that endpoint.", "> predictions and references are swapped. Basically Frugalscore is commutative, however some tiny differences can occur if we swap the references and the predictions. I decided to swap them just to obtain the exact results as reported in the paper.\r\n\r\nWhat is the order of magnitude of the difference ? Do you know what causes this ?\r\n\r\n> I switched to dynamic padding that was was used in the training, forcing the padding to max_length introduces errors for some reason that I ignore.\r\n\r\nWhat error ?" ]
https://api.github.com/repos/huggingface/datasets/issues/4304
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4304/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4304/comments
https://api.github.com/repos/huggingface/datasets/issues/4304/events
https://github.com/huggingface/datasets/issues/4304
1,231,047,051
I_kwDODunzps5JYEmL
4,304
Language code search does direct matches
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
1
"2022-05-10T11:59:16"
"2022-05-10T12:38:42"
null
CONTRIBUTOR
null
## Describe the bug Hi. Searching for BCP-47 tags that are just the language prefix (e.g. `sq` or `da`) excludes datasets that have added extra information in their language metadata (e.g. `sq-AL` or `da-bornholm`). The example codes given in the [tagging app](https://huggingface.co/spaces/huggingface/datasets-tagging) encourage adding such extended codes ("_expected format is BCP47 tags separated for ';' e.g. 'en-US;fr-FR'_"), but this would lead to those datasets being hidden in dataset search. ## Steps to reproduce the bug 1. Add a dataset using a variant tag (e.g. [`sq-AL`](https://huggingface.co/datasets?languages=languages:sq-AL)) 2. Look for datasets using the full code 3. Note that they're missing when just the language is searched for (e.g. [`sq`](https://huggingface.co/datasets?languages=languages:sq)) Some datasets are already affected by this - e.g. `AmazonScience/massive` is listed under `sq-AL` but not `sq`. One workaround is for dataset creators to add an additional root language tag to the dataset YAML metadata, but it's unclear how to communicate this. It might be possible to index the search on `languagecode.split('-')[0]`, but I wanted to float this issue before trying to write any code :) (a minimal sketch of that idea follows below) ## Expected results Datasets using longer BCP-47 tags also appear under searches for just the language code; e.g. Quebecois datasets (`fr-CA`) would come up when looking for French datasets with no region specification (`fr`), or US English (`en-US`) datasets would come up when searching for English datasets (`en`). ## Actual results The language codes seem to be matched by direct string comparison, excluding datasets with specific language tags from non-specific searches. ## Environment info (web app)
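A minimal sketch of the `languagecode.split('-')[0]` idea floated above: index each dataset under both its full tag and its bare language subtag.

```python
# Hedged sketch: derive the search keys for one BCP-47 tag.
def language_keys(tag: str) -> set:
    return {tag, tag.split("-")[0]}

assert language_keys("sq-AL") == {"sq-AL", "sq"}   # found under both "sq-AL" and "sq"
assert language_keys("fr-CA") == {"fr-CA", "fr"}   # Quebecois French also matches "fr"
```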
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4304/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4304/timeline
null
null
null
null
false
[ "Thanks for reporting ! I forwarded the issue to the front-end team :)\r\n\r\nWill keep you posted !\r\n\r\nI also changed the tagging app to suggest two letters code for now." ]
https://api.github.com/repos/huggingface/datasets/issues/4303
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4303/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4303/comments
https://api.github.com/repos/huggingface/datasets/issues/4303/events
https://github.com/huggingface/datasets/pull/4303
1,230,867,728
PR_kwDODunzps43j8cH
4,303
Fix: Add missing comma
{ "login": "mrm8488", "id": 3653789, "node_id": "MDQ6VXNlcjM2NTM3ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mrm8488", "html_url": "https://github.com/mrm8488", "followers_url": "https://api.github.com/users/mrm8488/followers", "following_url": "https://api.github.com/users/mrm8488/following{/other_user}", "gists_url": "https://api.github.com/users/mrm8488/gists{/gist_id}", "starred_url": "https://api.github.com/users/mrm8488/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mrm8488/subscriptions", "organizations_url": "https://api.github.com/users/mrm8488/orgs", "repos_url": "https://api.github.com/users/mrm8488/repos", "events_url": "https://api.github.com/users/mrm8488/events{/privacy}", "received_events_url": "https://api.github.com/users/mrm8488/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-10T09:21:38"
"2022-05-11T08:50:15"
"2022-05-11T08:50:14"
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4303/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4303/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4303", "html_url": "https://github.com/huggingface/datasets/pull/4303", "diff_url": "https://github.com/huggingface/datasets/pull/4303.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4303.patch", "merged_at": "2022-05-11T08:50:14" }
true
[ "The CI failure is unrelated to this PR and fixed on master, merging :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4302/comments
https://api.github.com/repos/huggingface/datasets/issues/4302/events
https://github.com/huggingface/datasets/pull/4302
1,230,651,117
PR_kwDODunzps43jPE5
4,302
Remove hacking license tags when mirroring datasets on the Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
9
"2022-05-10T05:52:46"
"2022-05-20T09:48:30"
"2022-05-20T09:40:20"
MEMBER
null
Currently, when mirroring datasets on the Hub, the license tags are hacked: the characters "." and "$" are stripped out. By contrast, this hacking is not applied to community datasets on the Hub. This generates multiple variants of the same tag on the Hub. I guess this hacking is no longer necessary: - it is not applied to community datasets - all canonical datasets are validated by maintainers before being merged: CI + maintainers make sure license tags are the right ones Fix #4298.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4302/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4302/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4302", "html_url": "https://github.com/huggingface/datasets/pull/4302", "diff_url": "https://github.com/huggingface/datasets/pull/4302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4302.patch", "merged_at": null }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "The Hub doesn't allow these characters in the YAML tags, and git push fails if you want to push a dataset card containing these characters.", "Ok, let me rename the bad config names :) I think I can also keep backward compatibility with a warning", "Almost done with it btw, will submit a PR that shows all the configuration name changes (from a bit more than 20 datasets)", "Please, let me know when the renaming of configs is done. If not enough bandwidth, I can take care of it...", "Will focus on this this afternoon ;)", "I realized when renaming all the configurations with dots in https://github.com/huggingface/datasets/pull/4365 that it's not ideal for certain cases. For example:\r\n- many configurations have a version like \"1.0.0\" in their names\r\n- to avoid breaking changes we need to replace dots with underscores in the user input and show a warning, which hurts the experience\r\n- our second most downloaded dataset at the moment is affected: `newsgroup`\r\n- if we disallow dots, then we'll never be able to make the [allenai/c4](https://huggingface.co/datasets/allenai/c4) work with its different configurations since they contain dots, and we can't rename them because they are the official download links\r\n\r\nI was thinking of other alternatives:\r\n1. just stop separating tags per config name completely, and have a single flat YAML for all configurations. Dataset search doesn't use this info anyway\r\n2. use another YAML structure to avoid having config names as keys, such as\r\n```yaml\r\nlanguages:\r\n- config: 20220301_en\r\n values:\r\n - en\r\n```\r\n\r\nI'm down for 1, to keep things simple", "@lhoestq I agree:\r\n- better not changing config names (so that we do not introduce any braking change)\r\n- therefore, we should not use them as keys\r\n\r\nIn relation with the proposed solutions, I have no strong opinion:\r\n- option 1 is simpler and aligns better with current usage on the Hub (configs are ignored)\r\n- however:\r\n - we will lose all the information per config we already have (for those datasets containing config keys; contributors made an effort to put that information per config)\r\n - and this information might be useful on the Hub in the future, in case we would like to enrich the search feature with more granularity; this is only applicable if this feature could eventually make sense\r\n\r\nSo, no strong opinion...", "Closing in favor of https://github.com/huggingface/datasets/pull/4367" ]
https://api.github.com/repos/huggingface/datasets/issues/4301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4301/comments
https://api.github.com/repos/huggingface/datasets/issues/4301/events
https://github.com/huggingface/datasets/pull/4301
1,230,401,256
PR_kwDODunzps43idlE
4,301
Add ImageNet-Sketch dataset
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2022-05-09T23:38:45"
"2022-05-23T18:14:14"
"2022-05-23T18:05:29"
CONTRIBUTOR
null
This PR adds the ImageNet-Sketch dataset and resolves #3953.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4301/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4301", "html_url": "https://github.com/huggingface/datasets/pull/4301", "diff_url": "https://github.com/huggingface/datasets/pull/4301.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4301.patch", "merged_at": "2022-05-23T18:05:29" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data.\r\n\r\nI think it's fine to upload the dataset as soon as we mention explicitly that the images may be subject to copyright." ]
https://api.github.com/repos/huggingface/datasets/issues/4300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4300/comments
https://api.github.com/repos/huggingface/datasets/issues/4300/events
https://github.com/huggingface/datasets/pull/4300
1,230,272,761
PR_kwDODunzps43iA86
4,300
Add API code examples for loading methods
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
1
"2022-05-09T21:30:26"
"2022-05-25T16:23:15"
"2022-05-25T09:20:13"
MEMBER
null
This PR adds API code examples for loading methods; let me know if I've missed any important parameters we should showcase :) I was a bit confused about `inspect_dataset` and `inspect_metric`. The `path` parameter says it will accept a dataset identifier from the Hub. But when I try the identifier `rotten_tomatoes`, it gives me: ```py from datasets import inspect_dataset inspect_dataset('rotten_tomatoes', local_path='/content/rotten_tomatoes') FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory. ``` Does the user need to have an existing copy of `rotten_tomatoes.py` on their local drive (in which case, it seems like the same as the first option in `path`)?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4300/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4300/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4300", "html_url": "https://github.com/huggingface/datasets/pull/4300", "diff_url": "https://github.com/huggingface/datasets/pull/4300.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4300.patch", "merged_at": "2022-05-25T09:20:12" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4299/comments
https://api.github.com/repos/huggingface/datasets/issues/4299/events
https://github.com/huggingface/datasets/pull/4299
1,230,236,782
PR_kwDODunzps43h5RP
4,299
Remove manual download from imagenet-1k
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2022-05-09T20:49:18"
"2022-05-25T14:54:59"
"2022-05-25T14:46:16"
CONTRIBUTOR
null
Remove the manual download code from `imagenet-1k` to make it a regular dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4299/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4299", "html_url": "https://github.com/huggingface/datasets/pull/4299", "diff_url": "https://github.com/huggingface/datasets/pull/4299.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4299.patch", "merged_at": "2022-05-25T14:46:16" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks for the reviews @apsdehal and @lhoestq! As suggested by @lhoestq, I'll separate the train/val/test splits, apply the validation split fixes and shuffle the images files to simplify the script and make streaming faster.", "@apsdehal I dismissed your review as it's no longer relevant after the data files changes suggested by @lhoestq. " ]
https://api.github.com/repos/huggingface/datasets/issues/4298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4298/comments
https://api.github.com/repos/huggingface/datasets/issues/4298/events
https://github.com/huggingface/datasets/issues/4298
1,229,748,006
I_kwDODunzps5JTHcm
4,298
Normalise license names
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
2
"2022-05-09T13:51:32"
"2022-05-20T09:51:50"
"2022-05-20T09:51:50"
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** When browsing datasets, the Licenses tag cloud (bottom left of e.g. https://huggingface.co/datasets) has multiple variants of the same license. This means the options exclude datasets arbitrarily, giving users artificially low recall. The duplicates are probably caused by slight variation in the metadata. **Describe the solution you'd like** I'd like the licenses in metadata to follow the same standard as much as possible, to remove this problem. I'd like to go ahead and normalise the dataset metadata to follow the format & values given in [src/datasets/utils/resources/licenses.json](https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/licenses.json). **Describe alternatives you've considered** None **Additional context** None **Priority** Low
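To make the proposed normalisation concrete, a rough sketch of validating dataset license tags against the canonical list; the structure of `licenses.json` (a mapping from canonical license IDs to display names) and the `normalise_license` helper are assumptions for illustration:

```python
import json

# Assumption: licenses.json maps canonical IDs to human-readable names,
# e.g. {"cc-by-4.0": "Creative Commons Attribution 4.0 International", ...}
with open("src/datasets/utils/resources/licenses.json") as f:
    canonical_ids = set(json.load(f))

def normalise_license(tag: str) -> str:
    """Hypothetical helper: lowercase and hyphenate a raw license tag,
    then verify it against the canonical ID list."""
    candidate = tag.strip().lower().replace(" ", "-")
    if candidate not in canonical_ids:
        raise ValueError(f"{tag!r} does not normalise to a canonical license ID")
    return candidate

print(normalise_license("CC-BY-4.0"))  # 'cc-by-4.0'
```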
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4298/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4298/timeline
null
completed
null
null
false
[ "we'll add the same server-side metadata validation system as for hf.co/models soon-ish\r\n\r\n(you can check on hf.co/models that licenses are \"clean\")", "Fixed by #4367." ]
https://api.github.com/repos/huggingface/datasets/issues/4297
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4297/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4297/comments
https://api.github.com/repos/huggingface/datasets/issues/4297/events
https://github.com/huggingface/datasets/issues/4297
1,229,735,498
I_kwDODunzps5JTEZK
4,297
Datasets YAML tagging space is down
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
3
"2022-05-09T13:45:05"
"2022-05-09T14:44:25"
"2022-05-09T14:44:25"
CONTRIBUTOR
null
## Describe the bug The neat HF Spaces app for generating YAML tags for dataset `README.md`s is down ## Steps to reproduce the bug 1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging ## Expected results There'll be an HF Spaces web app for generating dataset metadata YAML ## Actual results There's an error message; here's the step where it breaks: ``` Step 18/29 : RUN pip install -r requirements.txt ---> Running in e88bfe7e7e0c Defaulting to user installation because normal site-packages is not writeable Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4)) Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref. Running command git checkout -q update-task-list error: pathspec 'update-task-list' did not match any file(s) known to git error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × git checkout -q update-task-list did not run successfully. │ exit code: 1 ╰─> See above for output. ``` ## Environment info - Platform: Linux / Brave
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4297/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4297/timeline
null
completed
null
null
false
[ "@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess", "Thanks for reporting, fixing it now", "It's up again :)" ]
https://api.github.com/repos/huggingface/datasets/issues/4296
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4296/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4296/comments
https://api.github.com/repos/huggingface/datasets/issues/4296/events
https://github.com/huggingface/datasets/pull/4296
1,229,554,645
PR_kwDODunzps43foZ-
4,296
Fix URL query parameters in compression hop path when streaming
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2022-05-09T11:18:22"
"2022-07-06T15:19:53"
null
MEMBER
null
Fix #3488.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4296/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4296/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4296", "html_url": "https://github.com/huggingface/datasets/pull/4296", "diff_url": "https://github.com/huggingface/datasets/pull/4296.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4296.patch", "merged_at": null }
true
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4296). All of your documentation changes will be reflected on that endpoint." ]
https://api.github.com/repos/huggingface/datasets/issues/4295
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4295/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4295/comments
https://api.github.com/repos/huggingface/datasets/issues/4295/events
https://github.com/huggingface/datasets/pull/4295
1,229,527,283
PR_kwDODunzps43fieR
4,295
Fix missing lz4 dependency for tests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-09T10:53:20"
"2022-05-09T11:21:22"
"2022-05-09T11:13:44"
MEMBER
null
Currently, `lz4` is not defined as a dependency for tests. Therefore, all tests marked with `@require_lz4` are skipped.
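For context, `@require_lz4` follows the usual optional-dependency skip pattern; here is a minimal sketch of how such a marker is typically implemented (the exact definition in the test suite may differ):

```python
import pytest

try:
    import lz4  # noqa: F401
    _lz4_available = True
except ImportError:
    _lz4_available = False

def require_lz4(test_case):
    """Skip the decorated test when the `lz4` package is not installed."""
    return pytest.mark.skipif(not _lz4_available, reason="test requires lz4")(test_case)
```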
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4295/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4295/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4295", "html_url": "https://github.com/huggingface/datasets/pull/4295", "diff_url": "https://github.com/huggingface/datasets/pull/4295.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4295.patch", "merged_at": "2022-05-09T11:13:44" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4294
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4294/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4294/comments
https://api.github.com/repos/huggingface/datasets/issues/4294/events
https://github.com/huggingface/datasets/pull/4294
1,229,455,582
PR_kwDODunzps43fTXA
4,294
Fix CLI run_beam save_infos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-09T09:47:43"
"2022-05-10T07:04:04"
"2022-05-10T06:56:10"
MEMBER
null
Currently, the `run_beam` CLI command raises a `TypeError`: ``` TypeError: _download_and_prepare() got an unexpected keyword argument 'save_infos' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4294/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4294/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4294", "html_url": "https://github.com/huggingface/datasets/pull/4294", "diff_url": "https://github.com/huggingface/datasets/pull/4294.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4294.patch", "merged_at": "2022-05-10T06:56:10" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4293
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4293/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4293/comments
https://api.github.com/repos/huggingface/datasets/issues/4293/events
https://github.com/huggingface/datasets/pull/4293
1,228,815,477
PR_kwDODunzps43dRt9
4,293
Fix wrong map parameter name in cache docs
{ "login": "h4iku", "id": 3812788, "node_id": "MDQ6VXNlcjM4MTI3ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h4iku", "html_url": "https://github.com/h4iku", "followers_url": "https://api.github.com/users/h4iku/followers", "following_url": "https://api.github.com/users/h4iku/following{/other_user}", "gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}", "starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h4iku/subscriptions", "organizations_url": "https://api.github.com/users/h4iku/orgs", "repos_url": "https://api.github.com/users/h4iku/repos", "events_url": "https://api.github.com/users/h4iku/events{/privacy}", "received_events_url": "https://api.github.com/users/h4iku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2022-05-08T07:27:46"
"2022-06-14T16:49:00"
"2022-06-14T16:07:00"
CONTRIBUTOR
null
The `load_from_cache` parameter of `map` should be `load_from_cache_file`.
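A quick usage example with the corrected keyword (`load_from_cache_file` is the actual parameter name on `Dataset.map`):

```python
from datasets import load_dataset

dataset = load_dataset("rotten_tomatoes", split="train")
# Pass load_from_cache_file (not load_from_cache) to control cache reuse
dataset = dataset.map(lambda x: {"text": x["text"].lower()}, load_from_cache_file=False)
```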
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4293/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4293/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4293", "html_url": "https://github.com/huggingface/datasets/pull/4293", "diff_url": "https://github.com/huggingface/datasets/pull/4293.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4293.patch", "merged_at": "2022-06-14T16:07:00" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4292
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4292/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4292/comments
https://api.github.com/repos/huggingface/datasets/issues/4292/events
https://github.com/huggingface/datasets/pull/4292
1,228,216,788
PR_kwDODunzps43bhrp
4,292
Add API code examples for remaining main classes
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
1
"2022-05-06T18:15:31"
"2022-05-25T18:05:13"
"2022-05-25T17:56:36"
MEMBER
null
This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.), so please feel free to add an example of usage and I can fill in the rest :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4292/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4292/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4292", "html_url": "https://github.com/huggingface/datasets/pull/4292", "diff_url": "https://github.com/huggingface/datasets/pull/4292.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4292.patch", "merged_at": "2022-05-25T17:56:36" }
true
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
https://api.github.com/repos/huggingface/datasets/issues/4291
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4291/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4291/comments
https://api.github.com/repos/huggingface/datasets/issues/4291/events
https://github.com/huggingface/datasets/issues/4291
1,227,777,500
I_kwDODunzps5JLmXc
4,291
Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
{ "login": "leondz", "id": 121934, "node_id": "MDQ6VXNlcjEyMTkzNA==", "avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4", "gravatar_id": "", "url": "https://api.github.com/users/leondz", "html_url": "https://github.com/leondz", "followers_url": "https://api.github.com/users/leondz/followers", "following_url": "https://api.github.com/users/leondz/following{/other_user}", "gists_url": "https://api.github.com/users/leondz/gists{/gist_id}", "starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leondz/subscriptions", "organizations_url": "https://api.github.com/users/leondz/orgs", "repos_url": "https://api.github.com/users/leondz/repos", "events_url": "https://api.github.com/users/leondz/events{/privacy}", "received_events_url": "https://api.github.com/users/leondz/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
2
"2022-05-06T12:03:27"
"2022-05-09T08:25:58"
"2022-05-09T08:25:58"
CONTRIBUTOR
null
### Link https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train ### Description The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss? ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4291/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4291/timeline
null
completed
null
null
false
[ "Hi @leondz, thanks for reporting.\r\n\r\nIndeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). Whereas most of the datastes are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.\r\n\r\nIn particular, in your case, that is due to the data file being TAR. This format is not streamable out of the box (it does not allow random access to the archived files), but we use a trick to allow streaming: using `dl_manager.iter_archive`.\r\n\r\nLet me know if you need some help: I could push a commit to your repo with the fix.", "Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :)" ]