Dataset schema (per-column type and observed value range or number of distinct values):

| Column | Type | Observed values |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 distinct value |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-1.07B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-3.39k |
| title | string | lengths 1-276 |
| user | dict | |
| labels | list | |
| state | string | 1 distinct value |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | int64 | 1,587B-1,639B |
| updated_at | int64 | 1,587B-1,639B |
| closed_at | int64 | 1,587B-1,639B |
| author_association | string | 3 distinct values |
| active_lock_reason | null | |
| body | string | lengths 0-228k |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
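Each record below instantiates this schema: one GitHub issue or pull request from the huggingface/datasets repository, as served by the GitHub REST API. As a minimal sketch of how a record with these fields could be fetched and inspected (assuming the third-party `requests` library; note the live API returns `created_at` as an ISO timestamp string, whereas this dataset stores it as an int64 epoch in milliseconds):

```python
import requests

# One of the issue API URLs that appears in the records below.
url = "https://api.github.com/repos/huggingface/datasets/issues/3169"

record = requests.get(url, timeout=10).json()

# Scalar fields from the schema above.
print(record["number"], record["title"], record["state"])

# Nested dict fields, e.g. the author metadata under "user".
print(record["user"]["login"])

# Issues that are pull requests carry an extra "pull_request" dict;
# the derived "is_pull_request" bool in this dataset reflects that.
print("pull_request" in record)
```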
url: https://api.github.com/repos/huggingface/datasets/issues/3169
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3169/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3169/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3169/events
html_url: https://github.com/huggingface/datasets/pull/3169
id: 1,036,773,357
node_id: PR_kwDODunzps4ttYmZ
number: 3,169
title: Configurable max filename length in file locks
user:
{ "login": "lmmx", "id": 2979452, "node_id": "MDQ6VXNlcjI5Nzk0NTI=", "avatar_url": "https://avatars.githubusercontent.com/u/2979452?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lmmx", "html_url": "https://github.com/lmmx", "followers_url": "https://api.github.com/users/lmmx/followers", "following_url": "https://api.github.com/users/lmmx/following{/other_user}", "gists_url": "https://api.github.com/users/lmmx/gists{/gist_id}", "starred_url": "https://api.github.com/users/lmmx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lmmx/subscriptions", "organizations_url": "https://api.github.com/users/lmmx/orgs", "repos_url": "https://api.github.com/users/lmmx/repos", "events_url": "https://api.github.com/users/lmmx/events{/privacy}", "received_events_url": "https://api.github.com/users/lmmx/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- I've also added environment variable configuration so that this can be configured once per machine (e.g. in a `.bashrc` file), as is already done for a few other config variables here.
- Cancelling PR in favour of @mariosasko's in #3173
created_at: 1,635,285,175,000
updated_at: 1,635,437,654,000
closed_at: 1,635,437,653,000
author_association: NONE
active_lock_reason: null
body:

Resolve #2924 (https://github.com/huggingface/datasets/issues/2924#issuecomment-952330956), wherein the assumption that the maximum file lock filename length is 255 raises an OSError on encrypted drives (ecryptFS on Linux uses part of the lower filename, reducing the maximum filename size to 143). Allowing this limit to be set in the config module lets users modify it. Will not affect Windows users, as their class passes 255 on init explicitly.

Reproduced with the following example ([the first few lines of a script from Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html), fine-tuning a HF model):

```py
import torch

import flash
from flash.audio import SpeechRecognition, SpeechRecognitionData
from flash.core.data.utils import download_data

# 1. Create the DataModule
download_data("https://pl-flash-data.s3.amazonaws.com/timit_data.zip", "./data")

datamodule = SpeechRecognitionData.from_json(
    input_fields="file",
    target_fields="text",
    train_file="data/timit/train.json",
    test_file="data/timit/test.json",
)
```

Which gave this traceback:

```py
Traceback (most recent call last):
  File "lf_ft.py", line 10, in <module>
    datamodule = SpeechRecognitionData.from_json(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
    return cls.from_data_source(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
    train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
    train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
    data = load_data(data, mock_dataset)
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
    dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1599, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py", line 1457, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py", line 285, in __init__
    with FileLock(lock_path):
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 323, in __enter__
    self.acquire()
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 272, in acquire
    self._acquire()
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py", line 403, in _acquire
    fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'
```

Note the filename is 145 chars long:

```
>>> len("_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock")
145
```

After installing datasets as an editable local package and modifying the script I was running to first include:

```py
import datasets

datasets.config.MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 143
```

the error goes away. If I instead deliberately set the value incorrectly as 144, the OSError returns:

```
Traceback (most recent call last):
  File "lf_ft.py", line 14, in <module>
    datamodule = SpeechRecognitionData.from_json(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 1005, in from_json
    return cls.from_data_source(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py", line 571, in from_data_source
    train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 307, in to_datasets
    train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py", line 344, in generate_dataset
    data = load_data(data, mock_dataset)
  File "/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py", line 103, in load_data
    dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})
  File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1605, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/louis/dev/hf_datasets/src/datasets/load.py", line 1463, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "/home/louis/dev/hf_datasets/src/datasets/builder.py", line 285, in __init__
    with FileLock(lock_path):
  File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 326, in __enter__
    self.acquire()
  File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 275, in acquire
    self._acquire()
  File "/home/louis/dev/hf_datasets/src/datasets/utils/filelock.py", line 406, in _acquire
    fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-32c812b5c1272d64_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279...-5794079643713042223.lock'
```

reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3169/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3169/timeline
performed_via_github_app: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3169", "html_url": "https://github.com/huggingface/datasets/pull/3169", "diff_url": "https://github.com/huggingface/datasets/pull/3169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3169.patch", "merged_at": null }
is_pull_request: true
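Based solely on the PR body above, the override it describes could be applied before loading. Note that `MAX_DATASET_CONFIG_ID_READABLE_LENGTH` exists only on this PR's branch; the PR was closed in favour of #3173, so the merged configuration name may differ:

```python
import datasets

# ecryptFS reduces the effective maximum filename length from 255 to 143,
# so the file-lock filename limit must be lowered to match. This attribute
# comes from the (unmerged) PR branch above.
datasets.config.MAX_DATASET_CONFIG_ID_READABLE_LENGTH = 143

# The lock filename is derived from the cache path, so loading should now
# succeed on an encrypted home directory.
dataset_dict = datasets.load_dataset("json", data_files={"train": "data/timit/train.json"})
```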
url: https://api.github.com/repos/huggingface/datasets/issues/3168
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3168/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3168/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3168/events
html_url: https://github.com/huggingface/datasets/issues/3168
id: 1,036,673,263
node_id: I_kwDODunzps49ymDv
number: 3,168
title: OpenSLR/83 is empty
user:
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @tyrius02, thanks for reporting. I see you self-assigned this issue: are you working on this?", "@albertvillanova Yes. Figured I introduced the broken config, I should fix it too.\r\n\r\nI've got it working, but I'm struggling with one of the tests. I've started a PR so I/we can work through it.", "Looks like the tests all passed on the PR." ]
1,635,277,341,000
1,635,501,849,000
1,635,501,849,000
CONTRIBUTOR
null
## Describe the bug As the summary says, openslr / SLR83 / train is empty. The dataset returned after loading indicates there are **zero** rows. The correct number should be **17877**. ## Steps to reproduce the bug ```python import datasets datasets.load_dataset('openslr', 'SLR83') ``` ## Expected results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 17877 }) }) ``` ## Actual results ``` DatasetDict({ train: Dataset({ features: ['path', 'audio', 'sentence'], num_rows: 0 }) }) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.1.dev0 (master HEAD) - Platform: Ubuntu 20.04 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3168/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3168/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
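A minimal regression check for this report, assuming a `datasets` version that includes the fix; the expected row count comes from the issue body above:

```python
from datasets import load_dataset

ds = load_dataset("openslr", "SLR83")

# The bug manifested as num_rows == 0 for the train split; the issue body
# gives 17877 as the correct count.
assert ds["train"].num_rows == 17877, f"unexpected row count: {ds['train'].num_rows}"
```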
url: https://api.github.com/repos/huggingface/datasets/issues/3167
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3167/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3167/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3167/events
html_url: https://github.com/huggingface/datasets/issues/3167
id: 1,036,488,992
node_id: I_kwDODunzps49x5Eg
number: 3,167
title: bookcorpusopen no longer works
user:
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "repos_url": "https://api.github.com/users/lucadiliello/repos", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! Thanks for reporting :) I think #3280 should fix this", "I tried with the latest changes from #3280 on google colab and it worked fine :)\r\nWe'll do a new release soon, in the meantime you can use the updated version with:\r\n```python\r\nload_dataset(\"bookcorpusopen\", revision=\"master\")\r\n```", "Fixed by #3280." ]
1,635,264,375,000
1,637,164,426,000
1,637,164,426,000
CONTRIBUTOR
null
## Describe the bug When using the latest version of datasets (1.14.0), I cannot use the `bookcorpusopen` dataset. The process blocks always around `9924 examples [00:06, 1439.61 examples/s]` when preparing the dataset. I also noticed that after half an hour the process is automatically killed because of the RAM usage (the machine has 1TB of RAM...). This did not happen with 1.4.1. I tried also `rm -rf ~/.cache/huggingface` but did not help. Changing python version between 3.7, 3.8 and 3.9 did not help too. ## Steps to reproduce the bug ```python import datasets d = datasets.load_dataset('bookcorpusopen') ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Linux-5.4.0-1054-aws-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3167/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3167/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
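The workaround from the comments above: pin the loading script to the master branch until the release that includes #3280.

```python
from datasets import load_dataset

# Use the updated loading script from the master branch until the next
# release that includes the fix from #3280.
d = load_dataset("bookcorpusopen", revision="master")
```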
url: https://api.github.com/repos/huggingface/datasets/issues/3166
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3166/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3166/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3166/events
html_url: https://github.com/huggingface/datasets/pull/3166
id: 1,036,450,283
node_id: PR_kwDODunzps4tsVQJ
number: 3,166
title: Deprecate prepare_module
user:
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- Sounds good, thanks !
created_at: 1,635,262,104,000
updated_at: 1,636,104,457,000
closed_at: 1,636,104,456,000
author_association: MEMBER
active_lock_reason: null
body:

In version 1.13, `prepare_module` was deprecated. This PR adds a deprecation warning and removes its usage throughout the library, using `dataset_module_factory` or `metric_module_factory` instead. Fix #3165.

reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3166/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3166/timeline
performed_via_github_app: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3166", "html_url": "https://github.com/huggingface/datasets/pull/3166", "diff_url": "https://github.com/huggingface/datasets/pull/3166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3166.patch", "merged_at": 1636104456000 }
is_pull_request: true
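A sketch of the migration this PR implies for downstream callers; the factory names come from the PR description, but the exact call signatures shown here are assumptions:

```python
# Before (deprecated since datasets 1.13):
#   from datasets.load import prepare_module
#   module_path, hash_ = prepare_module("squad")

# After, per this PR:
from datasets.load import dataset_module_factory, metric_module_factory

dataset_module = dataset_module_factory("squad")  # module for a dataset loading script
metric_module = metric_module_factory("squad")    # module for a metric loading script
```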
url: https://api.github.com/repos/huggingface/datasets/issues/3165
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3165/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3165/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3165/events
html_url: https://github.com/huggingface/datasets/issues/3165
id: 1,036,448,998
node_id: I_kwDODunzps49xvTm
number: 3,165
title: Deprecate prepare_module
user:
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee:
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,635,262,035,000
1,636,104,456,000
1,636,104,456,000
MEMBER
null
In version 1.13, `prepare_module` was deprecated. Add deprecation warning and remove its usage from all the library.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3165/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3165/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/3164
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3164/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3164/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3164/events
html_url: https://github.com/huggingface/datasets/issues/3164
id: 1,035,662,830
node_id: I_kwDODunzps49uvXu
number: 3,164
title: Add raw data files to the Hub with GitHub LFS for canonical dataset
user:
{ "login": "zlucia", "id": 40370937, "node_id": "MDQ6VXNlcjQwMzcwOTM3", "avatar_url": "https://avatars.githubusercontent.com/u/40370937?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zlucia", "html_url": "https://github.com/zlucia", "followers_url": "https://api.github.com/users/zlucia/followers", "following_url": "https://api.github.com/users/zlucia/following{/other_user}", "gists_url": "https://api.github.com/users/zlucia/gists{/gist_id}", "starred_url": "https://api.github.com/users/zlucia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zlucia/subscriptions", "organizations_url": "https://api.github.com/users/zlucia/orgs", "repos_url": "https://api.github.com/users/zlucia/repos", "events_url": "https://api.github.com/users/zlucia/events{/privacy}", "received_events_url": "https://api.github.com/users/zlucia/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee:
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @zlucia, I would actually suggest hosting the dataset as a huggingface.co-hosted dataset.\r\n\r\nThe only difference with a \"canonical\"/legacy dataset is that it's nested under an organization (here `stanford` or `stanfordnlp` for instance – completely up to you) but then you can upload your data using git-lfs (unlike \"canonical\" datasets where we don't host the data)\r\n\r\nLet me know if this fits your use case!\r\n\r\ncc'ing @osanseviero @lhoestq and rest of the team 🤗", "Hi @zlucia,\r\n\r\nAs @julien-c pointed out, the way to store/host raw data files in our Hub is by using what we call \"community\" datasets:\r\n- either at your personal namespace: `load_dataset(\"zlucia/casehold\")`\r\n- or at an organization namespace: for example, if you create the organization `reglab`, then `load_dataset(\"reglab/casehold\")`\r\n\r\nPlease note that \"canonical\" datasets do not normally store/host their raw data at our Hub, but in a third-party server. For \"canonical\" datasets, we just host the \"loading script\", that is, a Python script that downloads the raw data from a third-party server, creates the HuggingFace dataset from it and caches it locally.\r\n\r\nIn order to create an organization namespace in our Hub, please follow this link: https://huggingface.co/organizations/new\r\n\r\nThere are already many organizations at our Hub (complete list here: https://huggingface.co/organizations), such as:\r\n- Stanford CRFM: https://huggingface.co/stanford-crfm\r\n- Stanford NLP: https://huggingface.co/stanfordnlp\r\n- Stanford CS329S: Machine Learning Systems Design: https://huggingface.co/stanford-cs329s\r\n\r\nAlso note that you in your organization namespace:\r\n- you can add any number of members\r\n- you can store both raw datasets and models, and those can be immediately accessed using `datasets` and `transformers`\r\n\r\nOnce you have created an organization, these are the steps to upload/host a raw dataset: \r\n- The no-code procedure: https://huggingface.co/docs/datasets/upload_dataset.html\r\n- Using the command line (terminal): https://huggingface.co/docs/datasets/share.html#add-a-community-dataset\r\n\r\nPlease, feel free to ping me if you have any further questions or need help.\r\n", "Ah I see, I think I was unclear whether there were benefits to uploading a canonical dataset vs. a community provided dataset. Thanks for clarifying. I'll see if we want to create an organization namespace and otherwise, will upload the dataset under my personal namespace." ]
1,635,204,501,000
1,635,623,691,000
1,635,623,691,000
NONE
null
I'm interested in sharing the CaseHOLD dataset (https://arxiv.org/abs/2104.08671) as a canonical dataset on the HuggingFace Hub and would like to add the raw data files to the Hub with GitHub LFS, since it seems like a more sustainable long term storage solution, compared to other storage solutions available to my team. From what I can tell, this option is not immediately supported if one follows the sharing steps detailed here: [https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset](https://huggingface.co/docs/datasets/share_dataset.html#sharing-a-canonical-dataset), since GitHub LFS is not supported for public forks. Is there a way to request this? Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3164/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3164/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
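The loading calls for namespaced community datasets, as given in the comments above (both dataset paths are the examples from the discussion):

```python
from datasets import load_dataset

# Personal namespace:
ds = load_dataset("zlucia/casehold")

# Organization namespace (if e.g. a "reglab" organization is created):
# ds = load_dataset("reglab/casehold")
```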
url: https://api.github.com/repos/huggingface/datasets/issues/3163
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3163/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3163/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3163/events
html_url: https://github.com/huggingface/datasets/pull/3163
id: 1,035,475,061
node_id: PR_kwDODunzps4tpI44
number: 3,163
title: Add Image feature
user:
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- Awesome, looking forward to using it :)
- Few additional comments:
  * the current API doesn't meet the requirements mentioned in #3145 (e.g. image mime-type). However, this will be doable soon as we also plan to store image bytes alongside paths in arrow files (see https://github.com/huggingface/datasets/pull/3129#discussion_r738426187). Then, PIL can return the correct mime-type:
    ```python
    from PIL import Image
    import io

    mimetype = Image.open(io.BytesIO(image_bytes)).get_format_mimetype()
    ```
    I plan to add this change in a separate PR.
  * currently, I'm returning an `np.ndarray` object after decoding for consistency with the Audio feature. However, the vision models from Transformers prefer an `Image` object to avoid the `Image.fromarray` call in the corresponding feature extractors (see [this warning](https://huggingface.co/transformers/master/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__) in the Transformers docs) cc @NielsRogge

  So I'm not entirely sure whether to return only a NumPy array, only a PIL Image, or both when decoding. The last point worries me because we shouldn't provide an API that leads to a warning in Transformers (in the docs, not in code :)). At the same time, it makes sense to preserve consistency with the Audio feature and return a NumPy array.

  That's why I would appreciate your opinions on this.
- That is a good question. Also pinging @nateraw .

  Currently we only support returning numpy arrays because of numpy/tf/torch/jax formatting features that we have, and to keep things simple. See the [set_format docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.set_format) for more info
- I don't think centering the discussion on what ViT expects is good, as the vision Transformers models are still in an experimental stage and we can adapt those depending on what you do here :-).

  IMO, the discussion should revolve around what a user will want to do with a vision dataset, and they will want to:
  - lazily decode their images
  - maybe apply data augmentation (for the training set)
  - resize to a fixed shape for batching

  The libraries that provide steps 2 and 3 either use PIL (thinking torchvision) or cv2 (thinking albumentations). NumPy does not have any function to resize an image or do basic data augmentation (like a rotate), so I think it shouldn't be the default format for an image dataset; PIL or cv2 (in an ideal world with the ability to switch between the two depending on what the users prefer) would be better.

  Side note: I will work on the vision integration in Transformers with Niels next month so please keep me in the loop for those awesome new vision features!
- @sgugger I completely agree with you, especially after trying to convert the `run_image_classification` script from Transformers to use this feature. The current API doesn't seem intuitive there due to the torchvision transforms, which, as you say, prefer PIL over NumPy arrays.

  So the default format would return `Image` (PIL) / `np.ndarray` (cv2) and `set_format(numpy/tf/pt)` would return image tensors if I understand you correctly. IMO this makes a lot more sense (and flexibility) than the current API.
- Also, one additional library worth mentioning here is AugLy which supports image file paths and `PIL.Image.Image` objects.
- That's so nice !

  Also I couldn't help myself so I've played with it already ^^
  I was agreeably surprised that with minor additions I managed to even allow this, which I find very satisfactory:
  ```python
  import PIL.Image
  from datasets import Dataset

  path = "docs/source/imgs/datasets_logo_name.jpg"

  dataset = Dataset.from_dict({"img": [PIL.Image.open(path)]})
  print(dataset.features)
  # {'img': Image(id=None)}
  print(dataset[0]["img"])
  # <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1200x300 at 0x129DE4AC8>
  ```

  Let me know if that's a behavior you'd also like to see

  EDIT: just pushed my changes on a branch, you can see the diff [here](https://github.com/mariosasko/datasets-1/compare/add-image-feature...huggingface:image-type-inference) if you want
- Thanks, @lhoestq! I like your change. Very elegant indeed.

  P.S. I have to write a big comment that explains all the changes/things left to consider. Will do that in the next few days!
- I'm marking this PR as ready for review.

  Thanks to @sgugger's comment, the API is much more flexible now as it decodes images (lazily) as `PIL.Image.Image` objects and supports transforms directly on them.

  Also, we no longer return paths explicitly (previously, we would return `{"path": image_path, "image": pil_image}`) for the following reasons:
  * what to return when reading an image from an URL or a NumPy array. We could set `path` to `None` in these situations, but IMO we should avoid redundant information.
  * returning a dict doesn't match nicely with the requirement of supporting image modifications - what to do if the user modifies both the image path and the image

  (Btw, for the images stored locally, you can access their paths with `dset[idx]["image"].filename`, or by avoiding decoding with `paths = [ex["path"] for ex in dset]`. @lhoestq @albertvillanova WDYT about having an option to skip decoding for complex features, e.g. `Audio(decode=False)`? This way, the user can easily access the underlying data.)

  Examples of what you can do:
  ```python
  # load local images
  dset = Dataset.from_dict({"image": [local_image_path]}, features=Features({"image": Image()}))
  # load remote images (we got this for free by adding support for streaming)
  dset = Dataset.from_dict({"image": [image_url]}, features=Features({"image": Image()}))
  # from np.ndarray
  dset = Dataset.from_dict({"image": [np.array(...)]}, features=Features({"image": Image()}))
  # cast column
  dset = Dataset.from_dict({"image": [local_image_path]})
  dset.cast_column("image", Image())

  # automatic type inference
  dset = Dataset.from_dict({"image": [PIL.Image.open(local_image_path)]})

  # transforms
  def img_transform(example):
      ...
      example["image"] = transformed_pil_image_or_np_ndarray
      return example
  dset.map(img_transform)

  # transform that adds a new column with images (automatic inference of the feature type)
  dset.map(lambda ex: {"image_resized": ex["image"].resize((100, 100))})
  print(dset.features["image_resized"])  # will print Image()
  ```

  Some more cool features:
  * We store the image filename (`pil_image.filename`) whenever possible to avoid costly conversion to bytes
  * if possible, we use native compression when encoding images. Otherwise, we fall back to the lossless PNG format (e.g. after image ops or when storing NumPy arrays)

  Hints to make reviewing easier:
  * feel free to ignore the extension type part because it's related to PyArrow internals.
  * also, let me know if we are too strict/too flexible in terms of types the Image feature can encode/decode. Hints:
    * `encode_example` handles encoding during dataset generation (you can think of it as `yield key, features.encode_example(example)`)
    * `objects_to_list_of_image_dicts` handles encoding of returned examples in `map`

  P.S. I'll fork the PR branch and start adding the Image feature to the existing image datasets (will also update the `ImageClassification` template while doing that).
- > WDYT about having an option to skip decoding for complex features, e.g. Audio(decode=False)?

  Yes definitely, also I think it could be useful for the dataset viewer to not decode the data but instead return either the bytes or the (possibly chained) URL. cc @severo
- We want to merge this today/tomorrow, so I'd really appreciate your reviews @sgugger @nateraw.

  Also, you can test this feature on the existing image datasets (MNIST, beans, food101, ...) by installing `datasets` from the PR branch:
  ```
  pip install git+https://github.com/huggingface/datasets.git@adapt-image-datasets
  ```
- Thanks for the review @nateraw!

  1. This is a copy of your notebook with the fixed map call: https://colab.research.google.com/gist/mariosasko/e351a717682a9392ca03908e65a2600e/image-feature-demo.ipynb
     (Sorry for misleading you with the map call in my un-updated notebook)
     Also, we can avoid this cast by trying to infer the type of the column (`"pixel_values"`) returned by the image feature extractor (we are already doing something similar for the columns with names: `"attention_mask"`, `"input_ids"`, ...). I plan to add this QOL improvement soon.
  2. It should work OK even without updating Pillow and PyArrow (these two libraries are pre-installed in Colab, so updating them requires a restart of the runtime).

     > I noticed an error that I'm guessing you ran into when I tried using the older version

     Do you recall which type of error it was, because everything works fine on my side if I run the notebooks with the lowest supported version of Pillow (`6.2.1`)?
- Thanks for playing with it @nateraw and for sharing your notebook, this is useful :)

  I think this is ready now, congrats @mariosasko !
created_at: 1,635,188,868,000
updated_at: 1,638,815,744,000
closed_at: 1,638,812,942,000
author_association: CONTRIBUTOR
active_lock_reason: null
body:

Adds the Image feature. This feature is heavily inspired by the recently added Audio feature (#2324). Currently, this PR is pretty simple.

Some considerations that need further discussion:
* I've decided to use `Pillow`/`PIL` as the image decoding library. Another candidate I considered is `torchvision`, mostly because of its `accimage` backend, which should be faster for loading `jpeg` images than `Pillow`. However, `torchvision`'s io module only supports png and jpeg images, has `torch` as a hard dependency, and requires magic to work with image bytes (`torch.ByteTensor(torch.ByteStorage.from_buffer(image_bytes))`).
* Currently, I'm converting `PIL`'s `Image` type to `np.ndarray`. The vision models in Transformers such as ViT prefer the raw `Image` type and not the decoded tensors, so there is a small overhead due to [this conversion](https://github.com/huggingface/transformers/blob/3e8761ab8077e3bb243fe2f78b2a682bd2257cf1/src/transformers/image_utils.py#L62-L73). IMO this is justified to keep this part aligned with the Audio feature, which also returns `np.ndarray`. What do you think?
* Still have to work on the channel decoding logic:
  * PyTorch prefers the channel-first ordering (C, H, W); TF and Flax the channel-last ordering (H, W, C). One cool feature would be adjusting the channel order based on the selected formatter (`torch`, `tf`, `jax`).
  * By default, `Image.open` returns images of shape (H, W, C). However, ViT's feature extractor expects the format (C, H, W) if the image is passed as an array (explained [here](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTFeatureExtractor.__call__)), so I'm more inclined to the format (C, H, W). Which one do you prefer, (C, H, W) or (H, W, C)?
* Are there any options you'd like to see? (the user could change those via `cast_column`, such as `sampling_rate` in the Audio feature)

TODOs:
* [x] tests
* in subsequent PRs:
  * docs - a section in the docs, which gives some additional info on the Image and Audio feature and compares them to `ArrayND`
  * streaming (waiting for #3129 and #3133 to get merged first)
  * update the image tasks and the datasets to use the new feature
  * Image/Audio formatting

[Colab Notebook](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c?usp=sharing) where you can play with this feature.

I'm also adding a link to the [Image](https://github.com/tensorflow/datasets/blob/7ac7d506488d46038a5854961d068926b3f93c7f/tensorflow_datasets/core/features/image_feature.py#L155) feature in TFDS because one of our goals is to parse TFDS scripts eventually, so our Image feature has to (at least) support all the formats theirs does.

Feel free to cc anyone who might be interested.

P.S. Please ignore the changes in the `datasets/**/*.py` files 😄.

reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3163/reactions", "total_count": 8, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 7, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3163/timeline
performed_via_github_app: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3163", "html_url": "https://github.com/huggingface/datasets/pull/3163", "diff_url": "https://github.com/huggingface/datasets/pull/3163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3163.patch", "merged_at": 1638812942000 }
is_pull_request: true
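A condensed usage sketch of the Image feature, assembled from the code in the discussion above (assumes a `datasets` version that includes this PR and a local JPEG at the example path from the thread):

```python
import PIL.Image
from datasets import Dataset, Image

path = "docs/source/imgs/datasets_logo_name.jpg"  # example path from the discussion

# Automatic type inference from a PIL image.
dset = Dataset.from_dict({"img": [PIL.Image.open(path)]})
print(dset.features)  # {'img': Image(...)}

# Casting a column of file paths to the Image feature; decoding is lazy and
# yields PIL.Image.Image objects.
dset = Dataset.from_dict({"image": [path]}).cast_column("image", Image())
print(dset[0]["image"])  # <PIL.JpegImagePlugin.JpegImageFile ...>

# Transforms operate on the decoded image; the new column's type is inferred.
dset = dset.map(lambda ex: {"image_resized": ex["image"].resize((100, 100))})
print(dset.features["image_resized"])  # Image(...)
```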
url: https://api.github.com/repos/huggingface/datasets/issues/3161
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3161/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3161/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3161/events
html_url: https://github.com/huggingface/datasets/pull/3161
id: 1,035,444,292
node_id: PR_kwDODunzps4tpCsm
number: 3,161
title: Add riddle_sense dataset
user:
{ "login": "ziyiwu9494", "id": 44691149, "node_id": "MDQ6VXNlcjQ0NjkxMTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/44691149?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ziyiwu9494", "html_url": "https://github.com/ziyiwu9494", "followers_url": "https://api.github.com/users/ziyiwu9494/followers", "following_url": "https://api.github.com/users/ziyiwu9494/following{/other_user}", "gists_url": "https://api.github.com/users/ziyiwu9494/gists{/gist_id}", "starred_url": "https://api.github.com/users/ziyiwu9494/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ziyiwu9494/subscriptions", "organizations_url": "https://api.github.com/users/ziyiwu9494/orgs", "repos_url": "https://api.github.com/users/ziyiwu9494/repos", "events_url": "https://api.github.com/users/ziyiwu9494/events{/privacy}", "received_events_url": "https://api.github.com/users/ziyiwu9494/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- @lhoestq
  I addressed all the comments, I think. Thanks!
- The five test fails are unrelated to this PR and fixed on master so we can ignore them
created_at: 1,635,186,656,000
updated_at: 1,636,034,475,000
closed_at: 1,636,034,475,000
author_association: CONTRIBUTOR
active_lock_reason: null
body:

Adding a new dataset for QA with riddles. I'm confused about the tagging process because it looks like the streamlit app loads data from the current repo, so is it something that should be done after merging or off my fork?

reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3161/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3161/timeline
performed_via_github_app: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3161", "html_url": "https://github.com/huggingface/datasets/pull/3161", "diff_url": "https://github.com/huggingface/datasets/pull/3161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3161.patch", "merged_at": 1636034474000 }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/3160
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3160/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3160/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3160/events
html_url: https://github.com/huggingface/datasets/pull/3160
id: 1,035,274,640
node_id: PR_kwDODunzps4tofO0
number: 3,160
title: Better error msg if `len(predictions)` doesn't match `len(references)` in metrics
user:
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- Can't test this now but it may be a good improvement indeed.
- I added a function, but it only works with the `list` type. For arrays/tensors, we delegate formatting to the frameworks.
created_at: 1,635,175,505,000
updated_at: 1,636,112,699,000
closed_at: 1,636,104,662,000
author_association: CONTRIBUTOR
active_lock_reason: null
body:

Improve the error message in `Metric.add_batch` if `len(predictions)` doesn't match `len(references)`.

cc: @BramVanroy (feel free to test this code on your examples and review this PR)

reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3160/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3160/timeline
performed_via_github_app: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3160", "html_url": "https://github.com/huggingface/datasets/pull/3160", "diff_url": "https://github.com/huggingface/datasets/pull/3160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3160.patch", "merged_at": 1636104662000 }
is_pull_request: true
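A minimal sketch of the kind of check this PR adds to `Metric.add_batch`; the exact message wording and placement in the `datasets` codebase are assumptions:

```python
def check_batch_lengths(predictions, references):
    # Per the comments above, the check only applies to plain lists; arrays
    # and tensors are formatted by their own frameworks.
    if isinstance(predictions, list) and isinstance(references, list):
        if len(predictions) != len(references):
            raise ValueError(
                f"Mismatch in the number of predictions ({len(predictions)}) "
                f"and references ({len(references)})"
            )
```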
url: https://api.github.com/repos/huggingface/datasets/issues/3159
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3159/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3159/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3159/events
html_url: https://github.com/huggingface/datasets/pull/3159
id: 1,035,174,560
node_id: PR_kwDODunzps4toKD5
number: 3,159
title: Make inspect.get_dataset_config_names always return a non-empty list
user:
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- This PR is already working (although not very beautiful; see below): the idea was to have the `DatasetModule.builder_kwargs` accessible from the `builder_cls`, so that this can generate the default builder config (at the class level, without requiring the builder to be instantiated).

  I have a plan for a follow-up refactoring (same functionality, better implementation, much nicer), but I think we could already merge this, so that @severo can test it in the datasets previewer and report any potential issues.
- Yes @lhoestq you are completely right. Indeed I was exclusively using `builder_cls.kwargs` to get the community dataset `name` (nothing else): "lhoestq___demo1"

  See: https://github.com/huggingface/datasets/pull/3159/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R413-R415

  In your example, the `name` I was getting from `builder_cls.kwargs` was:
  ```python
  {"name": "lhoestq___demo1",...}
  ```

  I'm going to refactor the whole approach... as I only need the name for this specific case ;)
- I think this makes more sense now, @lhoestq @severo 😅
- It works well, thanks!
created_at: 1,635,170,383,000
updated_at: 1,635,513,277,000
closed_at: 1,635,399,889,000
author_association: MEMBER
active_lock_reason: null
body:

Make all configs named ones, so that no special unnamed-config case needs to be handled differently. Fix #3135.

reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3159/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3159/timeline
performed_via_github_app: null
draft: false
pull_request:
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3159", "html_url": "https://github.com/huggingface/datasets/pull/3159", "diff_url": "https://github.com/huggingface/datasets/pull/3159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3159.patch", "merged_at": 1635399889000 }
is_pull_request: true
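The user-visible contract this PR establishes, as a small check (the `squad` path is only an illustrative example):

```python
from datasets import get_dataset_config_names

# Per this PR, the result is always a non-empty list: datasets without named
# configs report a single default config instead of an empty list.
configs = get_dataset_config_names("squad")
assert len(configs) >= 1
print(configs)
```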
url: https://api.github.com/repos/huggingface/datasets/issues/3158
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3158/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3158/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3158/events
html_url: https://github.com/huggingface/datasets/pull/3158
id: 1,035,158,070
node_id: PR_kwDODunzps4toGpe
number: 3,158
title: Fix string encoding for Value type
user:
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- That was fast!
created_at: 1,635,169,453,000
updated_at: 1,635,171,126,000
closed_at: 1,635,171,125,000
author_association: MEMBER
active_lock_reason: null
body:

Some metrics have `string` features, but currently they fail if users pass integers instead. Indeed, the feature encoding that handles the conversion of the user's objects to the right Python type is missing a case for `string`, while it already works as expected for integers, floats and booleans.

Here is an example code that didn't work previously, but that works with this fix:

```python
import datasets

# Note that 'id' is an integer while the SQuAD metric uses strings
predictions = [{'prediction_text': '1976', 'id': 5}]
references = [{'answers': {'answer_start': [97], 'text': ['1976']}, 'id': 5}]

squad_metric = datasets.load_metric("squad")
squad_metric.add_batch(predictions=predictions, references=references)
results = squad_metric.compute()
# {'exact_match': 100.0, 'f1': 100.0}
```

cc @sgugger @philschmid

reactions:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3158/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3158/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3158", "html_url": "https://github.com/huggingface/datasets/pull/3158", "diff_url": "https://github.com/huggingface/datasets/pull/3158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3158.patch", "merged_at": 1635171125000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3157
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3157/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3157/comments
https://api.github.com/repos/huggingface/datasets/issues/3157/events
https://github.com/huggingface/datasets/pull/3157
1,034,775,165
PR_kwDODunzps4tm3_I
3,157
Fixed: duplicate parameter and missing parameter in docstring
{ "login": "PanQiWei", "id": 46810637, "node_id": "MDQ6VXNlcjQ2ODEwNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/46810637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PanQiWei", "html_url": "https://github.com/PanQiWei", "followers_url": "https://api.github.com/users/PanQiWei/followers", "following_url": "https://api.github.com/users/PanQiWei/following{/other_user}", "gists_url": "https://api.github.com/users/PanQiWei/gists{/gist_id}", "starred_url": "https://api.github.com/users/PanQiWei/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PanQiWei/subscriptions", "organizations_url": "https://api.github.com/users/PanQiWei/orgs", "repos_url": "https://api.github.com/users/PanQiWei/repos", "events_url": "https://api.github.com/users/PanQiWei/events{/privacy}", "received_events_url": "https://api.github.com/users/PanQiWei/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,635,146,760,000
1,635,170,539,000
1,635,170,539,000
CONTRIBUTOR
null
Changes the duplicate parameter `data_files` in the `DatasetBuilder.__init__` docstring to the missing parameter `data_dir`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3157/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3157/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3157", "html_url": "https://github.com/huggingface/datasets/pull/3157", "diff_url": "https://github.com/huggingface/datasets/pull/3157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3157.patch", "merged_at": 1635170538000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3155
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3155/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3155/comments
https://api.github.com/repos/huggingface/datasets/issues/3155/events
https://github.com/huggingface/datasets/issues/3155
1,034,468,757
I_kwDODunzps49qL2V
3,155
Illegal instruction (core dumped) at datasets import
{ "login": "hacobe", "id": 91226467, "node_id": "MDQ6VXNlcjkxMjI2NDY3", "avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hacobe", "html_url": "https://github.com/hacobe", "followers_url": "https://api.github.com/users/hacobe/followers", "following_url": "https://api.github.com/users/hacobe/following{/other_user}", "gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}", "starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hacobe/subscriptions", "organizations_url": "https://api.github.com/users/hacobe/orgs", "repos_url": "https://api.github.com/users/hacobe/repos", "events_url": "https://api.github.com/users/hacobe/events{/privacy}", "received_events_url": "https://api.github.com/users/hacobe/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "It seems to be an issue with how conda-forge is building the binaries. It works on some machines, but not a machine with AMD Opteron 8384 processors." ]
1,635,096,096,000
1,637,262,424,000
1,637,262,423,000
CONTRIBUTOR
null
## Describe the bug I install datasets using conda and when I import datasets I get: "Illegal instruction (core dumped)" ## Steps to reproduce the bug ``` conda create --prefix path/to/env conda activate path/to/env conda install -c huggingface -c conda-forge datasets # exits with output "Illegal instruction (core dumped)" python -m datasets ``` ## Environment info When I run "datasets-cli env", I also get "Illegal instruction (core dumped)" If I run the following commands: ``` conda create --prefix path/to/another/new/env conda activate path/to/another/new/env conda install -c huggingface transformers transformers-cli env ``` Then I get: - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-67-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No Let me know what additional information you need in order to debug this issue. Thanks in advance!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3155/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3155/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3154
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3154/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3154/comments
https://api.github.com/repos/huggingface/datasets/issues/3154/events
https://github.com/huggingface/datasets/issues/3154
1,034,361,806
I_kwDODunzps49pxvO
3,154
Sacrebleu unexpected behaviour/requirement for data format
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @BramVanroy!\r\n\r\nGood question. This project relies on PyArrow (tables) to store data too big to fit in RAM. In the case of metrics, this means that the number of predictions and references has to match to form a table.\r\n\r\nThat's why your example throws an error even though it matches the schema:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],\r\n ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'],\r\n] # len(refs) = 2\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nInstead, it should be:\r\n```python\r\nrefs = [\r\n ['The dog bit the man.', 'The dog had bit the man.'],\r\n ['It was not unexpected.', 'No one was surprised.'],\r\n ['The man bit him first.', 'The man had bitten the dog.'], \r\n] # len(refs) = 3\r\n\r\nhyps = ['The dog bit the man.', \"It wasn't surprising.\", 'The man had just bitten him.'] # len(hyps) = 3\r\n```\r\n\r\nHowever, `sacreblue` works with the format that's described in your example, hence this part:\r\nhttps://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99\r\n\r\nHope you get an idea!", "Thanks, that makes sense. It is a bit unfortunate because it may be confusing to users since the input format is suddenly different than what they may expect from the underlying library/metric. But it is understandable due to how `datasets` works!" ]
1,635,065,733,000
1,635,671,312,000
1,635,671,311,000
CONTRIBUTOR
null
## Describe the bug When comparing with the original `sacrebleu` implementation, the `datasets` implementation does some strange things that I do not quite understand. This issue was triggered when I was trying to implement TER and found the datasets implementation of BLEU [here](https://github.com/huggingface/datasets/pull/3153). In the below snippet, the original sacrebleu snippet works just fine whereas the datasets implementation throws an error. ## Steps to reproduce the bug ```python import sacrebleu import datasets refs = [ ['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.'], ] hyps = ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'] expected_bleu = 48.530827 ds_bleu = datasets.load_metric("sacrebleu") bleu_score_sb = sacrebleu.corpus_bleu(hyps, refs).score print(bleu_score_sb, expected_bleu) # works: 48.5308... bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"] print(bleu_score_ds, expected_bleu) # ValueError: Predictions and/or references don't match the expected format. ``` This seems to be related to how datasets forces the features format here: https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L94-L99 and then manipulates the references during the compute stage here https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/metrics/sacrebleu/sacrebleu.py#L119-L122 I do not quite understand why that is required since sacrebleu handles argument parsing quite well [by itself](https://github.com/mjpost/sacrebleu/blob/2787185dd0f8d224c72ee5a831d163c2ac711a47/sacrebleu/metrics/base.py#L229). ## Actual results Traceback (most recent call last): File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2020.3\scratches\scratch_23.py", line 23, in <module> bleu_score_ds = ds_bleu.compute(predictions=hyps, references=refs)["score"] File "C:\dev\python\datasets\src\datasets\metric.py", line 392, in compute self.add_batch(predictions=predictions, references=references) File "C:\dev\python\datasets\src\datasets\metric.py", line 439, in add_batch raise ValueError( ValueError: Predictions and/or references don't match the expected format. Expected format: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Value(dtype='string', id='sequence'), length=-1, id='references')}, Input predictions: ['The dog bit the man.', "It wasn't surprising.", 'The man had just bitten him.'], Input references: [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'], ['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']] ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.2 - PyArrow version: 4.0.1
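A minimal sketch of the transposition the error above implies, assuming the shapes from the snippet (variable names are illustrative, not from the original report): the `datasets` sacrebleu metric wants one inner list of references per prediction, so sacrebleu-style reference rows can be transposed with `zip`.

```python
import datasets

refs_sacrebleu = [
    ["The dog bit the man.", "It was not unexpected.", "The man bit him first."],
    ["The dog had bit the man.", "No one was surprised.", "The man had bitten the dog."],
]
hyps = ["The dog bit the man.", "It wasn't surprising.", "The man had just bitten him."]

# Transpose: one inner list of references per prediction (len == len(hyps)).
refs_datasets = [list(group) for group in zip(*refs_sacrebleu)]

ds_bleu = datasets.load_metric("sacrebleu")
score = ds_bleu.compute(predictions=hyps, references=refs_datasets)["score"]
print(score)  # should now match sacrebleu.corpus_bleu(hyps, refs_sacrebleu).score
```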
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3154/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3154/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3153/comments
https://api.github.com/repos/huggingface/datasets/issues/3153/events
https://github.com/huggingface/datasets/pull/3153
1,034,179,198
PR_kwDODunzps4tlEVE
3,153
Add TER (as implemented in sacrebleu)
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The problem appears to stem from the omission of the lines that you mentioned. If you add them back and try examples from [this](https://huggingface.co/docs/datasets/using_metrics.html) tutorial (sacrebleu metric example) the code you implemented works fine.\r\n\r\nI think the purpose of these lines is follows:\r\n\r\n1. Sacrebleu metrics confusingly expect a nested list of strings when you have just one reference for each hypothesis (i.e. `[[\"example1\", \"example2\", \"example3]]`), while for cases with more than one reference a _nested list of lists of strings_ (i.e. `[[\"ref1a\", \"ref1b\"], [\"ref2a\", \"ref2b\"], [\"ref3a\", \"ref3b\"]]`) is expected instead. So `transformed_references` line outputs the required single reference format for sacrebleu's ter implementation which you can't pass directly to `compute`.\r\n2. I'm assuming that an additional check is also related to that confusing format with one/many references, because it's really difficult to tell what exactly you're doing wrong if you're not aware of that issue." ]
1,634,999,205,000
1,635,851,051,000
1,635,851,051,000
CONTRIBUTOR
null
Implements TER (Translation Edit Rate) as per its implementation in sacrebleu. Sacrebleu for BLEU scores is already implemented in `datasets` so I thought this would be a nice addition. I started from the sacrebleu implementation, as the two metrics have a lot in common. Verified with sacrebleu's [testing suite](https://github.com/mjpost/sacrebleu/blob/078c440168c6adc89ba75fe6d63f0d922d42bcfe/test/test_ter.py) that this indeed works as intended. ```python import datasets test_cases = [ (['aaaa bbbb cccc dddd'], ['aaaa bbbb cccc dddd'], 0), # perfect match (['dddd eeee ffff'], ['aaaa bbbb cccc'], 1), # no overlap ([''], ['a'], 1), # corner case, empty hypothesis (['d e f g h a b c'], ['a b c d e f g h'], 1 / 8), # a single shift fixes MT ( [ 'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit das Bild ungefähr dieselbe Größe aufweist wie die andere Größe .', 'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "', 'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie Werte für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " ein .', 'Sie können beispielsweise ein Dokument erstellen , das ein Auto über die Bühne enthält .', 'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "', ], [ 'wählen Sie " Bild neu berechnen , " um beim Ändern der Bildgröße Pixel hinzuzufügen oder zu entfernen , damit die Darstellung des Bildes in einer anderen Größe beibehalten wird .', 'wenn Sie alle Aufgaben im aktuellen Dokument aktualisieren möchten , wählen Sie im Menü des Aufgabenbedienfelds die Option " Alle Aufgaben aktualisieren . "', 'klicken Sie auf der Registerkarte " Optionen " auf die Schaltfläche " Benutzerdefiniert " und geben Sie für " Fehlerkorrektur-Level " und " Y / X-Verhältnis " niedrigere Werte ein .', 'Sie können beispielsweise ein Dokument erstellen , das ein Auto enthalt , das sich über die Bühne bewegt .', 'wählen Sie im Dialogfeld " Neu aus Vorlage " eine Vorlage aus und klicken Sie auf " Neu . "', ], 0.136 # realistic example from WMT dev data (2019) ), ] ter = datasets.load_metric(r"path\to\datasets\metrics\ter") predictions = ["hello there general kenobi", "foo bar foobar"] references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]] print(ter.compute(predictions=predictions, references=references)) for hyp, ref, score in test_cases: # Note the reference transformation which is different from sacrebleu's input format results = ter.compute(predictions=hyp, references=[[r] for r in ref]) assert 100*score == results["score"], f"expected {100*score}, got {results['score']}" ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3153/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3153/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3153", "html_url": "https://github.com/huggingface/datasets/pull/3153", "diff_url": "https://github.com/huggingface/datasets/pull/3153.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3153.patch", "merged_at": 1635851051000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3152
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3152/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3152/comments
https://api.github.com/repos/huggingface/datasets/issues/3152/events
https://github.com/huggingface/datasets/pull/3152
1,034,039,379
PR_kwDODunzps4tkqi-
3,152
Fix some typos in the documentation
{ "login": "h4iku", "id": 3812788, "node_id": "MDQ6VXNlcjM4MTI3ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/3812788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/h4iku", "html_url": "https://github.com/h4iku", "followers_url": "https://api.github.com/users/h4iku/followers", "following_url": "https://api.github.com/users/h4iku/following{/other_user}", "gists_url": "https://api.github.com/users/h4iku/gists{/gist_id}", "starred_url": "https://api.github.com/users/h4iku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h4iku/subscriptions", "organizations_url": "https://api.github.com/users/h4iku/orgs", "repos_url": "https://api.github.com/users/h4iku/repos", "events_url": "https://api.github.com/users/h4iku/events{/privacy}", "received_events_url": "https://api.github.com/users/h4iku/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,953,115,000
1,635,172,056,000
1,635,170,628,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3152/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3152/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3152", "html_url": "https://github.com/huggingface/datasets/pull/3152", "diff_url": "https://github.com/huggingface/datasets/pull/3152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3152.patch", "merged_at": 1635170628000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3151
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3151/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3151/comments
https://api.github.com/repos/huggingface/datasets/issues/3151/events
https://github.com/huggingface/datasets/pull/3151
1,033,890,501
PR_kwDODunzps4tkL7t
3,151
Re-add faiss to windows testing suite
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,931,269,000
1,635,850,054,000
1,635,847,563,000
CONTRIBUTOR
null
In recent versions, `faiss-cpu` seems to be available for Windows as well. See the [PyPi page](https://pypi.org/project/faiss-cpu/#files) to confirm. We can therefore include it for Windows in the setup file. At first, tests didn't pass due to permission problems caused by `NamedTemporaryFile` on Windows. This built-in library is notoriously poor at playing nice on Windows. The required change isn't pretty, but it works. First set `delete=False` so the file isn't automatically deleted on `exit`. Then, manually delete the file with `unlink`. It's weird, I know, but it works. ```python with tempfile.NamedTemporaryFile(delete=False) as tmp_file: # do stuff os.unlink(tmp_file.name) ``` closes #3150
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3151/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3151/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3151", "html_url": "https://github.com/huggingface/datasets/pull/3151", "diff_url": "https://github.com/huggingface/datasets/pull/3151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3151.patch", "merged_at": 1635847563000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3150
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3150/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3150/comments
https://api.github.com/repos/huggingface/datasets/issues/3150/events
https://github.com/huggingface/datasets/issues/3150
1,033,831,530
I_kwDODunzps49nwRq
3,150
Faiss _is_ available on Windows
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sure, feel free to open a PR." ]
1,634,926,036,000
1,635,847,563,000
1,635,847,563,000
CONTRIBUTOR
null
In the setup file, I find the following: https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171 However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wheels are available. Maybe this was true for older versions? For current versions, I think this can be removed. (This isn't really a bug, but I didn't know how else to tag it.) If you agree, I can do a quick PR and remove that line.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3150/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3150/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3149/comments
https://api.github.com/repos/huggingface/datasets/issues/3149/events
https://github.com/huggingface/datasets/pull/3149
1,033,747,625
PR_kwDODunzps4tjuUt
3,149
Add CMU Hinglish DoG Dataset for MT
{ "login": "Ishan-Kumar2", "id": 46553104, "node_id": "MDQ6VXNlcjQ2NTUzMTA0", "avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ishan-Kumar2", "html_url": "https://github.com/Ishan-Kumar2", "followers_url": "https://api.github.com/users/Ishan-Kumar2/followers", "following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}", "gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions", "organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs", "repos_url": "https://api.github.com/users/Ishan-Kumar2/repos", "events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}", "received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, thanks a lot for the help. I have moved the part as suggested. \r\nAlthough still while running the dummy data script, I face this issue\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ishan/anaconda3/bin/datasets-cli\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/commands/datasets_cli.py\", line 33, in main\r\n service.run()\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/commands/dummy_data.py\", line 318, in run\r\n self._autogenerate_dummy_data(\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/commands/dummy_data.py\", line 363, in _autogenerate_dummy_data\r\n dataset_builder._prepare_split(split_generator)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/builder.py\", line 1103, in _prepare_split\r\n example = self.info.features.encode_example(record)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/features/features.py\", line 981, in encode_example\r\n return encode_nested_example(self, example)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/features/features.py\", line 775, in encode_nested_example\r\n return {\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/features/features.py\", line 775, in <dictcomp>\r\n return {\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 99, in zip_dict\r\n yield key, tuple(d[key] for d in dicts)\r\n File \"/home/ishan/anaconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 99, in <genexpr>\r\n yield key, tuple(d[key] for d in dicts)\r\nKeyError: 'status'\r\n```\r\nThis KeyError is at times different from 'status' also.\r\nwhen I run \r\n```\r\ndatasets-cli dummy_data datasets/cmu_hinglish_dog --auto_generate --json_field='history'\r\n```\r\nI have tried removing unnecessary feature type definition, but that didn't help. Please let me know if I am missing something, thanks!", "The CI fail is unrelated to this PR and fixed on master. Merging !" ]
1,634,919,445,000
1,636,976,202,000
1,636,972,065,000
CONTRIBUTOR
null
Addresses part of #2841. Added the CMU Hinglish DoG dataset as in GLUECoS. Added it as a separate dataset because, unlike the other GLUECoS tasks, it can't be evaluated with a BERT-like model. It consists of a parallel corpus between Hinglish (Hindi-English) and English, which can be used for machine translation between the two. The data processing part is inspired by the GLUECoS repo [here](https://github.com/microsoft/GLUECoS/blob/7fdc51653e37a32aee17505c47b7d1da364fa77e/Data/Preprocess_Scripts/preprocess_mt_en_hi.py) The dummy data part is not working properly; it shows ``` UnboundLocalError: local variable 'generator_splits' referenced before assignment ``` when I run it without ``--auto_generate``. Please let me know how I can fix that. Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3149/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3149/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3149", "html_url": "https://github.com/huggingface/datasets/pull/3149", "diff_url": "https://github.com/huggingface/datasets/pull/3149.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3149.patch", "merged_at": 1636972065000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3147
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3147/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3147/comments
https://api.github.com/repos/huggingface/datasets/issues/3147/events
https://github.com/huggingface/datasets/pull/3147
1,033,607,659
PR_kwDODunzps4tjRHG
3,147
Fix CLI test to ignore verifications when saving infos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,910,766,000
1,635,321,710,000
1,635,321,709,000
MEMBER
null
Fix #3146.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3147/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3147/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3147", "html_url": "https://github.com/huggingface/datasets/pull/3147", "diff_url": "https://github.com/huggingface/datasets/pull/3147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3147.patch", "merged_at": 1635321709000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3146/comments
https://api.github.com/repos/huggingface/datasets/issues/3146/events
https://github.com/huggingface/datasets/issues/3146
1,033,605,947
I_kwDODunzps49m5M7
3,146
CLI test command throws NonMatchingSplitsSizesError when saving infos
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,910,653,000
1,635,321,709,000
1,635,321,709,000
MEMBER
null
When trying to generate a dataset's JSON metadata, a `NonMatchingSplitsSizesError` is thrown: ``` $ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs Testing builder 'Alittihad' (1/10) Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4... Traceback (most recent call last): File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module> sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')()) File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main service.run() File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run builder.download_and_prepare( File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare self._download_and_prepare( File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}] ``` This is because a previous run generated a wrong `dataset_info.json`. This error can be avoided by passing `--ignore_verifications`, but I think this should be assumed when passing `--save_infos`.
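A hedged note on the interim workaround the report itself mentions: rerunning the same command with verifications disabled (the flag below is quoted from the report; this sketch adds nothing beyond it).

```
datasets-cli test datasets/arabic_billion_words --save_infos --all_configs --ignore_verifications
```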
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3146/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3146/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3141
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3141/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3141/comments
https://api.github.com/repos/huggingface/datasets/issues/3141/events
https://github.com/huggingface/datasets/pull/3141
1,033,555,910
PR_kwDODunzps4tjGYz
3,141
Fix caching bugs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,907,565,000
1,634,935,928,000
1,634,910,425,000
CONTRIBUTOR
null
This PR fixes some caching bugs (most likely introduced in the latest refactor): * remove a ")" added by accident in the dataset dir name * correctly pass the namespace kwargs in `CachedDatasetModuleFactory` * improve the warning message if `HF_DATASETS_OFFLINE` is `True`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3141/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3141/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3141", "html_url": "https://github.com/huggingface/datasets/pull/3141", "diff_url": "https://github.com/huggingface/datasets/pull/3141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3141.patch", "merged_at": 1634910424000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3137
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3137/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3137/comments
https://api.github.com/repos/huggingface/datasets/issues/3137/events
https://github.com/huggingface/datasets/pull/3137
1,033,363,652
PR_kwDODunzps4tievk
3,137
Fix numpy deprecation warning for ragged tensors
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "This'll be a really helpful fix, thank you!" ]
1,634,894,266,000
1,634,918,655,000
1,634,918,654,000
MEMBER
null
NumPy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together; otherwise, the resulting array should have `dtype=np.object`. Fix #3084 cc @Rocketknight1
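A minimal sketch of the NumPy behavior being handled, assuming NumPy >= 1.20 (where this became a `VisibleDeprecationWarning`; later releases turn it into an error):

```python
import numpy as np

ragged = [[1, 2, 3], [4, 5]]  # rows of unequal length

arr = np.array(ragged)  # VisibleDeprecationWarning without an explicit dtype
arr = np.array(ragged, dtype=object)  # explicit object dtype avoids the warning

uniform = [[1, 2, 3], [4, 5, 6]]  # matching shapes collate into a regular array
arr2 = np.array(uniform)  # shape (2, 3), no warning needed
```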
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3137/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3137/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3137", "html_url": "https://github.com/huggingface/datasets/pull/3137", "diff_url": "https://github.com/huggingface/datasets/pull/3137.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3137.patch", "merged_at": 1634918654000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3136
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3136/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3136/comments
https://api.github.com/repos/huggingface/datasets/issues/3136/events
https://github.com/huggingface/datasets/pull/3136
1,033,360,396
PR_kwDODunzps4tieFi
3,136
Fix script of Arabic Billion Words dataset to return all data
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,894,064,000
1,634,909,321,000
1,634,909,320,000
MEMBER
null
The script has a bug and only parses and generates a portion of the entire dataset. This PR fixes the loading script so that it properly parses the entire dataset. The current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurations except for one: - For "Youm7" we generate more examples (1172136) than the ones reported by the paper (1025027) | | Number of examples | Number of examples according to the source | |:---------------|-------------------:|-----:| | Alittihad | 349342 |349342 | | Almasryalyoum | 291723 |291723 | | Almustaqbal | 446873 |446873 | | Alqabas | 817274 |817274 | | Echoroukonline | 139732 |139732 | | Ryiadh | 858188 | 858188 | | Sabanews | 92149 |92149 | | SaudiYoum | 888068 |888068 | | Techreen | 314597 |314597 | | Youm7 | 1172136 |1025027 | Fix #3126.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3136/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3136/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3136", "html_url": "https://github.com/huggingface/datasets/pull/3136", "diff_url": "https://github.com/huggingface/datasets/pull/3136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3136.patch", "merged_at": 1634909319000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3135/comments
https://api.github.com/repos/huggingface/datasets/issues/3135/events
https://github.com/huggingface/datasets/issues/3135
1,033,294,299
I_kwDODunzps49ltHb
3,135
Make inspect.get_dataset_config_names always return a non-empty list of configs
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @severo, I guess this issue requests not only to be able to access the configuration name (by using `inspect.get_dataset_config_names`), but the configuration itself as well (I mean you use the name to get the configuration afterwards, maybe using `builder_cls.builder_configs`), is this right?", "Yes, maybe the issue could be reformulated. As a user, I want to avoid having to manage special cases:\r\n- I want to be able to get the names of a dataset's configs, and use them in the rest of the API (get the data, get the split names, etc).\r\n- I don't want to have to manage datasets with named configs (`glue`) differently from datasets without named configs (`acronym_identification`, `Check/region_1`)" ]
1,634,889,770,000
1,635,399,889,000
1,635,399,889,000
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** Currently, some datasets have a configuration, while others don't. It would be simpler for the user to always have configuration names to refer to. **Describe the solution you'd like** In that sense, `inspect.get_dataset_config_names` should always return at least one configuration name, be it `default` or `Check___region_1` (for community datasets like `Check/region_1`). https://github.com/huggingface/datasets/blob/c5747a5e1dde2670b7f2ca6e79e2ffd99dff85af/src/datasets/inspect.py#L161
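A sketch of the requested behavior with illustrative outputs — the `default` fallback shown for config-less datasets is the proposal, not the library's behavior at the time of writing, and the import path assumes the function is re-exported at the top level:

```python
from datasets import get_dataset_config_names

get_dataset_config_names("glue")  # ['cola', 'sst2', ...] — dataset with named configs
get_dataset_config_names("acronym_identification")  # proposed: ['default'] instead of []
get_dataset_config_names("Check/region_1")  # proposed: ['Check___region_1']
```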
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3135/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3135/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3133/comments
https://api.github.com/repos/huggingface/datasets/issues/3133/events
https://github.com/huggingface/datasets/pull/3133
1,032,511,710
PR_kwDODunzps4tftyZ
3,133
Support Audio feature in streaming mode
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,823,477,000
1,636,726,385,000
1,636,726,384,000
MEMBER
null
Fix #3132.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3133/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3133/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3133", "html_url": "https://github.com/huggingface/datasets/pull/3133", "diff_url": "https://github.com/huggingface/datasets/pull/3133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3133.patch", "merged_at": 1636726384000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3132/comments
https://api.github.com/repos/huggingface/datasets/issues/3132/events
https://github.com/huggingface/datasets/issues/3132
1,032,505,430
I_kwDODunzps49ishW
3,132
Support Audio feature in streaming mode
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,823,138,000
1,636,726,384,000
1,636,726,384,000
MEMBER
null
Currently, the Audio feature is only supported for non-streaming datasets. Due to the large size of many speech datasets, we should also support the Audio feature in streaming mode.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3132/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3132/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3129/comments
https://api.github.com/repos/huggingface/datasets/issues/3129/events
https://github.com/huggingface/datasets/pull/3129
1,032,234,167
PR_kwDODunzps4tezlA
3,129
Support Audio feature for TAR archives in sequential access
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also do you think we can adapt `cast_column` to keep the same value for this new parameter when the user only wants to change the sampling rate ?", "Thanks for your comments, @lhoestq, I will address them afterwards.\r\n\r\nBut, I think it is more important/urgent first address the current blocking non-passing test: https://github.com/huggingface/datasets/runs/4143579241?check_suite_focus=true\r\n- I am thinking of a way of solving it, but if you have any hint, it will be more than welcome! 😅 \r\n\r\nBasically:\r\n```\r\n{'audio': '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_dataset_with_audio_featur1/data/test_audio_44100.wav'}\r\n``` \r\nbecomes\r\n```\r\n{'audio': {'bytes': None, 'path': '/tmp/pytest-of-runner/pytest-0/popen-gw1/test_dataset_with_audio_featur1/data/test_audio_44100.wav'}}\r\n```\r\nafter a `map`, which is what was stored in the Arrow file. However we expect it remains invariant after this `map`.", "@lhoestq, @mariosasko I finally proposed another implementation different from my last one:\r\n- Before: store Audio always a struct<path: string, bytes: binary>, where bytes can be None\r\n- Now, depending on the examples, either store Audio as a struct (as before), or as a string.\r\n\r\nPlease note that the main motivation for this change was the issue mentioned above: https://github.com/huggingface/datasets/pull/3129#issuecomment-964347056\r\n", "Until here we had the assumption that a Features object always has an associated, deterministic, pyarrow schema. This is useful to ensure that we are able to concatenate two datasets that have the same features for example.\r\n\r\nBy breaking this assumption for the Audio type, how can we ensure that we can concatenate two audio datasets if one has Audio as a struct and the other a string ?", "Oh I noticed that the Audio feature type has a private attribute `_storage_dtype`, so the assumption still holds, since they are now different feature types depending on the this attribute :)\r\n(i mean different from the python equal operator point of view)", "I think this PR is ready, @lhoestq, @mariosasko. ", "Nit: We should also mention the new storage structure in the `Features` docstring [here](https://github.com/huggingface/datasets/blob/b29fb550c31de337b952035a7584147e0f18c0cf/src/datasets/features/features.py#L966) for users to know what type of value to return in their dataset scripts (we also have a link to that docstring in the `ADD_NEW_DATASET` template)." ]
1,634,806,611,000
1,637,170,928,000
1,637,170,927,000
MEMBER
null
Add Audio feature support for TAR archived files in sequential access. Fix #3128.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3129/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3129/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3129", "html_url": "https://github.com/huggingface/datasets/pull/3129", "diff_url": "https://github.com/huggingface/datasets/pull/3129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3129.patch", "merged_at": 1637170927000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3128/comments
https://api.github.com/repos/huggingface/datasets/issues/3128/events
https://github.com/huggingface/datasets/issues/3128
1,032,201,870
I_kwDODunzps49hiaO
3,128
Support Audio feature for TAR archives in sequential access
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,804,581,000
1,637,170,927,000
1,637,170,927,000
MEMBER
null
Currently, the Audio feature accesses each audio file by its file path. However, streamed TAR archive files do not allow random access to their archived files. Therefore, we should enhance the Audio feature to support TAR archived files in sequential access.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3128/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3128/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3126
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3126/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3126/comments
https://api.github.com/repos/huggingface/datasets/issues/3126/events
https://github.com/huggingface/datasets/issues/3126
1,032,093,055
I_kwDODunzps49hH1_
3,126
"arabic_billion_words" dataset does not create the full dataset
{ "login": "vitalyshalumov", "id": 33824221, "node_id": "MDQ6VXNlcjMzODI0MjIx", "avatar_url": "https://avatars.githubusercontent.com/u/33824221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vitalyshalumov", "html_url": "https://github.com/vitalyshalumov", "followers_url": "https://api.github.com/users/vitalyshalumov/followers", "following_url": "https://api.github.com/users/vitalyshalumov/following{/other_user}", "gists_url": "https://api.github.com/users/vitalyshalumov/gists{/gist_id}", "starred_url": "https://api.github.com/users/vitalyshalumov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vitalyshalumov/subscriptions", "organizations_url": "https://api.github.com/users/vitalyshalumov/orgs", "repos_url": "https://api.github.com/users/vitalyshalumov/repos", "events_url": "https://api.github.com/users/vitalyshalumov/events{/privacy}", "received_events_url": "https://api.github.com/users/vitalyshalumov/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @vitalyshalumov.\r\n\r\nApparently the script to parse the data has a bug, and does not generate the entire dataset.\r\n\r\nI'm fixing it." ]
1,634,796,158,000
1,634,909,320,000
1,634,909,320,000
NONE
null
## Describe the bug When running: raw_dataset = load_dataset('arabic_billion_words','Alittihad') the correct dataset file is pulled from the URL. But the generated dataset includes just a small portion of the data included in the file. This is true for all other portions of the "arabic_billion_words" dataset ('Almasryalyoum',.....) ## Steps to reproduce the bug ```python # Sample code to reproduce the bug raw_dataset = load_dataset('arabic_billion_words','Alittihad') # The screen message Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 20.62 MiB, post-processed: Unknown size, total: 352.74 MiB) ## Expected results over 100K sentences ## Actual results only 11K sentences ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3126/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3126/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3125/comments
https://api.github.com/repos/huggingface/datasets/issues/3125/events
https://github.com/huggingface/datasets/pull/3125
1,032,046,666
PR_kwDODunzps4teNPC
3,125
Add SLR83 to OpenSLR
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,790,360,000
1,634,933,405,000
1,634,891,422,000
CONTRIBUTOR
null
The PR resolves #3119, adding SLR83 (UK and Ireland dialects) to the previously created OpenSLR dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3125/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3125/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3125", "html_url": "https://github.com/huggingface/datasets/pull/3125", "diff_url": "https://github.com/huggingface/datasets/pull/3125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3125.patch", "merged_at": 1634891422000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3124/comments
https://api.github.com/repos/huggingface/datasets/issues/3124/events
https://github.com/huggingface/datasets/pull/3124
1,031,976,286
PR_kwDODunzps4td-5w
3,124
More efficient nested features encoding
{ "login": "eladsegal", "id": 13485709, "node_id": "MDQ6VXNlcjEzNDg1NzA5", "avatar_url": "https://avatars.githubusercontent.com/u/13485709?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eladsegal", "html_url": "https://github.com/eladsegal", "followers_url": "https://api.github.com/users/eladsegal/followers", "following_url": "https://api.github.com/users/eladsegal/following{/other_user}", "gists_url": "https://api.github.com/users/eladsegal/gists{/gist_id}", "starred_url": "https://api.github.com/users/eladsegal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eladsegal/subscriptions", "organizations_url": "https://api.github.com/users/eladsegal/orgs", "repos_url": "https://api.github.com/users/eladsegal/repos", "events_url": "https://api.github.com/users/eladsegal/events{/privacy}", "received_events_url": "https://api.github.com/users/eladsegal/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq @albertvillanova @mariosasko\r\nCan you please check this out?", "Thanks, done!" ]
1,634,781,331,000
1,635,865,633,000
1,635,851,044,000
CONTRIBUTOR
null
Nested encoding of features wastes a lot of time on operations which are effectively doing nothing when lists are used. For example, if the input contains a list of integers, `encoded_nested_example` will iterate over it and apply `encoded_nested_example` to every element even though it just returns the int as is. A similar issue is handled at an earlier stage when casting pytorch/tensorflow/pandas objects to python lists/numpy arrays: https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L149-L156 https://github.com/huggingface/datasets/blob/c98c23c4260edadab00f997d1a5d66b7f2e93ce9/src/datasets/features/features.py#L212-L228 In this pull request I suggest using the same approach in `encoded_nested_example`. In my setup there was a major speedup with this change: loading the data was at least 4x faster.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3124/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3124/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3124", "html_url": "https://github.com/huggingface/datasets/pull/3124", "diff_url": "https://github.com/huggingface/datasets/pull/3124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3124.patch", "merged_at": 1635851044000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3123/comments
https://api.github.com/repos/huggingface/datasets/issues/3123/events
https://github.com/huggingface/datasets/issues/3123
1,031,793,207
I_kwDODunzps49f-o3
3,123
Segmentation fault when loading datasets from file
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! I created an issue on Arrow's JIRA after making a minimum reproducible example\r\n\r\nhttps://issues.apache.org/jira/browse/ARROW-14439\r\n\r\n```python\r\nimport io\r\n\r\nimport pyarrow.json as paj\r\n\r\nbatch = b'{\"a\": [], \"b\": 1}\\n{\"b\": 1}'\r\nblock_size = 12\r\n\r\npaj.read_json(\r\n io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)\r\n)\r\n```\r\n\r\nI don't see a way to workaround this properly now without hurting the performance of the JSON loader significantly though", "The issue has been fixed in pyarrow 6.0.0, please update pyarrow :)\r\n\r\nThe issue was due to missing fields in the JSON data of type list. Now it's working fine and missing list fields are replaced with empty lists" ]
1,634,760,971,000
1,635,865,027,000
1,635,865,027,000
MEMBER
null
## Describe the bug Custom dataset loading sometimes segfaults and kills the process if chunks contain a variety of features. ## Steps to reproduce the bug Download an example file: ``` wget https://gist.githubusercontent.com/TevenLeScao/11e2184394b3fa47d693de2550942c6b/raw/4232704d08fbfcaf93e5b51def9e5051507651ad/tiny_kelm.jsonl ``` Then in Python: ``` import datasets tiny_kelm = datasets.load_dataset("json", data_files="tiny_kelm.jsonl", chunksize=100000) ``` ## Expected results a functional `tiny_kelm` dataset ## Actual results ☠️ `Segmentation fault (core dumped)` ☠️ ## Environment info - `datasets` version: 1.14.0 - Platform: Linux-5.11.0-38-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3123/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3123/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3122/comments
https://api.github.com/repos/huggingface/datasets/issues/3122/events
https://github.com/huggingface/datasets/issues/3122
1,031,787,509
I_kwDODunzps49f9P1
3,122
OSError with a custom dataset loading script
{ "login": "suzanab", "id": 38602977, "node_id": "MDQ6VXNlcjM4NjAyOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/38602977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/suzanab", "html_url": "https://github.com/suzanab", "followers_url": "https://api.github.com/users/suzanab/followers", "following_url": "https://api.github.com/users/suzanab/following{/other_user}", "gists_url": "https://api.github.com/users/suzanab/gists{/gist_id}", "starred_url": "https://api.github.com/users/suzanab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/suzanab/subscriptions", "organizations_url": "https://api.github.com/users/suzanab/orgs", "repos_url": "https://api.github.com/users/suzanab/repos", "events_url": "https://api.github.com/users/suzanab/events{/privacy}", "received_events_url": "https://api.github.com/users/suzanab/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthere is a difference in how the `data_dir` is zipped between the `classla/janes_tag` and the `classla/reldi_hr` dataset. After unzipping, for the former, the data files (`*.conllup`) are in the root directory (root -> data files), and for the latter, they are inside the `data` directory (root -> `data` -> data files).\r\n\r\nThis can be fixed by removing the `os.path.join` call in https://huggingface.co/datasets/classla/janes_tag/blob/main/janes_tag.py#L86\r\n\r\nLet me know if this works for you.", "Hi Mario,\r\n\r\nI had already tried that before, but it didn't work. I have now recreated the `classla/janes_tag` zip file so that it also contains the `data` directory, but I am still getting the same error.", "Hi,\r\n\r\nI just tried to download the `classla/janes_tag` dataset, and this time the zip file is extracted correctly. However, the script is now throwing the IndexError, probably due to a bug in the `_generate_examples`.\r\n\r\nLet me know if you are still getting the same error.", "I am still getting the same error.", "Hi, \r\n\r\ncould you try to download the dataset with a different `cache_dir` like so:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('classla/janes_tag', split='validation', cache_dir=\"path/to/different/cache/dir\")\r\n```\r\nIf this works, then most likely the cached extracted data is causing issues. This data is stored at `~/.cache/huggingface/datasets/downloads/extracted` and needs to be deleted, and then it should work (you can easily locate the directory with the path given in the `OSError` message). Additionally, I'd suggest you to update `datasets` to the newest version with:\r\n```\r\npip install -U datasets\r\n```", "Thank you, deleting the `~/.cache/huggingface/datasets/downloads/extracted` directory helped. However, I am still having problems.\r\n\r\nThere was indeed a bug in the script that was throwing an `IndexError`, which I have now corrected (added the condition to skip the lines starting with '# text') and it is working locally, but still throws an error when I try to load the dataset from HuggingFace. I literally copied and pasted the `_generate_examples` function and ran it on the `dev_all.conllup` file, which I even re-downloaded from the repository to be certain that the files are exactly the same. I also deleted everything again just in case, but it didn't help. The code works locally, but throws an `IndexError` when loading from `datasets.`", "Hi,\r\n\r\nDid some investigation.\r\n\r\nTo fix the dataset script on the Hub, append the following labels to the `names` list of the `upos_tags` field:\r\n```'INTJ NOUN', 'AUX PRON', 'PART ADV', 'PRON ADP', 'INTJ INTJ', 'VERB NOUN', 'NOUN AUX'```.\r\n\r\nThis step is required to avoid an error due to missing labels in the following step which is:\r\n```python\r\nload_dataset(\"classla/janes_tag\", split=\"validation\", download_mode=\"force_redownload\")\r\n```\r\nThis will generate and cache the dataset, so specifying `download_mode` will not be required anymore unless you update the script/data on the Hub.", "It works now, thank you!" ]
1,634,760,519,000
1,637,661,338,000
1,637,661,338,000
NONE
null
## Describe the bug I am getting an OS error when trying to load the newly uploaded dataset classla/janes_tag. What puzzles me is that I have already uploaded a very similar dataset - classla/reldi_hr - with no issues. The loading scripts for the two datasets are almost identical and they have the same directory structure, yet I am only getting an error with janes_tag. ## Steps to reproduce the bug ```python dataset = datasets.load_dataset('classla/janes_tag', split='validation') ``` ## Expected results Dataset correctly loaded. ## Actual results Traceback (most recent call last): File "C:/mypath/test.py", line 91, in <module> load_and_print('janes_tag') File "C:/mypath/test.py", line 32, in load_and_print dataset = datasets.load_dataset('classla/{}'.format(ds_name), split='validation') File "C:\mypath\venv\lib\site-packages\datasets\load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "C:\mypath\venv\lib\site-packages\datasets\builder.py", line 704, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: [Errno 2] No such file or directory: 'C:\\mypath\\.cache\\huggingface\\datasets\\downloads\\2c9996e44bdc5af9c89bffb9e6d7a3e42fdb2f56bacab45de13b20f3032ea7ca\\data\\train_all.conllup' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.5 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3122/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3122/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3121
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3121/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3121/comments
https://api.github.com/repos/huggingface/datasets/issues/3121/events
https://github.com/huggingface/datasets/pull/3121
1,031,673,115
PR_kwDODunzps4tc_6q
3,121
Use huggingface_hub.HfApi to list datasets/metrics
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,752,109,000
1,636,112,708,000
1,636,105,716,000
CONTRIBUTOR
null
Delete `datasets.inspect.HfApi` and use `huggingface_hub.HfApi` instead. WIP until https://github.com/huggingface/huggingface_hub/pull/429 is merged; then wait for the new release of `huggingface_hub`, update the `huggingface_hub` version in `setup.py`, and merge this PR. cc: @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3121/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3121/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3121", "html_url": "https://github.com/huggingface/datasets/pull/3121", "diff_url": "https://github.com/huggingface/datasets/pull/3121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3121.patch", "merged_at": 1636105715000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3120/comments
https://api.github.com/repos/huggingface/datasets/issues/3120/events
https://github.com/huggingface/datasets/pull/3120
1,031,574,511
PR_kwDODunzps4tcril
3,120
Correctly update metadata to preserve features when concatenating datasets with axis=1
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,745,298,000
1,634,891,331,000
1,634,827,821,000
CONTRIBUTOR
null
This PR correctly updates metadata to preserve higher-level feature types (e.g. `ClassLabel`) in `datasets.concatenate_datasets` when `axis=1`. Previously, we would delete the feature metadata in `datasets.concatenate_datasets` if `axis=1` and restore the feature types from the arrow table schema in `Dataset.__init__`. However, this approach only works for simple feature types (e.g. `Value`). Fixes #3111
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3120/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3120/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3120", "html_url": "https://github.com/huggingface/datasets/pull/3120", "diff_url": "https://github.com/huggingface/datasets/pull/3120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3120.patch", "merged_at": 1634827821000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3119
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3119/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3119/comments
https://api.github.com/repos/huggingface/datasets/issues/3119/events
https://github.com/huggingface/datasets/issues/3119
1,031,328,044
I_kwDODunzps49eNEs
3,119
Add OpenSLR 83 - Crowdsourced high-quality UK and Ireland English Dialect speech
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false }
[ { "login": "tyrius02", "id": 4561309, "node_id": "MDQ6VXNlcjQ1NjEzMDk=", "avatar_url": "https://avatars.githubusercontent.com/u/4561309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tyrius02", "html_url": "https://github.com/tyrius02", "followers_url": "https://api.github.com/users/tyrius02/followers", "following_url": "https://api.github.com/users/tyrius02/following{/other_user}", "gists_url": "https://api.github.com/users/tyrius02/gists{/gist_id}", "starred_url": "https://api.github.com/users/tyrius02/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tyrius02/subscriptions", "organizations_url": "https://api.github.com/users/tyrius02/orgs", "repos_url": "https://api.github.com/users/tyrius02/repos", "events_url": "https://api.github.com/users/tyrius02/events{/privacy}", "received_events_url": "https://api.github.com/users/tyrius02/received_events", "type": "User", "site_admin": false } ]
null
[ "Ugh. The index files for SLR83 are CSV, not TSV. I need to add logic to process these index files." ]
1,634,731,507,000
1,634,929,252,000
1,634,891,422,000
CONTRIBUTOR
null
## Adding a Dataset - **Name:** *openslr* - **Description:** *Data set which contains male and female recordings of English from various dialects of the UK and Ireland.* - **Paper:** *https://www.openslr.org/resources/83/about.html* - **Data:** *Eleven separate data files can be found via https://www.openslr.org/resources/83/* - **Motivation:** *Increase English ASR data with UK and Irish dialects* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). The *openslr* dataset already exists; this will add an additional subset, *SLR83*.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3119/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3119/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3118
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3118/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3118/comments
https://api.github.com/repos/huggingface/datasets/issues/3118/events
https://github.com/huggingface/datasets/pull/3118
1,031,309,549
PR_kwDODunzps4tb0LY
3,118
Fix CI error at each release commit
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,730,278,000
1,634,734,956,000
1,634,734,956,000
MEMBER
null
Fix test_load_dataset_canonical at release commit. Fix #3117.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3118/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3118/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3118", "html_url": "https://github.com/huggingface/datasets/pull/3118", "diff_url": "https://github.com/huggingface/datasets/pull/3118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3118.patch", "merged_at": 1634734955000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3117/comments
https://api.github.com/repos/huggingface/datasets/issues/3117/events
https://github.com/huggingface/datasets/issues/3117
1,031,308,083
I_kwDODunzps49eIMz
3,117
CI error at each release commit
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,730,173,000
1,634,734,955,000
1,634,734,955,000
MEMBER
null
After 1.12.0, there is a recurrent CI error at each release commit: https://app.circleci.com/pipelines/github/huggingface/datasets/8289/workflows/665d954d-e409-4602-8202-e678594d2946/jobs/51110 ``` ____________________ LoadTest.test_load_dataset_canonical _____________________ [gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe self = <tests.test_load.LoadTest testMethod=test_load_dataset_canonical> def test_load_dataset_canonical(self): scripts_version = os.getenv("HF_SCRIPTS_VERSION", SCRIPTS_VERSION) with self.assertRaises(FileNotFoundError) as context: datasets.load_dataset("_dummy") self.assertIn( f"https://raw.githubusercontent.com/huggingface/datasets/{scripts_version}/datasets/_dummy/_dummy.py", > str(context.exception), ) E AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/1.14.0/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at C:\\Users\\circleci\\datasets\\_dummy\\_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py" tests\test_load.py:358: AssertionError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3117/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3117/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3116/comments
https://api.github.com/repos/huggingface/datasets/issues/3116/events
https://github.com/huggingface/datasets/pull/3116
1,031,270,611
PR_kwDODunzps4tbr6g
3,116
Update doc links to point to new docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
1,634,727,647,000
1,634,891,368,000
1,634,891,205,000
CONTRIBUTOR
null
This PR: * updates the README links and the ADD_NEW_DATASET template to point to the new docs (the new docs don't have a section with the list of all the possible features, so I added that info to the `Features` docstring, which is then referenced in the ADD_NEW_DATASET template) * fixes some broken links in the `.rst` files (fixed with the `make linkcheck` tool)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3116/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3116/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3116", "html_url": "https://github.com/huggingface/datasets/pull/3116", "diff_url": "https://github.com/huggingface/datasets/pull/3116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3116.patch", "merged_at": 1634891205000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3115/comments
https://api.github.com/repos/huggingface/datasets/issues/3115/events
https://github.com/huggingface/datasets/pull/3115
1,030,737,524
PR_kwDODunzps4tZ-Vr
3,115
Fill in dataset card for NCBI disease dataset
{ "login": "edugp", "id": 17855740, "node_id": "MDQ6VXNlcjE3ODU1NzQw", "avatar_url": "https://avatars.githubusercontent.com/u/17855740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/edugp", "html_url": "https://github.com/edugp", "followers_url": "https://api.github.com/users/edugp/followers", "following_url": "https://api.github.com/users/edugp/following{/other_user}", "gists_url": "https://api.github.com/users/edugp/gists{/gist_id}", "starred_url": "https://api.github.com/users/edugp/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/edugp/subscriptions", "organizations_url": "https://api.github.com/users/edugp/orgs", "repos_url": "https://api.github.com/users/edugp/repos", "events_url": "https://api.github.com/users/edugp/events{/privacy}", "received_events_url": "https://api.github.com/users/edugp/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,677,025,000
1,634,891,107,000
1,634,891,107,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3115/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3115/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3115", "html_url": "https://github.com/huggingface/datasets/pull/3115", "diff_url": "https://github.com/huggingface/datasets/pull/3115.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3115.patch", "merged_at": 1634891107000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3111/comments
https://api.github.com/repos/huggingface/datasets/issues/3111/events
https://github.com/huggingface/datasets/issues/3111
1,030,598,983
I_kwDODunzps49bbFH
3,111
concatenate_datasets removes ClassLabel typing.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1" ]
1,634,666,731,000
1,634,827,821,000
1,634,827,821,000
CONTRIBUTOR
null
## Describe the bug When concatenating two datasets, we lose typing of ClassLabel columns. I can work on this if this is a legitimate bug. ## Steps to reproduce the bug ```python import datasets from datasets import Dataset, ClassLabel, Value, concatenate_datasets DS_LEN = 100 my_dataset = Dataset.from_dict( { "sentence": [f"{chr(i % 10)}" for i in range(DS_LEN)], "label": [i % 2 for i in range(DS_LEN)] } ) my_predictions = Dataset.from_dict( { "pred": [(i + 1) % 2 for i in range(DS_LEN)] } ) my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])})) print("Original") print(my_dataset) print(my_dataset.features) concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1) print("Concatenated") print(concat_ds) print(concat_ds.features) ``` ## Expected results The features of `concat_ds` should contain ClassLabel. ## Actual results On master, I get: ``` {'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)} ``` ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
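Until the fix lands, a possible workaround is to re-cast the concatenated dataset so the `ClassLabel` typing is restored; this is a minimal sketch assuming the original feature types are known, not the eventual library fix:

```python
import datasets
from datasets import ClassLabel, Dataset, Value, concatenate_datasets

ds = Dataset.from_dict({"sentence": ["a", "b"], "label": [0, 1]})
ds = ds.cast(datasets.Features({"sentence": Value("string"),
                                "label": ClassLabel(2, names=["POS", "NEG"])}))
preds = Dataset.from_dict({"pred": [1, 0]})

concat_ds = concatenate_datasets([ds, preds], axis=1)
# axis=1 concatenation currently downgrades ClassLabel to int64,
# so re-apply the intended features on the combined dataset.
concat_ds = concat_ds.cast(datasets.Features({
    "sentence": Value("string"),
    "label": ClassLabel(2, names=["POS", "NEG"]),
    "pred": Value("int64"),
}))
print(concat_ds.features)  # 'label' is a ClassLabel again
```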
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3111/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3110
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3110/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3110/comments
https://api.github.com/repos/huggingface/datasets/issues/3110/events
https://github.com/huggingface/datasets/pull/3110
1,030,558,484
PR_kwDODunzps4tZakS
3,110
Stream TAR-based dataset using iter_archive
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm creating a new branch `stream-tar-audio` just for the audio datasets since they need https://github.com/huggingface/datasets/pull/3129 to be merged first", "The CI fails are only related to missing sections or tags in the dataset cards - which is unrelated to this PR" ]
1,634,663,784,000
1,636,134,529,000
1,636,134,528,000
MEMBER
null
I converted all the datasets based on TAR archives to use iter_archive instead, so that they can be streamed. This means that around 80 datasets become streamable :)
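For reference, a minimal sketch of the pattern used in these conversions; the URL, file filter, and feature names are placeholders, not taken from any specific dataset script:

```python
import datasets

_URL = "https://example.com/data.tar.gz"  # placeholder URL, not a real dataset

class TarStreamingDataset(datasets.GeneratorBasedBuilder):
    """Sketch of the iter_archive pattern used in the conversions."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # download() only fetches the archive; iter_archive() then yields
        # (path_inside_tar, file_object) pairs without extracting it,
        # which is what makes the dataset streamable.
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for path, f in files:
            if path.endswith(".txt"):
                yield path, {"text": f.read().decode("utf-8")}
```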
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3110/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3110/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3110", "html_url": "https://github.com/huggingface/datasets/pull/3110", "diff_url": "https://github.com/huggingface/datasets/pull/3110.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3110.patch", "merged_at": 1636134528000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3109
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3109/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3109/comments
https://api.github.com/repos/huggingface/datasets/issues/3109/events
https://github.com/huggingface/datasets/pull/3109
1,030,543,284
PR_kwDODunzps4tZXmC
3,109
Update BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,662,771,000
1,634,663,608,000
1,634,663,607,000
MEMBER
null
Update BibTeX entry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3109/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3109/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3109", "html_url": "https://github.com/huggingface/datasets/pull/3109", "diff_url": "https://github.com/huggingface/datasets/pull/3109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3109.patch", "merged_at": 1634663607000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3108/comments
https://api.github.com/repos/huggingface/datasets/issues/3108/events
https://github.com/huggingface/datasets/pull/3108
1,030,405,618
PR_kwDODunzps4tY8ID
3,108
Add Google BLEU (aka GLEU) metric
{ "login": "slowwavesleep", "id": 44175589, "node_id": "MDQ6VXNlcjQ0MTc1NTg5", "avatar_url": "https://avatars.githubusercontent.com/u/44175589?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slowwavesleep", "html_url": "https://github.com/slowwavesleep", "followers_url": "https://api.github.com/users/slowwavesleep/followers", "following_url": "https://api.github.com/users/slowwavesleep/following{/other_user}", "gists_url": "https://api.github.com/users/slowwavesleep/gists{/gist_id}", "starred_url": "https://api.github.com/users/slowwavesleep/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slowwavesleep/subscriptions", "organizations_url": "https://api.github.com/users/slowwavesleep/orgs", "repos_url": "https://api.github.com/users/slowwavesleep/repos", "events_url": "https://api.github.com/users/slowwavesleep/events{/privacy}", "received_events_url": "https://api.github.com/users/slowwavesleep/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,654,918,000
1,635,170,824,000
1,635,170,824,000
CONTRIBUTOR
null
This PR adds the NLTK implementation of the Google BLEU metric. This is also part of an effort to resolve an unfortunate naming collision between GLEU for machine translation and GLEU for grammatical error correction. I used [this page](https://huggingface.co/docs/datasets/add_metric.html) for reference. Please point me in the right direction if I missed anything.
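For context, here is a quick sketch of the underlying NLTK scorer, independent of the `datasets` metric wrapper added in this PR:

```python
from nltk.translate.gleu_score import corpus_gleu, sentence_gleu

references = [["the", "cat", "is", "on", "the", "mat"]]
hypothesis = ["the", "cat", "sat", "on", "the", "mat"]

# Sentence-level Google BLEU over n-grams of order 1..4 (the NLTK default)
print(sentence_gleu(references, hypothesis))

# The corpus-level variant takes one list of references per hypothesis
print(corpus_gleu([references], [hypothesis]))
```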
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3108/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3108/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3108", "html_url": "https://github.com/huggingface/datasets/pull/3108", "diff_url": "https://github.com/huggingface/datasets/pull/3108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3108.patch", "merged_at": 1635170824000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3107/comments
https://api.github.com/repos/huggingface/datasets/issues/3107/events
https://github.com/huggingface/datasets/pull/3107
1,030,357,527
PR_kwDODunzps4tYyhF
3,107
Add paper BibTeX citation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,652,491,000
1,634,653,582,000
1,634,653,581,000
MEMBER
null
Add paper BibTeX citation to README file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3107/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3107/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3107", "html_url": "https://github.com/huggingface/datasets/pull/3107", "diff_url": "https://github.com/huggingface/datasets/pull/3107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3107.patch", "merged_at": 1634653581000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3106/comments
https://api.github.com/repos/huggingface/datasets/issues/3106/events
https://github.com/huggingface/datasets/pull/3106
1,030,112,473
PR_kwDODunzps4tYA6i
3,106
Fix URLs in blog_authorship_corpus dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,637,965,000
1,634,647,840,000
1,634,647,839,000
MEMBER
null
After we contacted the authors of the paper "Effects of Age and Gender on Blogging", they confirmed that: - the old URLs are no longer valid - there are alternative host URLs Fix #3091.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3106/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3106/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3106", "html_url": "https://github.com/huggingface/datasets/pull/3106", "diff_url": "https://github.com/huggingface/datasets/pull/3106.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3106.patch", "merged_at": 1634647839000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3104/comments
https://api.github.com/repos/huggingface/datasets/issues/3104/events
https://github.com/huggingface/datasets/issues/3104
1,029,080,412
I_kwDODunzps49VoVc
3,104
Missing Zenodo 1.13.3 release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Zenodo has fixed on their side the 1.13.3 release: https://zenodo.org/record/5589150" ]
1,634,561,838,000
1,634,908,945,000
1,634,908,944,000
MEMBER
null
After the `datasets` 1.13.3 release, it does not appear in the Zenodo releases: https://zenodo.org/record/5570305 TODO: - [x] Contact Zenodo support - [x] Check it is fixed
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3104/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3103/comments
https://api.github.com/repos/huggingface/datasets/issues/3103/events
https://github.com/huggingface/datasets/pull/3103
1,029,069,310
PR_kwDODunzps4tUzJQ
3,103
Fix project description in PyPI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,561,249,000
1,634,561,997,000
1,634,561,996,000
MEMBER
null
Fix the project description appearing on PyPI, so that it contains the content of the README.md file (like transformers). Currently, the `datasets` project description on PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/ Fix #3102.
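The usual setuptools fix, sketched here with placeholder metadata (not necessarily the exact change in this PR), is to read README.md into `long_description` and declare its content type:

```python
from setuptools import setup

# Use the README as the PyPI project description
with open("README.md", encoding="utf-8") as f:
    long_description = f.read()

setup(
    name="example-package",  # placeholder metadata
    version="0.0.1",
    long_description=long_description,
    long_description_content_type="text/markdown",
)
```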
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3103/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3103/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3103", "html_url": "https://github.com/huggingface/datasets/pull/3103", "diff_url": "https://github.com/huggingface/datasets/pull/3103.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3103.patch", "merged_at": 1634561996000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3102/comments
https://api.github.com/repos/huggingface/datasets/issues/3102/events
https://github.com/huggingface/datasets/issues/3102
1,029,067,062
I_kwDODunzps49VlE2
3,102
Unsuitable project description in PyPI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,561,100,000
1,634,561,996,000
1,634,561,996,000
MEMBER
null
Currently, the `datasets` project description on PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3102/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3101
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3101/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3101/comments
https://api.github.com/repos/huggingface/datasets/issues/3101/events
https://github.com/huggingface/datasets/pull/3101
1,028,966,968
PR_kwDODunzps4tUelE
3,101
Update SUPERB to use Audio features
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you! Sorry I forgot this one @albertvillanova" ]
1,634,555,118,000
1,634,560,434,000
1,634,558,806,000
CONTRIBUTOR
null
This is the same dataset refresh as the other Audio ones: https://github.com/huggingface/datasets/pull/3081 cc @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3101/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3101/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3101", "html_url": "https://github.com/huggingface/datasets/pull/3101", "diff_url": "https://github.com/huggingface/datasets/pull/3101.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3101.patch", "merged_at": 1634558806000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3100
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3100/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3100/comments
https://api.github.com/repos/huggingface/datasets/issues/3100/events
https://github.com/huggingface/datasets/pull/3100
1,028,738,180
PR_kwDODunzps4tTwpn
3,100
Replace FSTimeoutError with parent TimeoutError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,542,629,000
1,634,543,515,000
1,634,543,514,000
MEMBER
null
PR #3050 introduced a dependency on `fsspec.FSTimeoutError`. Note that this error only exists from `fsspec` version `2021.06.0` (June 2021). To fix #3097, there are two alternatives: - Either pinning `fsspec` to versions newer than or equal to `2021.06.0` - Or replacing `fsspec.FSTimeoutError` with its parent `asyncio.TimeoutError`, which has existed since Python 3.4. This PR implements the second approach. Fix #3097.
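The substitution is safe because `fsspec.FSTimeoutError`, where available, subclasses `asyncio.TimeoutError`. A small sketch of an except clause that works on both old and new `fsspec` versions (the retry helper is illustrative, not code from this PR):

```python
import asyncio

def fetch_with_retries(fetch, max_retries=3):
    # Catching the parent class also catches fsspec's FSTimeoutError on
    # fsspec >= 2021.06.0, and needs no fsspec import on older versions.
    for _ in range(max_retries):
        try:
            return fetch()
        except asyncio.TimeoutError:
            continue
    raise asyncio.TimeoutError("remote fetch kept timing out")
```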
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3100/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3100/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3100", "html_url": "https://github.com/huggingface/datasets/pull/3100", "diff_url": "https://github.com/huggingface/datasets/pull/3100.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3100.patch", "merged_at": 1634543514000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3099/comments
https://api.github.com/repos/huggingface/datasets/issues/3099/events
https://github.com/huggingface/datasets/issues/3099
1,028,338,078
I_kwDODunzps49SzGe
3,099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
{ "login": "JTWang2000", "id": 49268567, "node_id": "MDQ6VXNlcjQ5MjY4NTY3", "avatar_url": "https://avatars.githubusercontent.com/u/49268567?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JTWang2000", "html_url": "https://github.com/JTWang2000", "followers_url": "https://api.github.com/users/JTWang2000/followers", "following_url": "https://api.github.com/users/JTWang2000/following{/other_user}", "gists_url": "https://api.github.com/users/JTWang2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/JTWang2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JTWang2000/subscriptions", "organizations_url": "https://api.github.com/users/JTWang2000/orgs", "repos_url": "https://api.github.com/users/JTWang2000/repos", "events_url": "https://api.github.com/users/JTWang2000/events{/privacy}", "received_events_url": "https://api.github.com/users/JTWang2000/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @JTWang2000, thanks for reporting.\r\n\r\nHowever, I cannot reproduce your reported bug:\r\n```python\r\n>>> from datasets import load_dataset\r\n\r\n>>> dataset = load_dataset(\"sst\", \"default\")\r\n>>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 8544\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 1101\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 2210\r\n })\r\n})\r\n```\r\n\r\nMaybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n```\r\npip install -U huggingface_hub\r\n```", "Im facing the same issue. I did run the upgrade command but that doesnt seem to resolve the issue", "Hi @aneeshjain, could you please specify which `huggingface_hub` version you are using?\r\n\r\nBesides that, please run `datasets-cli env` and copy-and-paste its output below.", "The problem seems to be with the latest version of `datasets`. After running `pip install -U datasets huggingface_hub`, I get the following: \r\n\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/data_files.py\", line 122, in <module>\r\n allowed_extensions: Optional[list] = None,\r\nAttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'\r\n````\r\nNote that pip reports the latest `datasets` version as \r\n```bash\r\n pip show datasets\r\nName: datasets\r\nVersion: 1.14.0\r\n```\r\nHowever, if I downgrade datasets with `pip install datasets==1.11.0`, things now work\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\ndvers=1.11.0\r\n````", "> Hi @JTWang2000, thanks for reporting.\r\n> \r\n> However, I cannot reproduce your reported bug:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> \r\n> >>> dataset = load_dataset(\"sst\", \"default\")\r\n> >>> dataset\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 8544\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 1101\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 2210\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Maybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n> \r\n> ```\r\n> pip install -U huggingface_hub\r\n> ```\r\n\r\nMy problem solved after updating huggingface hub command. Thanks a lot and sorry for the late reply. ", "@tjruwase, please note that versions of `datsets` and `huggingface_hub` must be compatible one with each other:\r\n- In `datasets` version `1.11.0`, the requirement on `huggingface_hub` was: `huggingface_hub<0.1.0`\r\n https://github.com/huggingface/datasets/blob/2cc00f372a96133e701275eec4d6b26d15257289/setup.py#L90\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was compatible\r\n- In `datasets` version `1.12.0`, the requirement on `huggingface_hub` was: `huggingface_hub>=0.0.14,<0.1.0`\r\n https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/setup.py#L104\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was no longer compatible \r\n- Currently, in `datasets` version `1.15.1`, the requirement on `huggingface_hub` is: `huggingface_hub>=0.1.0,<1.0.0`\r\n https://github.com/huggingface/datasets/blob/018100679d21cf27136f0eccb1c50e3a9c968ce2/setup.py#L102\r\n\r\n@JTWang2000, thanks for your answer. I close this issue then." ]
1,634,480,267,000
1,636,476,149,000
1,636,476,148,000
NONE
null
## Describe the bug After installing with `pip install datasets` or `conda install -c huggingface -c conda-forge datasets`, `datasets` cannot be imported. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-fbe7981e6e21> in <module> 1 import torch 2 import transformers ----> 3 from datasets import load_dataset 4 5 dataset = load_dataset("sst", "default") ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/__init__.py in <module> 35 from .arrow_reader import ArrowReader, ReadInstruction 36 from .arrow_writer import ArrowWriter ---> 37 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder 38 from .combine import interleave_datasets 39 from .dataset_dict import DatasetDict, IterableDatasetDict ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/builder.py in <module> 42 ) 43 from .arrow_writer import ArrowWriter, BeamWriter ---> 44 from .data_files import DataFilesDict, _sanitize_patterns 45 from .dataset_dict import DatasetDict, IterableDatasetDict 46 from .fingerprint import Hasher ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/data_files.py in <module> 118 119 def _exec_patterns_in_dataset_repository( --> 120 dataset_info: huggingface_hub.hf_api.DatasetInfo, 121 patterns: List[str], 122 allowed_extensions: Optional[list] = None, AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: macOS-11.3.1-arm64-arm-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3099/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3098/comments
https://api.github.com/repos/huggingface/datasets/issues/3098/events
https://github.com/huggingface/datasets/pull/3098
1,028,210,790
PR_kwDODunzps4tSRSZ
3,098
Push to hub capabilities for `Dataset` and `DatasetDict`
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you for your reviews! I should have addressed all of your comments, and I added a test to ensure that `private` datasets work correctly too. I have merged the changes in `huggingface_hub`, so the `main` branch can be installed now; and I will release v0.1.0 soon.\r\n\r\nAs blockers for this PR:\r\n- It's still waiting for #3027 to be addressed as the folder name will dictate the split name\r\n- The `self.split` name is set to `None` when the dataset dict is instantiated as follows:\r\n```py\r\nds = Dataset.from_dict({\"x\": [1, 2, 3], \"y\": [4, 5, 6]})\r\nlocal_ds = DatasetDict({\"random\": ds})\r\n\r\nlocal_ds['random'].split # returns None\r\n```\r\nIn order to remove the `split=key` I would need to know of a different way to test here as it relies on the above as a surefire way of constructing a `DatasetDict`.\r\n- Finally, the `threading` parameter is flaky on moon-staging which results in many errors server side. I propose to leave it as an argument instead of having it having it set to `True` so that users may toggle it according to their wish. ", "Currently it looks like it only saves the last split.\r\nIndeed when writing the data of one split, it deletes all the other files from the other splits\r\n```python\r\n>>> dataset.push_to_hub(\"lhoestq/squad_titles\", shard_size=50<<10) \r\nPushing split train to the Hub.\r\nPushing dataset shards to the dataset hub: 100%|█| 31/31 [00:22<00:00, 1.38\r\nPushing split validation to the Hub.\r\nThe repository already exists: the `private` keyword argument will be ignored.\r\nDeleting unused files from dataset repository: 100%|█| 31/31 [00:14<00:00, \r\nPushing dataset shards to the dataset hub: 100%|█| 4/4 [00:03<00:00, 1.18it\r\n```\r\nNote the \"Deleting\" part.", "I think this PR should fix #3035, so feel free to link it. ", "Thank you for your comments! I have rebased on `master` to have PR #3221. I've updated all tests to reflect the `-` instead of the `_` in the filenames.\r\n\r\n@lhoestq, I have fixed the issue with splits and added a corresponding test.\r\n\r\n@mariosasko I have not updated the `load_dataset` method to work differently, so I don't expect #3035 to be resolved with `push_to_hub`.\r\n\r\nOnly remaining issues before merging:\r\n- Take a good look at the `threading` and if that's something we want to keep.\r\n- As mentioned above:\r\n>The self.split name is set to None when the dataset dict is instantiated as follows:\r\n> ```\r\n> ds = Dataset.from_dict({\"x\": [1, 2, 3], \"y\": [4, 5, 6]})\r\n> local_ds = DatasetDict({\"random\": ds})\r\n> \r\n> local_ds['random'].split # returns None\r\n> ```\r\nI need to understand how to build a `DatasetDict` from some `Dataset` objects to be able to leverage the `split` parameter in `DatasetDict.push_to_hub`", "Cool thanks ! And indeed this won't solve https://github.com/huggingface/datasets/issues/3035 yet\r\n\r\n> I need to understand how to build a DatasetDict from some Dataset objects to be able to leverage the split parameter in DatasetDict.push_to_hub\r\n\r\nYou can use the key in the DatasetDict instead of the `split` attribute", "What do you think about bumping the minimum version of pyarrow to 3.0.0 ? This is the minimum required version to write parquet files, which is needed for push_to_hub. That's why our pyarrow 1 CI is failing.\r\n\r\nI think it's fine since it's been available for a long time (january 2021) and it's also the version that is installed on Google Colab.", "Pushing pyarrow to 3.0.0 is fine for me. I don’t think we need to keep a lot of backward support for pyarrow.", "Hi.\r\nI published in the forum about my experience with `DatasetDict.push_to_hub()`: here is my [post.](https://discuss.huggingface.co/t/save-datasetdict-to-huggingface-hub/12075/4)\r\nOn my side, there is a problem as my train and validation `Datasets` are concatenated when I do a `load_dataset()` from the `DatasetDict` I pushed to the HF datasets hub.", "Hi ! Let me respond here as well in case other people have the same issues and come here:\r\n\r\n`push_to_hub` was introduced in `datasets` 1.16, and to be able to properly load a dataset with separated splits you need to have `datasets>=1.16.0` as well. \r\n\r\nOld version of `datasets` used to concatenate everything in the `train` split." ]
1,634,443,964,000
1,638,836,532,000
1,637,753,136,000
MEMBER
null
This PR implements a `push_to_hub` method on `Dataset` and `DatasetDict`. This does not currently work for `IterableDatasetDict` or `IterableDataset`, as those are simple dicts; I would like your opinion on how you would like this implemented before going ahead and doing it. This implementation needs to be used with the following `huggingface_hub` branch in order to work correctly: https://github.com/huggingface/huggingface_hub/pull/415 ### Implementation The `push_to_hub` API is entirely based on HTTP requests rather than a git-based workflow: - This allows pushing changes without first cloning the repository, which cuts the time of the `push_to_hub` method in half. - Collaboration, as well as the system of branches/merges/rebases, is IMO less straightforward than for models and spaces. Where such collaboration is needed, I would *heavily* advocate using the `Repository` helper of `huggingface_hub` instead of the `push_to_hub` method, which will always be, by design, limiting in that regard (even if based on a git workflow instead of HTTP requests). To overcome the 5GB file-size limit of the HTTP requests, dataset sharding is used. ### Testing The test suite implemented here uses the moon-staging endpoint instead of the production setup. As several repositories are created and deleted, it is better to use staging. It does not require setting an environment variable or any kind of special attention, but introduces a new decorator `with_staging_testing` which patches global variables to use the staging endpoint instead of the production endpoint. ### Examples The tests cover many examples and behaviors.
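A minimal usage sketch (the repo id is a placeholder; `shard_size` caps each uploaded file so pushes stay under the HTTP size limit):

```python
from datasets import Dataset, DatasetDict

ds = Dataset.from_dict({"x": [1, 2, 3], "y": [4, 5, 6]})
dd = DatasetDict({"train": ds})

# Uploads each split in shards over HTTP, without a local git clone.
dd.push_to_hub("username/my-dataset", private=True, shard_size=500 << 20)
```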
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3098/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/3098/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3098", "html_url": "https://github.com/huggingface/datasets/pull/3098", "diff_url": "https://github.com/huggingface/datasets/pull/3098.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3098.patch", "merged_at": 1637753136000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3097/comments
https://api.github.com/repos/huggingface/datasets/issues/3097/events
https://github.com/huggingface/datasets/issues/3097
1,027,750,811
I_kwDODunzps49Qjub
3,097
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @VictorSanh.\r\n\r\nI'm fixing it." ]
1,634,326,478,000
1,634,543,514,000
1,634,543,514,000
MEMBER
null
## Describe the bug I keep running into an fsspec ModuleNotFoundError ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_infos 2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-10-15 15:25:37.863252: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 56, in <module> from .utils.streaming_download_manager import StreamingDownloadManager File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 11, in <module> from fsspec.exceptions import FSTimeoutError ModuleNotFoundError: No module named 'fsspec.exceptions' ``` Yet, I do have `fsspec`: ```bash hf@victor-scale:~/dev/promptsource$ pip show fsspec Name: fsspec Version: 2021.5.0 Summary: File-system specification Home-page: http://github.com/intake/filesystem_spec Author: None Author-email: None License: BSD Location: /home/hf/dev/promptsource/.venv/lib/python3.7/site-packages Requires: Required-by: datasets ``` With the same version of fsspec and `datasets==1.9.0`, I don't see this problem. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> I can't actually run `datasets-cli env`, but here's my env: - `datasets` version: 1.13.3 - Platform: Ubuntu 18.04 - Python version: 3.7.10 - PyArrow version: 3.0.0
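For reference, a small compatibility shim for this import across `fsspec` versions (a sketch; the actual fix in #3100 simply switched to the parent class):

```python
try:
    # fsspec.exceptions only exists in fsspec >= 2021.06.0
    from fsspec.exceptions import FSTimeoutError
except ImportError:
    # Older fsspec: fall back to the parent class the new error derives from
    from asyncio import TimeoutError as FSTimeoutError
```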
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3097/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3096/comments
https://api.github.com/repos/huggingface/datasets/issues/3096/events
https://github.com/huggingface/datasets/pull/3096
1,027,535,685
PR_kwDODunzps4tQblQ
3,096
Fix Audio feature mp3 resampling
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,310,319,000
1,634,312,310,000
1,634,312,310,000
MEMBER
null
Issue #3095 is related to mp3 resampling, not to `cast_column`. This PR fixes Audio feature mp3 resampling. Fix #3095.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3096/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3096/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3096", "html_url": "https://github.com/huggingface/datasets/pull/3096", "diff_url": "https://github.com/huggingface/datasets/pull/3096.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3096.patch", "merged_at": 1634312309000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3095/comments
https://api.github.com/repos/huggingface/datasets/issues/3095/events
https://github.com/huggingface/datasets/issues/3095
1,027,453,146
I_kwDODunzps49PbDa
3,095
`cast_column` makes audio decoding fail
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @anton-l @albertvillanova ", "Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_dataset(\"arabic_speech_corpus\", split=\"train\")\r\nds = ds.cast_column(\"audio\", datasets.features.Audio(sampling_rate=16_000))\r\nprint(ds[0][\"audio\"])\r\n```\r\n\r\nI'm fixing it." ]
1,634,305,018,000
1,634,312,310,000
1,634,312,310,000
MEMBER
null
## Describe the bug After changing the sampling rate automatic decoding fails. ## Steps to reproduce the bug ```python from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ab", split="train") ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000)) print(ds[0]["audio"]) # <- this fails currently ``` yields: ``` TypeError: forward() takes 2 positional arguments but 4 were given ``` ## Expected results no failure ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Copy-and-paste the text below in your GitHub issue. - `datasets` version: 1.13.2 (master) - Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3095/timeline
null
null
null
false
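Editor's note on issue 3095 above: the `forward() takes 2 positional arguments but 4 were given` error points at the mp3 resampler being invoked with extra arguments. A standalone resampling sketch with torchaudio is shown below; this is not the `datasets`-internal code path, and `"sample.mp3"` is a placeholder file.

```python
# Resampling an mp3 to 16 kHz with torchaudio, for comparison with the
# failing cast_column(..., Audio(sampling_rate=16_000)) path above.
import torchaudio
import torchaudio.transforms as T

waveform, orig_sr = torchaudio.load("sample.mp3")
resampler = T.Resample(orig_freq=orig_sr, new_freq=16_000)
resampled = resampler(waveform)  # the module is called with the waveform only
```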
https://api.github.com/repos/huggingface/datasets/issues/3092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3092/comments
https://api.github.com/repos/huggingface/datasets/issues/3092/events
https://github.com/huggingface/datasets/pull/3092
1,027,260,383
PR_kwDODunzps4tPj6e
3,092
Fix JNLPBA dataset
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Fix #3089.", "@albertvillanova all tests are passing now. Either you or @lhoestq can review it!" ]
1,634,290,274,000
1,634,891,037,000
1,634,891,037,000
CONTRIBUTOR
null
As mentioned in #3089, I've added more tags and also updated the link for the dataset, which was earlier using a Google Drive link. I'm having a problem generating dummy data, as `datasets-cli dummy_data ./datasets/jnlpba --auto_generate --match_text_files "*.iob2"` is giving a `datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET!` error. I'll try to add dummy data manually.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3092/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3092/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3092", "html_url": "https://github.com/huggingface/datasets/pull/3092", "diff_url": "https://github.com/huggingface/datasets/pull/3092.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3092.patch", "merged_at": 1634891037000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3091/comments
https://api.github.com/repos/huggingface/datasets/issues/3091/events
https://github.com/huggingface/datasets/issues/3091
1,027,251,530
I_kwDODunzps49Op1K
3,091
`blog_authorship_corpus` is broken
{ "login": "fdtomasi", "id": 12514317, "node_id": "MDQ6VXNlcjEyNTE0MzE3", "avatar_url": "https://avatars.githubusercontent.com/u/12514317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fdtomasi", "html_url": "https://github.com/fdtomasi", "followers_url": "https://api.github.com/users/fdtomasi/followers", "following_url": "https://api.github.com/users/fdtomasi/following{/other_user}", "gists_url": "https://api.github.com/users/fdtomasi/gists{/gist_id}", "starred_url": "https://api.github.com/users/fdtomasi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fdtomasi/subscriptions", "organizations_url": "https://api.github.com/users/fdtomasi/orgs", "repos_url": "https://api.github.com/users/fdtomasi/repos", "events_url": "https://api.github.com/users/fdtomasi/events{/privacy}", "received_events_url": "https://api.github.com/users/fdtomasi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @fdtomasi, thanks for reporting.\r\n\r\nYou are right: the original host data URL does no longer exist.\r\n\r\nI've contacted the authors of the dataset to ask them if they host this dataset in another URL.", "Hi, @fdtomasi, the URL is fixed.\r\n\r\nThe fix is already in our master branch and it will be accessible in our next release.\r\n\r\nIn the meantime, you can include the fix if you install the `datasets` library from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```", "Awesome thank you so much for the quick fix!" ]
1,634,289,640,000
1,634,648,770,000
1,634,647,839,000
NONE
null
## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip). ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("blog_authorship_corpus", split="train", download_mode='force_redownload') ``` ## Expected results No error. ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) /tmp/ipykernel_5237/1729238701.py in <module> 2 ds = load_dataset( 3 "blog_authorship_corpus", split="train", ----> 4 download_mode='force_redownload' 5 ) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 1115 ignore_verifications=ignore_verifications, 1116 try_from_hf_gcs=try_from_hf_gcs, -> 1117 use_auth_token=use_auth_token, 1118 ) 1119 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 635 if not downloaded_from_gcs: 636 self._download_and_prepare( --> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 638 ) 639 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 707 if verify_infos: 708 verify_checksums( --> 709 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 710 ) 711 /opt/conda/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip'] ``` ## Environment info - `datasets` version: 1.13.2 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3091/timeline
null
null
null
false
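Editor's note on issue 3091 above: the traceback itself shows that `load_dataset` (in this era of `datasets`) accepts an `ignore_verifications` flag. A sketch of using it is below; as the reporter notes, with a dead host URL this merely skips the checksum error and yields an empty dataset, which is how the bug manifests.

```python
from datasets import load_dataset

# ignore_verifications skips the checksum/size checks raised in the traceback;
# it does not fix the broken download URL, so the result here is empty data.
ds = load_dataset("blog_authorship_corpus", split="train", ignore_verifications=True)
print(len(ds))
```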
https://api.github.com/repos/huggingface/datasets/issues/3090
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3090/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3090/comments
https://api.github.com/repos/huggingface/datasets/issues/3090/events
https://github.com/huggingface/datasets/pull/3090
1,027,100,371
PR_kwDODunzps4tPEtH
3,090
Update BibTeX entry
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,276,367,000
1,634,283,357,000
1,634,283,357,000
MEMBER
null
Update BibTeX entry.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3090/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3090/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3090", "html_url": "https://github.com/huggingface/datasets/pull/3090", "diff_url": "https://github.com/huggingface/datasets/pull/3090.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3090.patch", "merged_at": 1634283357000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3089/comments
https://api.github.com/repos/huggingface/datasets/issues/3089/events
https://github.com/huggingface/datasets/issues/3089
1,026,973,360
I_kwDODunzps49Nl6w
3,089
JNLPBA Dataset
{ "login": "sciarrilli", "id": 10460111, "node_id": "MDQ6VXNlcjEwNDYwMTEx", "avatar_url": "https://avatars.githubusercontent.com/u/10460111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sciarrilli", "html_url": "https://github.com/sciarrilli", "followers_url": "https://api.github.com/users/sciarrilli/followers", "following_url": "https://api.github.com/users/sciarrilli/following{/other_user}", "gists_url": "https://api.github.com/users/sciarrilli/gists{/gist_id}", "starred_url": "https://api.github.com/users/sciarrilli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sciarrilli/subscriptions", "organizations_url": "https://api.github.com/users/sciarrilli/orgs", "repos_url": "https://api.github.com/users/sciarrilli/repos", "events_url": "https://api.github.com/users/sciarrilli/events{/privacy}", "received_events_url": "https://api.github.com/users/sciarrilli/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None)\r\n```\r\n\r\n", "Since I cannot create a branch here is the updated code:\r\n\r\n```python\r\n\r\n# coding=utf-8\r\n# Copyright 2020 HuggingFace Datasets Authors.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\r\n# Lint as: python3\r\n\"\"\"Introduction to the Bio-Entity Recognition Task at JNLPBA\"\"\"\r\n\r\nimport os\r\n\r\nimport datasets\r\n\r\n\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n_CITATION = \"\"\"\\\r\n@inproceedings{kim2004introduction,\r\n title={Introduction to the bio-entity recognition task at JNLPBA},\r\n author={Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},\r\n booktitle={Proceedings of the international joint workshop on natural language processing in biomedicine and its applications},\r\n pages={70--75},\r\n year={2004},\r\n organization={Citeseer}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThe data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search\r\non MEDLINE using the MeSH terms \u0018human\u0019, \u0018blood cells\u0019 and \u0018transcription factors\u0019. 
From this search 2,000 abstracts\r\nwere selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification.\r\nAmong the classes, 36 terminal classes were used to annotate the GENIA corpus.\r\n\"\"\"\r\n\r\n_HOMEPAGE = \"http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004\"\r\n_TRAIN_URL = \"http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Train/Genia4ERtraining.tar.gz\"\r\n_VAL_URL = 'http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Evaluation/Genia4ERtest.tar.gz'\r\n\r\n\r\n_URLS = {\r\n \"train\": _TRAIN_URL,\r\n \"val\": _VAL_URL,\r\n}\r\n\r\n_TRAIN_DIRECTORY = \"Genia4ERtraining\"\r\n_VAL_DIRECTORY = \"Genia4ERtest\"\r\n\r\n_TRAIN_FILE = \"Genia4ERtask1.iob2\"\r\n_VAL_FILE = \"Genia4EReval1.iob2\"\r\n\r\n\r\nclass JNLPBAConfig(datasets.BuilderConfig):\r\n \"\"\"BuilderConfig for JNLPBA\"\"\"\r\n\r\n def __init__(self, **kwargs):\r\n \"\"\"BuilderConfig for JNLPBA.\r\n Args:\r\n **kwargs: keyword arguments forwarded to super.\r\n \"\"\"\r\n super(JNLPBAConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass JNLPBA(datasets.GeneratorBasedBuilder):\r\n \"\"\"JNLPBA dataset.\"\"\"\r\n\r\n BUILDER_CONFIGS = [\r\n JNLPBAConfig(name=\"jnlpba\", version=datasets.Version(\"1.0.0\"), description=\"JNLPBA dataset\"),\r\n ]\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(\r\n datasets.features.ClassLabel(\r\n names=[\r\n 'O',\r\n 'B-DNA',\r\n 'I-DNA', \r\n 'B-RNA',\r\n 'I-RNA',\r\n 'B-cell_line',\r\n 'I-cell_line',\r\n 'B-cell_type',\r\n 'I-cell_type',\r\n 'B-protein',\r\n 'I-protein',\r\n ]\r\n )\r\n ),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=_HOMEPAGE,\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n \r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['train'], _TRAIN_FILE)}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['val'], _VAL_FILE)})\r\n ]\r\n \r\n\r\n def _generate_examples(self, filepath):\r\n logger.info(\"⏳ Generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"utf-8\") as f:\r\n guid = 0\r\n tokens = []\r\n ner_tags = []\r\n for line in f:\r\n if line.startswith('###'):\r\n continue\r\n if line == '' or line == '\\n':\r\n if tokens:\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n guid += 1\r\n tokens = []\r\n ner_tags = []\r\n else:\r\n # tokens are tab separated\r\n splits = line.split(\"\\t\")\r\n #print(splits)\r\n #print(len(splits))\r\n if len(splits) < 2:\r\n splits = splits[0].split()\r\n tokens.append(splits[0])\r\n ner_tags.append(splits[1].rstrip())\r\n # last example\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n```" ]
1,634,260,562,000
1,634,891,037,000
1,634,891,037,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in the [script](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L81-L83) are: O, B, and I. The correct entities from the original data file are: ['O', 'B-DNA', 'I-DNA', 'B-RNA', 'I-RNA', 'B-cell_line', 'I-cell_line', 'B-cell_type', 'I-cell_type', 'B-protein', 'I-protein'] ## Actual results The dataset loader script needs to include the following NER names: ['O', 'B-DNA', 'I-DNA', 'B-RNA', 'I-RNA', 'B-cell_line', 'I-cell_line', 'B-cell_type', 'I-cell_type', 'B-protein', 'I-protein'] And the [data](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L46) that is being pulled has been modified from the original dataset and does not include the original NER tags. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3089/timeline
null
null
null
false
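Editor's note on issue 3089 above: a quick way to verify the fix from the user side is to inspect the `ClassLabel` names attached to the `ner_tags` sequence, as sketched below.

```python
from datasets import load_dataset

ds = load_dataset("jnlpba", split="train")
# After the fix, the full IOB2 tag set (B-DNA, I-DNA, B-RNA, ...) should
# surface here instead of the incorrect three-label O/B/I scheme.
label_names = ds.features["ner_tags"].feature.names
print(label_names)
```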
https://api.github.com/repos/huggingface/datasets/issues/3088
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3088/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3088/comments
https://api.github.com/repos/huggingface/datasets/issues/3088/events
https://github.com/huggingface/datasets/pull/3088
1,026,920,369
PR_kwDODunzps4tOhRx
3,088
Use template column_mapping to transmit_format instead of template features
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for fixing!" ]
1,634,255,380,000
1,634,308,805,000
1,634,292,664,000
CONTRIBUTOR
null
Use `template.column_mapping` to check for modified columns since `template.features` represent a generic template/column mapping. Fix #3087 TODO: - [x] Add a test
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3088/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3088/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3088", "html_url": "https://github.com/huggingface/datasets/pull/3088", "diff_url": "https://github.com/huggingface/datasets/pull/3088.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3088.patch", "merged_at": 1634292664000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3087/comments
https://api.github.com/repos/huggingface/datasets/issues/3087/events
https://github.com/huggingface/datasets/issues/3087
1,026,780,469
I_kwDODunzps49M201
3,087
Removing label column in a text classification dataset yields to errors
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,634,242,370,000
1,634,292,664,000
1,634,292,664,000
MEMBER
null
## Describe the bug This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error. To reproduce: ```py from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("imdb") raw_datasets = raw_datasets.remove_columns("label") model_checkpoint = "distilbert-base-cased" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) context_length = 128 def tokenize_pad_and_truncate(texts): return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length) tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True) ``` Traceback: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-1-ba61bb32f786> in <module> 12 return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length) 13 ---> 14 tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True) ~/git/datasets/src/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 500 desc=desc, 501 ) --> 502 for k, dataset in self.items() 503 } 504 ) ~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0) 500 desc=desc, 501 ) --> 502 for k, dataset in self.items() 503 } 504 ) ~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2051 new_fingerprint=new_fingerprint, 2052 disable_tqdm=disable_tqdm, -> 2053 desc=desc, 2054 ) 2055 else: ~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 501 self: "Dataset" = kwargs.pop("self") 502 # apply actual function --> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 505 for dataset in datasets: ~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 468 } 469 # apply actual function --> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 472 # re-apply format to the output ~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2243 if os.path.exists(cache_file_name) and load_from_cache_file: 2244 logger.warning("Loading cached processed dataset at %s", cache_file_name) -> 2245 info = self.info.copy() 2246 info.features = features 2247 info.task_templates = None ~/git/datasets/src/datasets/info.py in copy(self) 278 279 def copy(self) -> "DatasetInfo": --> 280 return 
self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) 281 282 ~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes) ~/git/datasets/src/datasets/info.py in __post_init__(self) 177 for idx, template in enumerate(self.task_templates): 178 if isinstance(template, TextClassification): --> 179 labels = self.features[template.label_column].names 180 self.task_templates[idx] = TextClassification( 181 text_column=template.text_column, label_column=template.label_column, labels=labels KeyError: 'label' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3087/timeline
null
null
null
false
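Editor's note on issue 3087 above: the traceback shows the failure originating in `DatasetInfo.__post_init__`, which re-resolves the `label` column for the stale `TextClassification` task template after the column is gone (and the cache path at `arrow_dataset.py` line 2247 even sets `info.task_templates = None` itself). A possible user-side workaround sketch is below; this is an untested assumption, not the merged fix, which instead checks `template.column_mapping`.

```python
from datasets import load_dataset

raw_datasets = load_dataset("imdb")
raw_datasets = raw_datasets.remove_columns("label")

# Hypothetical workaround: drop the stale task templates that still point
# at the removed "label" column before running .map().
for split in raw_datasets.values():
    split.info.task_templates = None
```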
https://api.github.com/repos/huggingface/datasets/issues/3086
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3086/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3086/comments
https://api.github.com/repos/huggingface/datasets/issues/3086/events
https://github.com/huggingface/datasets/pull/3086
1,026,481,905
PR_kwDODunzps4tNIvp
3,086
Remove _resampler from Audio fields
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,222,330,000
1,634,224,421,000
1,634,224,420,000
MEMBER
null
The `_resampler` Audio attribute was implemented to optimize audio resampling, but it should not be cached. This PR removes `_resampler` from Audio fields, so that it is not returned by `fields()` or `asdict()`. Fix #3083.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3086/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3086/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3086", "html_url": "https://github.com/huggingface/datasets/pull/3086", "diff_url": "https://github.com/huggingface/datasets/pull/3086.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3086.patch", "merged_at": 1634224420000 }
true
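Editor's note on PR 3086 above: the pattern it describes (an attribute that exists on the instance but is never returned by `fields()` or `asdict()`) is sketched below. `AudioSketch` is a stand-in for illustration, not the real `datasets.features.Audio` class.

```python
from dataclasses import dataclass, fields

@dataclass
class AudioSketch:
    sampling_rate: int = 16_000

    def __post_init__(self):
        # A plain instance attribute rather than a dataclass field: it is
        # invisible to fields()/asdict() and never written to cached metadata.
        self._resampler = None

print([f.name for f in fields(AudioSketch())])  # -> ['sampling_rate']
```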
https://api.github.com/repos/huggingface/datasets/issues/3085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3085/comments
https://api.github.com/repos/huggingface/datasets/issues/3085/events
https://github.com/huggingface/datasets/pull/3085
1,026,467,384
PR_kwDODunzps4tNFza
3,085
Fixes to `to_tf_dataset`
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Can you give some details about why you need these changes ?", "Hey, sorry, I should have explained! I've been getting a lot of `VisibleDeprecationWarning` from Numpy, due to an issue in the formatter, see #3084 . This is a temporary workaround (since I'm using these methods in the upcoming course) until I can fix that issue, because I couldn't see an obvious fix for the Numpy formatter. If you can see a quick way to fix that, though, that might be even better!" ]
1,634,221,556,000
1,634,828,729,000
1,634,828,728,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3085/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3085/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3085", "html_url": "https://github.com/huggingface/datasets/pull/3085", "diff_url": "https://github.com/huggingface/datasets/pull/3085.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3085.patch", "merged_at": 1634828728000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3084/comments
https://api.github.com/repos/huggingface/datasets/issues/3084/events
https://github.com/huggingface/datasets/issues/3084
1,026,428,992
I_kwDODunzps49LhBA
3,084
VisibleDeprecationWarning when using `set_format("numpy")`
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)" ]
1,634,219,581,000
1,634,918,654,000
1,634,918,654,000
CONTRIBUTOR
null
Code to reproduce: ``` from datasets import load_dataset dataset = load_dataset("glue", "mnli") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased') def tokenize_function(dataset): return tokenizer(dataset['premise']) tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset['train'].features) tokenized_datasets.set_format("numpy") tokenized_datasets['train'][5:8] ``` Outputs: ``` python3.9/site-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray return np.array(array, copy=False, **self.np_array_kwargs) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3084/timeline
null
null
null
false
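Editor's note on issue 3084 above: the warning comes from building a NumPy array out of ragged (variable-length) token sequences. The fix NumPy itself suggests is sketched below.

```python
import numpy as np

ragged = [[101, 2000, 102], [101, 2000, 3000, 102]]  # token ids of unequal length
# np.array(ragged) emits VisibleDeprecationWarning on NumPy >= 1.20;
# passing dtype=object keeps the ragged rows without the warning.
arr = np.array(ragged, dtype=object)
```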
https://api.github.com/repos/huggingface/datasets/issues/3083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3083/comments
https://api.github.com/repos/huggingface/datasets/issues/3083/events
https://github.com/huggingface/datasets/issues/3083
1,026,397,062
I_kwDODunzps49LZOG
3,083
Datasets with Audio feature raise error when loaded from cache due to _resampler parameter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,217,833,000
1,634,224,420,000
1,634,224,420,000
MEMBER
null
## Describe the bug As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise TypeError. ## Steps to reproduce the bug ```python from datasets import load_dataset # load first time works ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") # load from cache breaks ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: __init__() got an unexpected keyword argument '_resampler' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3083/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3082
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3082/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3082/comments
https://api.github.com/repos/huggingface/datasets/issues/3082/events
https://github.com/huggingface/datasets/pull/3082
1,026,388,994
PR_kwDODunzps4tM2BV
3,082
Fix error related to huggingface_hub timeout parameter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,217,467,000
1,634,222,392,000
1,634,222,391,000
MEMBER
null
The `huggingface_hub` package added the `timeout` parameter in version 0.0.19. This PR bumps the minimal required version accordingly. Fix #3080.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3082/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3082/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3082", "html_url": "https://github.com/huggingface/datasets/pull/3082", "diff_url": "https://github.com/huggingface/datasets/pull/3082.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3082.patch", "merged_at": 1634222391000 }
true
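Editor's note on PR 3082 above: the change it describes amounts to a minimum-version pin, sketched below. The exact `setup.py` layout is an assumption, not copied from the repository.

```python
# Hypothetical excerpt of the dependency pin this PR describes:
install_requires = [
    "huggingface_hub>=0.0.19",  # the 'timeout' keyword argument landed in 0.0.19
]
```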
https://api.github.com/repos/huggingface/datasets/issues/3081
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3081/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3081/comments
https://api.github.com/repos/huggingface/datasets/issues/3081/events
https://github.com/huggingface/datasets/pull/3081
1,026,383,749
PR_kwDODunzps4tM1Gy
3,081
[Audio datasets] Adapting all audio datasets
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq - are there other important speech datasets that I'm forgetting here? \r\n\r\nThink PR is good to go otherwise", "@lhoestq @albertvillanova - how can we make an exception for the AMI README so that the test doesn't fail? The dataset card definitely should have a data preprocessing section", "Hi @patrickvonplaten ,\r\n\r\nthe data preprocessing section is not defined as a valid section in the readme validation file. After this line:\r\nhttps://github.com/huggingface/datasets/blob/568db594d51110da9e23d224abded2a976b3c8c7/src/datasets/utils/resources/readme_structure.yaml#L20\r\nfeel free to insert (correctly indented of course):\r\n```python\r\n- name: \"Dataset Preprocessing\"\r\n allow_empty: true\r\n allow_empty_text: true\r\n subsections: null\r\n```\r\nand then the tests should pass.", "Thanks a lot @albertvillanova - I've added the feature to all audio datasets and corrected the task of `covost2`" ]
1,634,217,225,000
1,634,302,323,000
1,634,300,553,000
MEMBER
null
This PR adds the new `Audio(...)` features - see: https://github.com/huggingface/datasets/pull/2324 to the most important audio datasets: - Librispeech - Timit - Common Voice - AMI - ... (others I'm forgetting now) The PR is currently blocked because the following leads to a problem: ```python from datasets import load_dataset # load first time works ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") # load from cache breaks ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` As soon as it's unblocked, I'll adapt the other audio datasets as well.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3081/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3081/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3081", "html_url": "https://github.com/huggingface/datasets/pull/3081", "diff_url": "https://github.com/huggingface/datasets/pull/3081.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3081.patch", "merged_at": 1634300553000 }
true
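A minimal sketch of how the new `Audio(...)` feature is used once a dataset has been adapted; the `audio` column name and `validation` split are assumptions, while the repo name comes from the PR body above:

```python
from datasets import Audio, load_dataset

# Assumed column/split names; decoding happens lazily on access.
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
sample = ds[0]["audio"]  # a dict with "path", "array" and "sampling_rate"
print(sample["sampling_rate"], len(sample["array"]))
```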
https://api.github.com/repos/huggingface/datasets/issues/3080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3080/comments
https://api.github.com/repos/huggingface/datasets/issues/3080/events
https://github.com/huggingface/datasets/issues/3080
1,026,380,626
I_kwDODunzps49LVNS
3,080
Error related to timeout keyword argument
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,217,058,000
1,634,222,391,000
1,634,222,391,000
MEMBER
null
## Describe the bug As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: dataset_info() got an unexpected keyword argument 'timeout' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3080/timeline
null
null
null
false
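The error above comes from forwarding a `timeout` keyword to a `dataset_info` function whose installed version does not accept it. A hedged sketch of the general failure mode (illustrative only, not the actual fix merged for this issue):

```python
import inspect

def call_with_supported_kwargs(func, *args, **kwargs):
    # Keep only keyword arguments the target function declares; passing an
    # undeclared one (like `timeout` here) is what raises the TypeError.
    params = inspect.signature(func).parameters
    supported = {k: v for k, v in kwargs.items() if k in params}
    return func(*args, **supported)
```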
https://api.github.com/repos/huggingface/datasets/issues/3077
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3077/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3077/comments
https://api.github.com/repos/huggingface/datasets/issues/3077/events
https://github.com/huggingface/datasets/pull/3077
1,026,150,362
PR_kwDODunzps4tMFPG
3,077
Fix loading a metric with internal import
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,202,418,000
1,634,202,896,000
1,634,202,895,000
MEMBER
null
After refactoring the module factory (#2986), a bug was introduced when loading metrics with internal imports. This PR adds a new test case and fixes this bug. Fix #3076. CC: @sgugger @merveenoyan
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3077/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3077/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3077", "html_url": "https://github.com/huggingface/datasets/pull/3077", "diff_url": "https://github.com/huggingface/datasets/pull/3077.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3077.patch", "merged_at": 1634202895000 }
true
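"squad_v2" is a metric whose script performs exactly this kind of internal import (it imports its sibling evaluation module), so it doubles as a quick regression check; the example inputs follow the squad_v2 metric format as far as I can tell:

```python
from datasets import load_metric

metric = load_metric("squad_v2")  # the script internally imports its evaluation helper
predictions = [{"id": "0", "prediction_text": "Paris", "no_answer_probability": 0.0}]
references = [{"id": "0", "answers": {"text": ["Paris"], "answer_start": [0]}}]
print(metric.compute(predictions=predictions, references=references))
```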
https://api.github.com/repos/huggingface/datasets/issues/3076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3076/comments
https://api.github.com/repos/huggingface/datasets/issues/3076/events
https://github.com/huggingface/datasets/issues/3076
1,026,113,484
I_kwDODunzps49KT_M
3,076
Error when loading a metric
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,200,167,000
1,634,202,895,000
1,634,202,895,000
MEMBER
null
## Describe the bug As reported by @sgugger, after the last release, an exception is thrown when loading a metric. ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("squad_v2") ``` ## Actual results ``` FileNotFoundError Traceback (most recent call last) <ipython-input-1-e612a8cab787> in <module> 1 from datasets import load_metric ----> 2 metric = load_metric("squad_v2") d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs) 1336 ) 1337 revision = script_version -> 1338 metric_module = metric_module_factory( 1339 path, revision=revision, download_config=download_config, download_mode=download_mode 1340 ).module_path d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs) 1237 if not isinstance(e1, FileNotFoundError): 1238 raise e1 from None -> 1239 raise FileNotFoundError( 1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. " 1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either." FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3076/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3075/comments
https://api.github.com/repos/huggingface/datasets/issues/3075/events
https://github.com/huggingface/datasets/pull/3075
1,026,103,388
PR_kwDODunzps4tL75E
3,075
Updates LexGLUE and MultiEURLEX README.md files
{ "login": "iliaschalkidis", "id": 1626984, "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "gravatar_id": "", "url": "https://api.github.com/users/iliaschalkidis", "html_url": "https://github.com/iliaschalkidis", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,199,556,000
1,634,552,020,000
1,634,552,020,000
CONTRIBUTOR
null
Updates LexGLUE and MultiEURLEX README.md files - Fix leaderboard in LexGLUE. - Fix an error in the CaseHOLD data example. - Turn the MultiEURLEX dataset statistics table into HTML so that it renders nicely on the HF website.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3075/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3075/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3075", "html_url": "https://github.com/huggingface/datasets/pull/3075", "diff_url": "https://github.com/huggingface/datasets/pull/3075.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3075.patch", "merged_at": 1634552020000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3074
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3074/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3074/comments
https://api.github.com/repos/huggingface/datasets/issues/3074/events
https://github.com/huggingface/datasets/pull/3074
1,025,940,085
PR_kwDODunzps4tLbe-
3,074
add XCSR dataset
{ "login": "yangxqiao", "id": 42788901, "node_id": "MDQ6VXNlcjQyNzg4OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/42788901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangxqiao", "html_url": "https://github.com/yangxqiao", "followers_url": "https://api.github.com/users/yangxqiao/followers", "following_url": "https://api.github.com/users/yangxqiao/following{/other_user}", "gists_url": "https://api.github.com/users/yangxqiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/yangxqiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yangxqiao/subscriptions", "organizations_url": "https://api.github.com/users/yangxqiao/orgs", "repos_url": "https://api.github.com/users/yangxqiao/repos", "events_url": "https://api.github.com/users/yangxqiao/events{/privacy}", "received_events_url": "https://api.github.com/users/yangxqiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Hi ! Thanks for adding this dataset :)\r\n> \r\n> Do you know how the translations were done ? Maybe we can mention that in the dataset card.\r\n> \r\n> The rest looks all good to me :) good job with the dataset script and the dataset card !\r\n> \r\n> Just one thing: we try to have dummy_data.zip files that are as small as possible, however here each zip file is 70KB+. It think we can make them even smaller if we remove unnecessary files in them. In particular in the `ar` dummy data zip file, we don't need the data for all languages, but rather only the `ar` files. Could you try to remove the unnecessary files in the dummy data zip files ?\r\n\r\nHi! \r\n\r\nThank you so much for reviewing this PR. I've updated the README to briefly mention the translations and added a link to the paper, where a detailed description of the translation procedure can be found in the appendix.\r\n\r\nFor the dummy_data.zip files, is it possible to keep all the current files? I tried to remove some of the files, but the removal led to a failure in the local testing. We also think it may be better to keep the current dummy_data.zip files because all the data are useful actually. Thanks a lot!!", "Hi @lhoestq, just a gentle ping on this PR. :D " ]
1,634,186,399,000
1,636,379,556,000
1,636,379,556,000
CONTRIBUTOR
null
Hi, I wanted to add the [XCSR](https://inklab.usc.edu//XCSR/xcsr_datasets) dataset to huggingface! :) I followed the instructions for adding a new dataset to huggingface and have all the required files ready now! It would be super helpful if you could take a look and review them. Thanks in advance for your time and help. Look forward to hearing from you and can't wait to add XCSR to huggingface :D
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3074/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3074/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3074", "html_url": "https://github.com/huggingface/datasets/pull/3074", "diff_url": "https://github.com/huggingface/datasets/pull/3074.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3074.patch", "merged_at": 1636379556000 }
true
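A hedged usage sketch for the merged dataset; the config naming below ("subtask-language", e.g. `X-CSQA-en`) is an assumption based on the X-CSR subtasks, so check the dataset card for the exact names:

```python
from datasets import load_dataset

# Config name is assumed; X-CSR covers X-CSQA and X-CODAH per language.
ds = load_dataset("xcsr", "X-CSQA-en")
print(ds)
```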
https://api.github.com/repos/huggingface/datasets/issues/3073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3073/comments
https://api.github.com/repos/huggingface/datasets/issues/3073/events
https://github.com/huggingface/datasets/issues/3073
1,025,718,469
I_kwDODunzps49IzjF
3,073
Import error installing with ppc64le
{ "login": "gcervantes8", "id": 21228908, "node_id": "MDQ6VXNlcjIxMjI4OTA4", "avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gcervantes8", "html_url": "https://github.com/gcervantes8", "followers_url": "https://api.github.com/users/gcervantes8/followers", "following_url": "https://api.github.com/users/gcervantes8/following{/other_user}", "gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}", "starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions", "organizations_url": "https://api.github.com/users/gcervantes8/orgs", "repos_url": "https://api.github.com/users/gcervantes8/repos", "events_url": "https://api.github.com/users/gcervantes8/events{/privacy}", "received_events_url": "https://api.github.com/users/gcervantes8/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n" ]
1,634,161,043,000
1,634,229,346,000
1,634,229,208,000
NONE
null
## Describe the bug Installing the datasets library with a computer running with ppc64le seems to cause an issue when importing the datasets library. ``` python Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Illegal instruction (core dumped) ``` Error when importing `Illegal instruction (core dumped)` ## Steps to reproduce the bug I get this error when installing the library by using conda. I can't install with pip I believe because pyarrow only has the ppc64le library on conda forge ``` conda create --name transformers_py36_v2 python=3.6 conda activate transformers_py36_v2 conda install datasets ``` ## Tracebacks conda create --name transformers_py36_v2 python=3.6 ``` Collecting package metadata (current_repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.9.2 latest version: 4.10.3 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2 added / updated specs: - python=3.6 The following NEW packages will be INSTALLED: _libgcc_mutex conda-forge/linux-ppc64le::_libgcc_mutex-0.1-conda_forge _openmp_mutex conda-forge/linux-ppc64le::_openmp_mutex-4.5-1_gnu ca-certificates conda-forge/linux-ppc64le::ca-certificates-2021.10.8-h1084571_0 certifi pkgs/main/linux-ppc64le::certifi-2020.12.5-py36h6ffa863_0 ld_impl_linux-ppc~ conda-forge/linux-ppc64le::ld_impl_linux-ppc64le-2.36.1-ha35d02b_2 libffi conda-forge/linux-ppc64le::libffi-3.4.2-h3b9df90_4 libgcc-ng conda-forge/linux-ppc64le::libgcc-ng-11.2.0-h7698a5e_11 libgomp conda-forge/linux-ppc64le::libgomp-11.2.0-h7698a5e_11 libstdcxx-ng conda-forge/linux-ppc64le::libstdcxx-ng-11.2.0-habdf983_11 libzlib conda-forge/linux-ppc64le::libzlib-1.2.11-h339bb43_1013 ncurses conda-forge/linux-ppc64le::ncurses-6.2-hea85c5d_4 openssl conda-forge/linux-ppc64le::openssl-1.1.1l-h4e0d66e_0 pip conda-forge/noarch::pip-21.3-pyhd8ed1ab_0 python conda-forge/linux-ppc64le::python-3.6.13-h57873ef_2_cpython readline conda-forge/linux-ppc64le::readline-8.1-h5c45dff_0 setuptools pkgs/main/linux-ppc64le::setuptools-58.0.4-py36h6ffa863_0 sqlite conda-forge/linux-ppc64le::sqlite-3.36.0-h4e2196e_2 tk conda-forge/linux-ppc64le::tk-8.6.11-h41c6715_1 wheel conda-forge/noarch::wheel-0.37.0-pyhd8ed1ab_1 xz conda-forge/linux-ppc64le::xz-5.2.5-h6eb9509_1 zlib conda-forge/linux-ppc64le::zlib-1.2.11-h339bb43_1013 Proceed ([y]/n)? y Preparing transaction: done Verifying transaction: done Executing transaction: done # # To activate this environment, use # # $ conda activate transformers_py36_v2 # # To deactivate an active environment, use # # $ conda deactivate ``` conda activate transformers_py36_v2 conda install datasets ``` Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. 
<== current version: 4.9.2 latest version: 4.10.3 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2 added / updated specs: - datasets The following NEW packages will be INSTALLED: abseil-cpp conda-forge/linux-ppc64le::abseil-cpp-20210324.2-h3b9df90_0 aiohttp conda-forge/linux-ppc64le::aiohttp-3.7.4.post0-py36hc33305d_0 arrow-cpp conda-forge/linux-ppc64le::arrow-cpp-5.0.0-py36hf9cf308_8_cpu async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000 attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0 aws-c-cal conda-forge/linux-ppc64le::aws-c-cal-0.5.11-hb3fac3d_0 aws-c-common conda-forge/linux-ppc64le::aws-c-common-0.6.2-h4e0d66e_0 aws-c-event-stream conda-forge/linux-ppc64le::aws-c-event-stream-0.2.7-h76da5f2_13 aws-c-io conda-forge/linux-ppc64le::aws-c-io-0.10.5-hf6a6c7c_0 aws-checksums conda-forge/linux-ppc64le::aws-checksums-0.1.11-hfe76d68_7 aws-sdk-cpp conda-forge/linux-ppc64le::aws-sdk-cpp-1.8.186-h90855e8_3 brotlipy conda-forge/linux-ppc64le::brotlipy-0.7.0-py36hc33305d_1001 bzip2 conda-forge/linux-ppc64le::bzip2-1.0.8-h4e0d66e_4 c-ares conda-forge/linux-ppc64le::c-ares-1.17.2-h4e0d66e_0 cffi conda-forge/linux-ppc64le::cffi-1.14.6-py36h021ab3c_1 chardet conda-forge/linux-ppc64le::chardet-4.0.0-py36h270354c_1 colorama conda-forge/noarch::colorama-0.4.4-pyh9f0ad1d_0 cryptography conda-forge/linux-ppc64le::cryptography-3.4.7-py36hc71b123_0 dataclasses conda-forge/noarch::dataclasses-0.8-pyh787bdff_2 datasets conda-forge/noarch::datasets-1.12.1-pyhd8ed1ab_1 dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0 filelock conda-forge/noarch::filelock-3.3.0-pyhd8ed1ab_0 fsspec conda-forge/noarch::fsspec-2021.10.0-pyhd8ed1ab_0 gflags conda-forge/linux-ppc64le::gflags-2.2.2-hb209c28_1004 glog conda-forge/linux-ppc64le::glog-0.5.0-h4040248_0 grpc-cpp conda-forge/linux-ppc64le::grpc-cpp-1.40.0-h2bf711c_2 huggingface_hub conda-forge/noarch::huggingface_hub-0.0.19-pyhd8ed1ab_0 idna conda-forge/noarch::idna-2.10-pyh9f0ad1d_0 idna_ssl conda-forge/noarch::idna_ssl-1.0.0-0 importlib-metadata conda-forge/linux-ppc64le::importlib-metadata-4.8.1-py36h270354c_0 importlib_metadata conda-forge/noarch::importlib_metadata-4.8.1-hd8ed1ab_0 krb5 conda-forge/linux-ppc64le::krb5-1.19.2-haf43566_2 libblas conda-forge/linux-ppc64le::libblas-3.9.0-11_linuxppc64le_openblas libbrotlicommon conda-forge/linux-ppc64le::libbrotlicommon-1.0.9-h4e0d66e_5 libbrotlidec conda-forge/linux-ppc64le::libbrotlidec-1.0.9-h4e0d66e_5 libbrotlienc conda-forge/linux-ppc64le::libbrotlienc-1.0.9-h4e0d66e_5 libcblas conda-forge/linux-ppc64le::libcblas-3.9.0-11_linuxppc64le_openblas libcurl conda-forge/linux-ppc64le::libcurl-7.79.1-he415e40_1 libedit conda-forge/linux-ppc64le::libedit-3.1.20191231-h41a240f_2 libev conda-forge/linux-ppc64le::libev-4.33-h6eb9509_1 libevent conda-forge/linux-ppc64le::libevent-2.1.10-h97db324_4 libgfortran-ng conda-forge/linux-ppc64le::libgfortran-ng-11.2.0-hfdc3801_11 libgfortran5 conda-forge/linux-ppc64le::libgfortran5-11.2.0-he58fbb4_11 liblapack conda-forge/linux-ppc64le::liblapack-3.9.0-11_linuxppc64le_openblas libnghttp2 conda-forge/linux-ppc64le::libnghttp2-1.43.0-h42039ad_1 libopenblas conda-forge/linux-ppc64le::libopenblas-0.3.17-pthreads_h486567c_1 libprotobuf conda-forge/linux-ppc64le::libprotobuf-3.18.1-h690f14c_0 libssh2 conda-forge/linux-ppc64le::libssh2-1.10.0-ha5a9321_2 libthrift conda-forge/linux-ppc64le::libthrift-0.15.0-h54f692e_1 libutf8proc 
conda-forge/linux-ppc64le::libutf8proc-2.6.1-h4e0d66e_0 lz4-c conda-forge/linux-ppc64le::lz4-c-1.9.3-h3b9df90_1 multidict conda-forge/linux-ppc64le::multidict-5.2.0-py36hc33305d_0 multiprocess conda-forge/linux-ppc64le::multiprocess-0.70.12.2-py36hc33305d_0 numpy conda-forge/linux-ppc64le::numpy-1.19.5-py36h86665d4_1 orc conda-forge/linux-ppc64le::orc-1.7.0-hae6b4bd_0 packaging conda-forge/noarch::packaging-21.0-pyhd8ed1ab_0 pandas conda-forge/linux-ppc64le::pandas-1.1.5-py36hab1a6e6_0 parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2 pyarrow conda-forge/linux-ppc64le::pyarrow-5.0.0-py36h7a46c7e_8_cpu pycparser conda-forge/noarch::pycparser-2.20-pyh9f0ad1d_2 pyopenssl conda-forge/noarch::pyopenssl-21.0.0-pyhd8ed1ab_0 pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0 pysocks conda-forge/linux-ppc64le::pysocks-1.7.1-py36h270354c_3 python-dateutil conda-forge/noarch::python-dateutil-2.8.2-pyhd8ed1ab_0 python-xxhash conda-forge/linux-ppc64le::python-xxhash-2.0.2-py36hc33305d_0 python_abi conda-forge/linux-ppc64le::python_abi-3.6-2_cp36m pytz conda-forge/noarch::pytz-2021.3-pyhd8ed1ab_0 pyyaml conda-forge/linux-ppc64le::pyyaml-5.4.1-py36hc33305d_1 re2 conda-forge/linux-ppc64le::re2-2021.09.01-h3b9df90_0 requests conda-forge/noarch::requests-2.25.1-pyhd3deb0d_0 s2n conda-forge/linux-ppc64le::s2n-1.0.10-h97db324_0 six conda-forge/noarch::six-1.16.0-pyh6c4a22f_0 snappy conda-forge/linux-ppc64le::snappy-1.1.8-hb209c28_3 tqdm conda-forge/noarch::tqdm-4.62.3-pyhd8ed1ab_0 typing-extensions conda-forge/noarch::typing-extensions-3.10.0.2-hd8ed1ab_0 typing_extensions conda-forge/noarch::typing_extensions-3.10.0.2-pyha770c72_0 urllib3 conda-forge/noarch::urllib3-1.26.7-pyhd8ed1ab_0 xxhash conda-forge/linux-ppc64le::xxhash-0.8.0-h4e0d66e_3 yaml conda-forge/linux-ppc64le::yaml-0.2.5-h6eb9509_0 yarl conda-forge/linux-ppc64le::yarl-1.6.3-py36hc33305d_2 zipp conda-forge/noarch::zipp-3.6.0-pyhd8ed1ab_0 zstd conda-forge/linux-ppc64le::zstd-1.5.0-h65c4b1a_0 The following packages will be UPDATED: certifi pkgs/main::certifi-2020.12.5-py36h6ff~ --> conda-forge::certifi-2021.5.30-py36h270354c_0 Proceed ([y]/n)? y Preparing transaction: done Verifying transaction: done Executing transaction: done ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Red Hat Enterprise Linux 8.2 (Ootpa) - Python version: 3.6 - PyArrow version: pyarrow - 5.0.0 - py36h7a46c7e_8_cpu - conda-forge Any help would be appreciated! I've been struggling on installing datasets on this machine.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3073/timeline
null
null
null
false
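Since `datasets` imports pyarrow at import time, importing the two packages separately isolates whether the crash comes from the pyarrow build (as the reporter concluded). A small triage sketch:

```python
import faulthandler

faulthandler.enable()  # dumps a Python traceback on SIGILL/SIGSEGV where possible

import pyarrow  # an "Illegal instruction" here points at the pyarrow build itself
print(pyarrow.__version__)

import datasets  # only reached if pyarrow imports cleanly
print(datasets.__version__)
```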
https://api.github.com/repos/huggingface/datasets/issues/3072
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3072/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3072/comments
https://api.github.com/repos/huggingface/datasets/issues/3072/events
https://github.com/huggingface/datasets/pull/3072
1,025,233,152
PR_kwDODunzps4tJNnD
3,072
Fix pathlib patches for streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,130,675,000
1,634,131,865,000
1,634,131,865,000
MEMBER
null
Fix issue https://github.com/huggingface/datasets/issues/2866 (for good this time): `counter` now works in both streaming and non-streaming mode. The `AttributeError: 'str' object has no attribute 'as_posix'` related to the patch of Path.open is fixed as well. Note: the patches should only affect the datasets module, not the user's own code! That's why we should probably use something other than patch.object to patch the Path class's methods. cc @severo @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3072/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3072/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3072", "html_url": "https://github.com/huggingface/datasets/pull/3072", "diff_url": "https://github.com/huggingface/datasets/pull/3072.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3072.patch", "merged_at": 1634131865000 }
true
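One way to avoid patching the global Path class, as the note in the PR body suggests, is to confine streaming behaviour to a Path subclass used only inside the datasets module. A hedged sketch of that idea; the class name and details are hypothetical:

```python
import re
from pathlib import PurePosixPath

import fsspec

class StreamingPath(PurePosixPath):  # hypothetical name, illustration only
    """A Path whose open() goes through fsspec, leaving pathlib.Path untouched."""

    def __str__(self) -> str:
        # PurePosixPath collapses the "//" of "https://...", so restore it.
        return re.sub(r"^([a-z0-9+.-]+):/([^/])", r"\1://\2", super().__str__())

    def open(self, mode: str = "r", **kwargs):
        return fsspec.open(str(self), mode=mode, **kwargs).open()

# e.g. StreamingPath("https://example.com/data.txt").open().read()
```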
https://api.github.com/repos/huggingface/datasets/issues/3071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3071/comments
https://api.github.com/repos/huggingface/datasets/issues/3071/events
https://github.com/huggingface/datasets/issues/3071
1,024,893,493
I_kwDODunzps49FqI1
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are removed from the datasets template folder
{ "login": "zixiliuUSC", "id": 49173327, "node_id": "MDQ6VXNlcjQ5MTczMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zixiliuUSC", "html_url": "https://github.com/zixiliuUSC", "followers_url": "https://api.github.com/users/zixiliuUSC/followers", "following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}", "gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}", "starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions", "organizations_url": "https://api.github.com/users/zixiliuUSC/orgs", "repos_url": "https://api.github.com/users/zixiliuUSC/repos", "events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}", "received_events_url": "https://api.github.com/users/zixiliuUSC/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```" ]
1,634,110,330,000
1,634,113,624,000
1,634,113,623,000
NONE
null
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The main problem is that my custom dataset is separated into many files, and the only dataset loading template I could find that handles my circumstance is [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py). I'm afraid these templates are too old to use. Could you re-add these three templates to the current master branch?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3071/timeline
null
null
null
false
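For completeness, the generic loaders pointed to in the reply also cover the "many files" case from the issue; the file names below are hypothetical:

```python
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files={
        "train": ["train_part1.json", "train_part2.json"],  # hypothetical names
        "validation": "dev.json",
    },
)
```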
https://api.github.com/repos/huggingface/datasets/issues/3070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3070/comments
https://api.github.com/repos/huggingface/datasets/issues/3070/events
https://github.com/huggingface/datasets/pull/3070
1,024,856,745
PR_kwDODunzps4tIBRp
3,070
Fix Windows CI with FileNotFoundError when setting up s3_base fixture
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks ! Sorry for the inconvenience ^^' " ]
1,634,107,741,000
1,634,115,313,000
1,634,107,788,000
MEMBER
null
Fix #3069.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3070/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3070/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3070", "html_url": "https://github.com/huggingface/datasets/pull/3070", "diff_url": "https://github.com/huggingface/datasets/pull/3070.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3070.patch", "merged_at": 1634107788000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3069/comments
https://api.github.com/repos/huggingface/datasets/issues/3069/events
https://github.com/huggingface/datasets/issues/3069
1,024,818,680
I_kwDODunzps49FX34
3,069
CI fails on Windows with FileNotFoundError when setting up s3_base fixture
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,104,346,000
1,634,112,349,000
1,634,107,788,000
MEMBER
null
## Describe the bug After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when setting up s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321 Error summary: ``` ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - FileNotF... ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - FileNotFo... ``` Stack trace: ``` ______________ ERROR at setup of test_dummy_dataset_serialize_s3 ______________ [gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe @pytest.fixture() def s3_base(): # writable local S3 system import shlex import subprocess # Mocked AWS Credentials for moto. old_environ = os.environ.copy() os.environ.update(S3_FAKE_ENV_VARS) > proc = subprocess.Popen(shlex.split("moto_server s3 -p %s" % s3_port)) tests\s3_fixtures.py:32: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\tools\miniconda3\lib\subprocess.py:729: in __init__ restore_signals, start_new_session) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <subprocess.Popen object at 0x0000012BB8A4B908> args = 'moto_server s3 -p 5555', executable = None, preexec_fn = None close_fds = True, pass_fds = (), cwd = None, env = None startupinfo = <subprocess.STARTUPINFO object at 0x0000012BB8177630> creationflags = 0, shell = False, p2cread = -1, p2cwrite = -1, c2pread = -1 c2pwrite = -1, errread = -1, errwrite = -1, unused_restore_signals = True unused_start_new_session = False def _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session): """Execute program (MS Windows version)""" assert not pass_fds, "pass_fds not supported on Windows." if not isinstance(args, str): args = list2cmdline(args) # Process startup details if startupinfo is None: startupinfo = STARTUPINFO() if -1 not in (p2cread, c2pwrite, errwrite): startupinfo.dwFlags |= _winapi.STARTF_USESTDHANDLES startupinfo.hStdInput = p2cread startupinfo.hStdOutput = c2pwrite startupinfo.hStdError = errwrite if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE comspec = os.environ.get("COMSPEC", "cmd.exe") args = '{} /c "{}"'.format (comspec, args) # Start the process try: hp, ht, pid, tid = _winapi.CreateProcess(executable, args, # no special security None, None, int(not close_fds), creationflags, env, os.fspath(cwd) if cwd is not None else None, > startupinfo) E FileNotFoundError: [WinError 2] The system cannot find the file specified C:\tools\miniconda3\lib\subprocess.py:1017: FileNotFoundError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3069/timeline
null
null
null
false
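On Windows, `subprocess.Popen` does not resolve bare console-script names the way a shell does (the installed entry point is typically `moto_server.exe`), which is one common source of exactly this `FileNotFoundError`. A generic hedged workaround, not necessarily the fix that was merged in #3070:

```python
import shutil
import subprocess

# Resolve the console script to an absolute path first, so Windows can find it.
exe = shutil.which("moto_server")
if exe is None:
    raise RuntimeError("moto_server not found on PATH")
proc = subprocess.Popen([exe, "s3", "-p", "5555"])
```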
https://api.github.com/repos/huggingface/datasets/issues/3068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3068/comments
https://api.github.com/repos/huggingface/datasets/issues/3068/events
https://github.com/huggingface/datasets/pull/3068
1,024,681,264
PR_kwDODunzps4tHhOC
3,068
feat: increase streaming retry config
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq I had 2 runs for more than 2 days each, continuously streaming (they were failing before with 3 retries at 1 sec interval).\r\n\r\nThey are running on TPU's (so great internet connection) and only had connection errors a few times each (3 & 4). Each time it worked after only 1 retry.\r\nThe reason for a higher number of retries is for local connections. It would allow for almost 2mn of a wifi/ethernet disconnection. In practice this should not happen very often.\r\n\r\nLet me know if you think it's too much." ]
1,634,090,450,000
1,634,117,156,000
1,634,117,154,000
CONTRIBUTOR
null
Increase streaming config parameters: * retry interval set to 5 seconds * max retries set to 20 (so 1mn 40s)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3068/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3068/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3068", "html_url": "https://github.com/huggingface/datasets/pull/3068", "diff_url": "https://github.com/huggingface/datasets/pull/3068.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3068.patch", "merged_at": 1634117154000 }
true
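For reference, a sketch of where these knobs live; the attribute names are assumed from the PR description, so verify them against the installed version:

```python
import datasets.config

datasets.config.STREAMING_READ_MAX_RETRIES = 20    # number of attempts (assumed name)
datasets.config.STREAMING_READ_RETRY_INTERVAL = 5  # seconds between attempts (assumed name)
```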
https://api.github.com/repos/huggingface/datasets/issues/3067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3067/comments
https://api.github.com/repos/huggingface/datasets/issues/3067/events
https://github.com/huggingface/datasets/pull/3067
1,024,023,185
PR_kwDODunzps4tFSCy
3,067
add story_cloze
{ "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks for pushing this dataset :)\r\n\r\nAccording to the CI, the file `cloze_test_val__spring2016 - cloze_test_ALL_val.csv` is missing in the dummy data zip file (the zip files seem empty). Feel free to add this file with 4-5 lines and it should be good\r\n\r\nAnd you can fix the YAML tags with\r\n```yaml\r\npretty_name: Story Cloze Test\r\n```\r\nand filling the other tags task_categories and task_ids\r\n\r\nIf the dataset doesn exist on paperswithcode, you can just leave\r\n```yaml\r\npaperswithcode_id: null\r\n```", "@lhoestq can't fix the last test fails.", "> Thanks @zaidalyafeai, the failing test is due to an issue in the master branch, that has already been fixed.\r\n> \r\n> You can include the fix:\r\n> \r\n> ```\r\n> git checkout add_story_cloze\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> ```\r\n\r\nThanks @albertvillanova, passed all the tests now. ", "Thanks Albert, I fixed the suggested comments. This dataset has no train splits, it is only used for evaluation." ]
1,634,056,613,000
1,634,132,893,000
1,634,132,893,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3067/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3067/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3067", "html_url": "https://github.com/huggingface/datasets/pull/3067", "diff_url": "https://github.com/huggingface/datasets/pull/3067.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3067.patch", "merged_at": 1634132893000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3066
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3066/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3066/comments
https://api.github.com/repos/huggingface/datasets/issues/3066/events
https://github.com/huggingface/datasets/pull/3066
1,024,005,311
PR_kwDODunzps4tFObl
3,066
Add iter_archive
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,055,436,000
1,634,548,367,000
1,634,548,366,000
MEMBER
null
Added the `iter_archive` method for the StreamingDownloadManager. It was already implemented in the regular DownloadManager. Now it can be used to stream from TAR archives as mentioned in https://github.com/huggingface/datasets/issues/2829 I also updated the `food101` dataset as an example. Any image/audio dataset using TAR archives can be updated to use `iter_archive` in order to be streamable :) cc @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3066/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3066/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3066", "html_url": "https://github.com/huggingface/datasets/pull/3066", "diff_url": "https://github.com/huggingface/datasets/pull/3066.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3066.patch", "merged_at": 1634548366000 }
true
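A hedged sketch of how a loading script consumes `iter_archive`: it yields `(path_inside_archive, file_object)` pairs sequentially, which is what makes TAR archives streamable without extracting them. The URL, builder name and feature names below are all hypothetical:

```python
import datasets

_URL = "https://example.com/data.tar.gz"  # hypothetical archive URL

class MyArchiveDataset(datasets.GeneratorBasedBuilder):  # hypothetical builder
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"file_name": datasets.Value("string"), "content": datasets.Value("binary")}
            )
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download(_URL)  # note: no extraction step
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # Files arrive in archive order as (path, file-like) pairs.
        for key, (path, f) in enumerate(files):
            yield key, {"file_name": path, "content": f.read()}
```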
https://api.github.com/repos/huggingface/datasets/issues/3065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3065/comments
https://api.github.com/repos/huggingface/datasets/issues/3065/events
https://github.com/huggingface/datasets/pull/3065
1,023,951,322
PR_kwDODunzps4tFDjk
3,065
Fix test command after refac
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,634,052,210,000
1,634,052,527,000
1,634,052,526,000
MEMBER
null
Fix the `datasets-cli` test command after the `prepare_module` change in #2986
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3065/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3065/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3065", "html_url": "https://github.com/huggingface/datasets/pull/3065", "diff_url": "https://github.com/huggingface/datasets/pull/3065.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3065.patch", "merged_at": 1634052526000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3062
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3062/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3062/comments
https://api.github.com/repos/huggingface/datasets/issues/3062/events
https://github.com/huggingface/datasets/pull/3062
1,023,209,592
PR_kwDODunzps4tCxfK
3,062
Update summary on PyPi beyond NLP
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomwolf/followers", "following_url": "https://api.github.com/users/thomwolf/following{/other_user}", "gists_url": "https://api.github.com/users/thomwolf/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomwolf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomwolf/subscriptions", "organizations_url": "https://api.github.com/users/thomwolf/orgs", "repos_url": "https://api.github.com/users/thomwolf/repos", "events_url": "https://api.github.com/users/thomwolf/events{/privacy}", "received_events_url": "https://api.github.com/users/thomwolf/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,633,994,866,000
1,634,115,354,000
1,634,115,354,000
MEMBER
null
More than just NLP now
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3062/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3062/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3062", "html_url": "https://github.com/huggingface/datasets/pull/3062", "diff_url": "https://github.com/huggingface/datasets/pull/3062.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3062.patch", "merged_at": 1634115353000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3060/comments
https://api.github.com/repos/huggingface/datasets/issues/3060/events
https://github.com/huggingface/datasets/issues/3060
1,022,936,396
I_kwDODunzps48-MVM
3,060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
{ "login": "RylanSchaeffer", "id": 8942987, "node_id": "MDQ6VXNlcjg5NDI5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RylanSchaeffer", "html_url": "https://github.com/RylanSchaeffer", "followers_url": "https://api.github.com/users/RylanSchaeffer/followers", "following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}", "gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}", "starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions", "organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs", "repos_url": "https://api.github.com/users/RylanSchaeffer/repos", "events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}", "received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload? Please use:\r\n```python\r\ndataset = load_dataset(\"openwebtext\", download_mode=\"FORCE_REDOWNLOAD\")\r\n```\r\n\r\nLet me know if the problem persists.", "I close this issue for the moment. Feel free to re-open it again if the problem persists." ]
1,633,971,927,000
1,635,400,341,000
1,635,400,341,000
NONE
null
## Describe the bug When I try `load_dataset('openwebtext')`, I receive an "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `dataset` variable to be properly constructed. ## Actual results ``` File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset dataset_str, File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset use_auth_token=use_auth_token, File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators dl_dir = dl_manager.download_and_extract(_URL) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path output_path, force_extract=download_config.force_extract File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract self.extractor.extract(input_path, output_path, extractor=extractor) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract return extractor.extract(input_path, output_path) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract tar_file.extractall(output_path) File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall numeric_owner=numeric_owner) File "/usr/lib/python3.6/tarfile.py", line 2052, in extract numeric_owner=numeric_owner) File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member self.makefile(tarinfo, targetpath) File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile copyfileobj(source, target, tarinfo.size, ReadError, bufsize) File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj buf = src.read(bufsize) File "/usr/lib/python3.6/lzma.py", line 200, in read return self._buffer.read(size) File "/usr/lib/python3.6/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/usr/lib/python3.6/_compression.py", line 99, in read raise EOFError("Compressed file ended before the " python-BaseException EOFError: Compressed file ended before the end-of-stream marker was reached ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial - Python version: 3.6.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3060/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3059/comments
https://api.github.com/repos/huggingface/datasets/issues/3059/events
https://github.com/huggingface/datasets/pull/3059
1,022,620,057
PR_kwDODunzps4tA54w
3,059
Fix task reloading from cache
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,633,953,784,000
1,633,955,019,000
1,633,955,019,000
MEMBER
null
When reloading a dataset from the cache during `map`, the task templates were kept instead of being updated based on the output of the `map` function. This is an issue because we drop the task templates that are no longer compatible after `map`, for example if a column of the template was removed. This PR fixes this and, for convenience, introduces a decorator `@transmit_tasks` that takes care of this verification, similar to the `@transmit_format` decorator. This should fix issue https://github.com/huggingface/datasets/issues/3047 cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3059/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3059/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3059", "html_url": "https://github.com/huggingface/datasets/pull/3059", "diff_url": "https://github.com/huggingface/datasets/pull/3059.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3059.patch", "merged_at": 1633955018000 }
true
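A simplified sketch of what a `@transmit_tasks`-style decorator from PR #3059 can look like; this is an illustrative reimplementation, not the merged code, and it assumes each template's `column_mapping` lists the columns it relies on:

```python
import functools

def transmit_tasks(func):
    """Drop task templates that are no longer compatible after a transform."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        out = func(self, *args, **kwargs)
        datasets_out = list(out.values()) if isinstance(out, dict) else [out]
        for dataset in datasets_out:
            if dataset.info.task_templates is not None:
                # keep a template only if every column it relies on survived
                dataset.info.task_templates = [
                    template
                    for template in dataset.info.task_templates
                    if all(col in dataset.column_names for col in template.column_mapping)
                ]
        return out
    return wrapper
```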
https://api.github.com/repos/huggingface/datasets/issues/3057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3057/comments
https://api.github.com/repos/huggingface/datasets/issues/3057/events
https://github.com/huggingface/datasets/issues/3057
1,022,508,315
I_kwDODunzps488j0b
3,057
Error in per class precision computation
{ "login": "tidhamecha2", "id": 38906722, "node_id": "MDQ6VXNlcjM4OTA2NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/38906722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tidhamecha2", "html_url": "https://github.com/tidhamecha2", "followers_url": "https://api.github.com/users/tidhamecha2/followers", "following_url": "https://api.github.com/users/tidhamecha2/following{/other_user}", "gists_url": "https://api.github.com/users/tidhamecha2/gists{/gist_id}", "starred_url": "https://api.github.com/users/tidhamecha2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tidhamecha2/subscriptions", "organizations_url": "https://api.github.com/users/tidhamecha2/orgs", "repos_url": "https://api.github.com/users/tidhamecha2/repos", "events_url": "https://api.github.com/users/tidhamecha2/events{/privacy}", "received_events_url": "https://api.github.com/users/tidhamecha2/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```" ]
1,633,946,719,000
1,633,947,464,000
1,633,947,376,000
NONE
null
## Describe the bug When trying to get per-class precision values by providing `average=None`, the following error is thrown: `ValueError: can only convert an array of size 1 to a Python scalar` ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric precision_metric = load_metric("precision") predictions = [0, 2, 1, 0, 0, 1] references = [0, 1, 2, 0, 1, 2] results = precision_metric.compute(predictions=predictions, references=references, average=None) ``` ## Expected results ` {'precision': array([0.66666667, 0. , 0. ])}` as per https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py ## Actual results ``` output = self._compute(predictions=predictions, references=references, **kwargs) File "~/.cache/huggingface/modules/datasets_modules/metrics/precision/94709a71c6fe37171ef49d3466fec24dee9a79846c9f176dff66a649e9811690/precision.py", line 110, in _compute sample_weight=sample_weight, ValueError: can only convert an array of size 1 to a Python scalar ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: linux - Python version: 3.6.9 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3057/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3057/timeline
null
null
null
false
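Until the fix from #3008 is in a release, per-class precision can be obtained by calling scikit-learn directly, since the metric wraps `sklearn.metrics.precision_score`; a minimal sketch with the inputs from the report above:

```python
from sklearn.metrics import precision_score

predictions = [0, 2, 1, 0, 0, 1]
references = [0, 1, 2, 0, 1, 2]

# average=None returns one precision value per class instead of a scalar
per_class_precision = precision_score(references, predictions, average=None)
print(per_class_precision)  # [0.66666667 0.         0.        ]
```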
https://api.github.com/repos/huggingface/datasets/issues/3056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3056/comments
https://api.github.com/repos/huggingface/datasets/issues/3056/events
https://github.com/huggingface/datasets/pull/3056
1,022,345,564
PR_kwDODunzps4tAB9h
3,056
Fix meteor metric for version >= 3.6.4
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,633,936,304,000
1,633,937,360,000
1,633,937,359,000
MEMBER
null
After the `nltk` update, the meteor metric expects pre-tokenized inputs (a breaking change). This PR fixes this issue while maintaining compatibility with older versions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3056/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3056/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3056", "html_url": "https://github.com/huggingface/datasets/pull/3056", "diff_url": "https://github.com/huggingface/datasets/pull/3056.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3056.patch", "merged_at": 1633937359000 }
true
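The compatibility approach described in PR #3056 can be sketched as follows. This is an assumption-laden illustration rather than the merged patch: the 3.6.4 threshold matches the version named in the PR title, `word_tokenize` additionally needs the `punkt` NLTK resource, and METEOR itself needs `wordnet`:

```python
from nltk import __version__ as nltk_version, word_tokenize
from nltk.translate import meteor_score
from packaging import version

# newer nltk releases expect Iterable[str] instead of raw strings
NEEDS_PRETOKENIZED = version.parse(nltk_version) >= version.parse("3.6.4")

def compute_meteor(reference: str, hypothesis: str) -> float:
    if NEEDS_PRETOKENIZED:
        return meteor_score.single_meteor_score(
            word_tokenize(reference), word_tokenize(hypothesis)
        )
    return meteor_score.single_meteor_score(reference, hypothesis)
```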
https://api.github.com/repos/huggingface/datasets/issues/3055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3055/comments
https://api.github.com/repos/huggingface/datasets/issues/3055/events
https://github.com/huggingface/datasets/issues/3055
1,022,319,238
I_kwDODunzps4871qG
3,055
CI test suite fails after meteor metric update
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,633,934,232,000
1,633,937,431,000
1,633,937,431,000
MEMBER
null
## Describe the bug CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010 Stack trace: ``` ___________________ LocalMetricTest.test_load_metric_meteor ____________________ [gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 self = <tests.test_metric_common.LocalMetricTest testMethod=test_load_metric_meteor> metric_name = 'meteor' def test_load_metric(self, metric_name): doctest.ELLIPSIS_MARKER = "[...]" metric_module = importlib.import_module(datasets.load.prepare_module(os.path.join("metrics", metric_name))[0]) metric = datasets.load.import_main_class(metric_module.__name__, dataset=False) # check parameters parameters = inspect.signature(metric._compute).parameters self.assertTrue("predictions" in parameters) self.assertTrue("references" in parameters) self.assertTrue(all([p.kind != p.VAR_KEYWORD for p in parameters.values()])) # no **kwargs # run doctest with self.patch_intensive_calls(metric_name, metric_module.__name__): with self.use_local_metrics(): > results = doctest.testmod(metric_module, verbose=True, raise_on_error=True) tests/test_metric_common.py:75: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1951: in testmod runner.run(test) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1839: in run r = DocTestRunner.run(self, test, compileflags, out, False) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1476: in run return self.__run(test, compileflags, out) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1382: in __run exception) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <doctest.DebugRunner object at 0x7f4c26bd3da0> out = <built-in method write of _io.TextIOWrapper object at 0x7f51a21852d0> test = <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Mete...ets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)> example = <doctest.Example object at 0x7f4c26bd3eb8> exc_info = (<class 'TypeError'>, TypeError('"hypothesis" expects pre-tokenized hypothesis (Iterable[str]): It is a guide to action which ensures that the military always obeys the commands of the party',), <traceback object at 0x7f4cd01afec8>) def report_unexpected_exception(self, out, test, example, exc_info): > raise UnexpectedException(test, example, exc_info) E doctest.UnexpectedException: <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Meteor from /tmp/pytest-of-circleci/pytest-0/popen-gw1/cache/modules/datasets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)> ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1845: UnexpectedException ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3055/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3054/comments
https://api.github.com/repos/huggingface/datasets/issues/3054/events
https://github.com/huggingface/datasets/pull/3054
1,022,108,186
PR_kwDODunzps4s_TmE
3,054
Update Biosses
{ "login": "bwang482", "id": 6764450, "node_id": "MDQ6VXNlcjY3NjQ0NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bwang482", "html_url": "https://github.com/bwang482", "followers_url": "https://api.github.com/users/bwang482/followers", "following_url": "https://api.github.com/users/bwang482/following{/other_user}", "gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}", "starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bwang482/subscriptions", "organizations_url": "https://api.github.com/users/bwang482/orgs", "repos_url": "https://api.github.com/users/bwang482/repos", "events_url": "https://api.github.com/users/bwang482/events{/privacy}", "received_events_url": "https://api.github.com/users/bwang482/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,633,904,712,000
1,634,115,867,000
1,634,115,867,000
CONTRIBUTOR
null
Fix variable naming
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3054/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3054/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3054", "html_url": "https://github.com/huggingface/datasets/pull/3054", "diff_url": "https://github.com/huggingface/datasets/pull/3054.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3054.patch", "merged_at": 1634115867000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3052/comments
https://api.github.com/repos/huggingface/datasets/issues/3052/events
https://github.com/huggingface/datasets/issues/3052
1,021,944,435
I_kwDODunzps486aJz
3,052
load_dataset cannot download the data and hangs on forever if cache dir specified
{ "login": "BenoitDalFerro", "id": 69694610, "node_id": "MDQ6VXNlcjY5Njk0NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BenoitDalFerro", "html_url": "https://github.com/BenoitDalFerro", "followers_url": "https://api.github.com/users/BenoitDalFerro/followers", "following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}", "gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}", "starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions", "organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs", "repos_url": "https://api.github.com/users/BenoitDalFerro/repos", "events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}", "received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The following packages are causing the inconsistency:\r\n> \r\n> - conda-forge/noarch::datasets==1.12.1=pyhd8ed1ab_1\r\n> - conda-forge/win-64::multiprocess==0.70.12.2=py38h294d835_0\r\n> done\r\n> \r\n> Package Plan\r\n> \r\n> environment location: C:\\xxx\\anaconda3\\envs\\UnBias-94-1\r\n> \r\n> added / updated specs:\r\n> - datasets\r\n> \r\n> \r\n> The following NEW packages will be INSTALLED:\r\n> \r\n> dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0\r\n> \r\n> The following packages will be UPDATED:\r\n> \r\n> ca-certificates pkgs/main::ca-certificates-2021.9.30-~ --> conda-forge::ca-certificates-2021.10.8-h5b45459_0\r\n> certifi pkgs/main::certifi-2021.5.30-py38haa9~ --> conda-forge::certifi-2021.10.8-py38haa244fe_0\r\n> \r\n> The following packages will be SUPERSEDED by a higher-priority channel:\r\n> " ]
1,633,861,896,000
1,633,949,829,000
1,633,949,796,000
NONE
null
## Describe the bug After updating datasets, code that had run just fine for ages began to fail. Specifying _datasets.load_dataset_'s optional _cache_dir_ argument on a Windows 10 machine causes the data download to hang forever. The same call without cache_dir works just fine. Surprisingly, the exact same code runs perfectly fine on a Linux docker instance running in the cloud. Unfortunately I updated Windows at the same time, and I can't remember which version of datasets was running in my conda environment prior to the update, otherwise I would have tried both to check this out. :( ## Steps to reproduce the bug ```python cache_dir = 'c:/data/datasets' dataset = load_dataset('wikipedia', '20200501.en', split='train', cache_dir=cache_dir) ``` Note that the exact same code without the _cache_dir_ argument works perfectly fine. ``` cache_dir = 'c:/data/datasets' dataset = load_dataset('wikipedia', '20200501.en', split='train') ``` ## Expected results The dataset downloads and the cache is handled in the _cache_dir_ directory. ## Actual results The data download keeps hanging forever, **NO TRACEBACK**! ## Environment info - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3052/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3050/comments
https://api.github.com/repos/huggingface/datasets/issues/3050/events
https://github.com/huggingface/datasets/pull/3050
1,021,772,622
PR_kwDODunzps4s-anK
3,050
Fix streaming: catch Timeout error
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I'm running a large test.\r\nLet's see if I get any error within a few days.", "This time it stopped after 8h but correctly raised `ConnectionError: Server Disconnected`.\r\n\r\nTraceback:\r\n```\r\nTraceback (most recent call last): \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 1027, in <module> \r\n main() \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 991, in main \r\n for batch in tqdm( \r\n File \"/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py\", line 1180, in __iter__ \r\n for obj in iterable: \r\n File \"/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py\", line 376, in data_loader_streaming\r\n for item in dataset:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 341, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 338, in _iter\r\n yield from ex_iterable\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in __iter__\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 179, in <listcomp>\r\n key_examples_list = [(key, example)] + [\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 176, in __iter__\r\n for key, example in iterator:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 225, in __iter__\r\n for x in self.ex_iterable:\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 99, in __iter__\r\n for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards):\r\n File \"/home/koush/datasets/src/datasets/iterable_dataset.py\", line 287, in wrapper\r\n for key, table in generate_tables_fn(**kwargs):\r\n File \"/home/koush/datasets/src/datasets/packaged_modules/json/json.py\", line 107, in _generate_tables\r\n batch = f.read(self.config.chunksize)\r\n File \"/home/koush/datasets/src/datasets/utils/streaming_download_manager.py\", line 136, in read_with_retries\r\n raise ConnectionError(\"Server Disconnected\")\r\nConnectionError: Server Disconnected\r\n```\r\n\r\nRight before this error, the warnings were correctly raised:\r\n\r\n```\r\n10/10/2021 06:02:26 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [1/3]\r\n10/10/2021 06:02:27 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. Retrying in 1sec [2/3] \r\n10/10/2021 06:02:28 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host. 
Retrying in 1sec [3/3\r\n```\r\n\r\nI'm going to see what happens if I change the max retries to 20 and the interval to 5.", "Also maybe we can raise the Server Disconnected error with more info about what kind of error caused it (client error, time out, etc.)", "I have 2 runs:\r\n* [run 1](https://wandb.ai/dalle-mini/dalle-mini/runs/1nj161cl?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded) that I will remove soon because I now use the 2nd one\r\n* [run 2](https://wandb.ai/dalle-mini/dalle-mini/runs/he9rrc3q?workspace=user-borisd13) with [this data](https://huggingface.co/datasets/dalle-mini/encoded-vqgan_imagenet_f16_16384)\r\n* `load_dataset(dataset_repo, data_files={'train':'data/train/*.jsonl', 'validation':'data/valid/*.jsonl'}, streaming=True)`\r\n\r\nThey have now been running by a bit more than a day for one run and 15h for the other.\r\n\r\nThe error logs are not shown in wandb because the script use `pylogging` (not sure why, I should change it) but basically so far with the new settings I had one timeout in each with successful reconnect afterwards.\r\n\r\nSo I think it's a good idea to have:\r\n* `STREAMING_READ_RETRY_INTERVAL = 5` since before my runs would get 3 errors in a row (with the default 1 second pause)\r\n* `STREAMING_READ_MAX_RETRIES` should also be increased. Since this type of error does not happen a lot, I would still have a large number (at least 10) because a stopped training run may be a big issue if checkpointing/restart is not well implemented which is not always trivial", "I agree ! Feel free to open a PR to increase both values" ]
1,633,803,560,000
1,634,052,498,000
1,633,944,938,000
CONTRIBUTOR
null
Catches Timeout error during streaming. fix #3049
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3050/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3050/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3050", "html_url": "https://github.com/huggingface/datasets/pull/3050", "diff_url": "https://github.com/huggingface/datasets/pull/3050.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3050.patch", "merged_at": 1633944938000 }
true
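The fix in PR #3050 amounts to widening the exceptions caught by the streaming read-retry loop. Below is a standalone sketch of that loop, assuming the config names discussed in the comments above (`STREAMING_READ_MAX_RETRIES`, `STREAMING_READ_RETRY_INTERVAL`); the real code lives inside `datasets`' streaming download manager and wraps fsspec file objects:

```python
import time
from asyncio import TimeoutError  # what aiohttp raises on a stalled read
from aiohttp.client_exceptions import ClientError

STREAMING_READ_MAX_RETRIES = 3
STREAMING_READ_RETRY_INTERVAL = 1  # seconds

def read_with_retries(read, *args, **kwargs):
    for retry in range(1, STREAMING_READ_MAX_RETRIES + 1):
        try:
            return read(*args, **kwargs)
        except (ClientError, TimeoutError):  # TimeoutError is the new addition
            print(
                f"Got disconnected from remote data host. "
                f"Retrying in {STREAMING_READ_RETRY_INTERVAL}sec "
                f"[{retry}/{STREAMING_READ_MAX_RETRIES}]"
            )
            time.sleep(STREAMING_READ_RETRY_INTERVAL)
    raise ConnectionError("Server Disconnected")
```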
https://api.github.com/repos/huggingface/datasets/issues/3049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3049/comments
https://api.github.com/repos/huggingface/datasets/issues/3049/events
https://github.com/huggingface/datasets/issues/3049
1,021,770,008
I_kwDODunzps485vkY
3,049
TimeoutError during streaming
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,633,802,811,000
1,633,944,938,000
1,633,944,938,000
CONTRIBUTOR
null
## Describe the bug I got a TimeoutError after streaming for about 10h. ## Steps to reproduce the bug The code is very long, but a test that streams data indefinitely could reproduce this, though the error may take a while to appear. ## Expected results This error was not expected: the code considers only `ClientError`, not `TimeoutError`. See [this line](https://github.com/huggingface/datasets/blob/2814fbd0e18150be409f10804670e98d9ecb87d4/src/datasets/utils/streaming_download_manager.py#L129). Based on the traceback, it looks like the `TimeoutError` was not captured. ## Actual results ``` File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 25, in _runner result[0] = await coro File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 614, in async_fetch_range out = await r.read() File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 1032, in read self._body = await self.content.read() File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 370, in read block = await self.readany() File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 392, in readany await self._wait("readany") File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 306, in _wait await waiter File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/helpers.py", line 656, in __exit__ raise asyncio.TimeoutError from None asyncio.exceptions.TimeoutError The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 1027, in <module> main() File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 991, in main for batch in tqdm( File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__ for obj in iterable: File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 376, in data_loader_streaming for item in dataset: File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in __iter__ key_examples_list = [(key, example)] + [ File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in <listcomp> key_examples_list = [(key, example)] + [ File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 176, in __iter__ for key, example in iterator: File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 225, in __iter__ for x in self.ex_iterable: File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 99, in __iter__ for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards): File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 287, in wrapper for key, table in generate_tables_fn(**kwargs): File "/home/koush/datasets/src/datasets/packaged_modules/json/json.py", line 107, in _generate_tables batch = f.read(self.config.chunksize) File "/home/koush/datasets/src/datasets/utils/streaming_download_manager.py", line 126, in read_with_retries out = read(*args, **kwargs) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 572, in read return super().read(length) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/spec.py", line 1533, in read out = self.cache._fetch(self.loc, self.loc + length) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/caching.py", line 390, in _fetch self.cache = self.fetcher(start, bend) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 91, in wrapper return sync(self.loop, func, *args, **kwargs) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 69, in sync raise FSTimeoutError from return_result fsspec.exceptions.FSTimeoutError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3049/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3047/comments
https://api.github.com/repos/huggingface/datasets/issues/3047/events
https://github.com/huggingface/datasets/issues/3047
1,021,360,616
I_kwDODunzps484Lno
3,047
Loading from cache a dataset for LM built from a text classification dataset sometimes errors
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This has been fixed in 1.15, let me know if you still have this issue" ]
1,633,717,391,000
1,635,959,588,000
1,635,959,588,000
MEMBER
null
## Describe the bug Yes, I know, that description sucks. So the problem is arising in the course when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try to, since it's a bit fickle): create a dataset for masked-language modeling from the IMDB dataset. ```python from datasets import load_dataset from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased") imdb_dataset = load_dataset("imdb", split="train") def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_dataset = imdb_dataset.map( tokenize_function, batched=True, remove_columns=["text", "label"] ) chunk_size = 128 def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} # Compute length of concatenated texts total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the last chunk if it's smaller than chunk_size total_length = (total_length // chunk_size) * chunk_size # Split by chunks of max_len. result = { k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)] for k, t in concatenated_examples.items() } # Create a new labels column result["labels"] = result["input_ids"].copy() return result lm_dataset = tokenized_dataset.map(group_texts, batched=True) ``` Until now, all is well. The problem comes when you re-execute that code, more specifically: ```python tokenized_dataset = imdb_dataset.map( tokenize_function, batched=True, remove_columns=["text", "label"] ) lm_dataset = tokenized_dataset.map(group_texts, batched=True) ``` Try several times if the bug doesn't appear instantly, or just each line at a time, ideally in a notebook/Colab, and you should get at some point: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-40-357a56ee3d53> in <module> ----> 1 lm_dataset = tokenized_dataset.map(group_texts, batched=True) ~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1947 new_fingerprint=new_fingerprint, 1948 disable_tqdm=disable_tqdm, -> 1949 desc=desc, 1950 ) 1951 else: ~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 424 } 425 # apply actual function --> 426 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 427 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 428 # re-apply format to the output ~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2138 if os.path.exists(cache_file_name) and load_from_cache_file: 2139 logger.warning("Loading cached processed dataset at %s", cache_file_name) -> 2140 info = self.info.copy() 2141 info.features = features 2142 return Dataset.from_file(cache_file_name, info=info, split=self.split) ~/git/datasets/src/datasets/info.py in copy(self) 278 279 def copy(self) -> "DatasetInfo": --> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) 281 282 ~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes) ~/git/datasets/src/datasets/info.py in __post_init__(self) 177 for idx, template in enumerate(self.task_templates): 178 if isinstance(template, TextClassification): --> 179 labels = self.features[template.label_column].names 180 self.task_templates[idx] = TextClassification( 181 text_column=template.text_column, label_column=template.label_column, labels=labels KeyError: 'label' ``` It seems that when loading the cache, the dataset tries to access some kind of text classification template (which I imagine comes from the original dataset) and looks up a key that has since been removed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3047/timeline
null
null
null
false
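A possible interim workaround for the bug above, pending the `@transmit_tasks` fix from PR #3059 listed earlier: clearing the task templates before mapping should keep `DatasetInfo.__post_init__` from looking up the removed `label` column when the cached copy is reloaded. This is an untested sketch built on the names from the report, not a confirmed fix:

```python
# hypothetical workaround: drop the text-classification template inherited
# from the original IMDB dataset before its "label" column is removed
imdb_dataset.info.task_templates = []

tokenized_dataset = imdb_dataset.map(
    tokenize_function, batched=True, remove_columns=["text", "label"]
)
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```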
https://api.github.com/repos/huggingface/datasets/issues/3046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3046/comments
https://api.github.com/repos/huggingface/datasets/issues/3046/events
https://github.com/huggingface/datasets/pull/3046
1,021,021,368
PR_kwDODunzps4s8MjS
3,046
Fix MedDialog metadata JSON
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,633,694,680,000
1,633,938,403,000
1,633,938,402,000
MEMBER
null
Fix #2969.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3046/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3046/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3046", "html_url": "https://github.com/huggingface/datasets/pull/3046", "diff_url": "https://github.com/huggingface/datasets/pull/3046.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3046.patch", "merged_at": 1633938402000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3045
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3045/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3045/comments
https://api.github.com/repos/huggingface/datasets/issues/3045/events
https://github.com/huggingface/datasets/pull/3045
1,020,968,704
PR_kwDODunzps4s8B2b
3,045
Fix inconsistent caching behaviour in Dataset.map() with multiprocessing #3044
{ "login": "vlievin", "id": 9859840, "node_id": "MDQ6VXNlcjk4NTk4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vlievin", "html_url": "https://github.com/vlievin", "followers_url": "https://api.github.com/users/vlievin/followers", "following_url": "https://api.github.com/users/vlievin/following{/other_user}", "gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}", "starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vlievin/subscriptions", "organizations_url": "https://api.github.com/users/vlievin/orgs", "repos_url": "https://api.github.com/users/vlievin/repos", "events_url": "https://api.github.com/users/vlievin/events{/privacy}", "received_events_url": "https://api.github.com/users/vlievin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for noticing this inconsistence and suggesting a fix :)\r\n\r\nIf I understand correctly you try to pass the same fingerprint to each processed shard of the dataset. This can be an issue since each shard is actually a different dataset with different data: they shouldn't have the same fingerprint.\r\n\r\nIdeally we want the result after `map` to have this fingerprint. The result after `map` is the concatenation of all the processed shards. In this case what we can do is add the `fingerprint` parameter to `concatenate_datasets` to overwrite the fingerprint here if needed:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L3588-L3590\r\n\r\nthen you can pass the fingerprint to `concatenate_datasets` here:\r\nhttps://github.com/huggingface/datasets/blob/03b7f123cc17afc517c0aa2f912bbd90cb266185/src/datasets/arrow_dataset.py#L2044-L2044", "Hi @lhoestq, thanks for the pointers! Not having a unique fingerprint per shard was indeed was indeed a problem. \r\n\r\nLet me look into this. I'll be back with a fix soon.", "Alright, to clarify about my problem. I using am using `datasets` with large datasets, and want to cache a heavy and non-deterministically fingerprintable function (using `datasets.fingerprint.Hasher`). Using `Dataset.map()` as it is would cause generating a random fingerprint. To circumvent this, I am generating custom deterministic fingerprints, which I pass as an argument to `Dataset.map()`. In that way, a deterministic fingerprint is set, and caching can be used. \r\n\r\nThis approach works well when using `num_proc==1`, but not so well when using `num_proc>1`. In both cases, `dataset._fingerprint` is effectively set to `new_fingerprint` at the end of the `.map()` call. However, caching is not used when `num_proc>1`, a non deterministically fingerprintable function and `new_fingerprint != null. The reason is that caching operates within `Dataset._map_single` and `new_fingerprint` is not passed here. \r\n\r\nThis pull request implements a quick fix (+unit test) by passing `new_fingerprint=f\"{new_fingerprint}-part{rank+1}-{num_proc}\"` to each `_map_single` call. Using a separate name for each call makes sure that each worker uses a different cache file (as you mentioned above).\r\n\r\nHowever, this solution still means that using a different value for `num_proc` will require computing new partial cache files. In the long run, performing the caching within `map()` instead of within `_map_single()` would be a cleaner solution.", "Hi @vlievin,\r\n\r\nIf I understand your example correctly, you are trying to use the `new_fingerprint` param to have a deterministic fingerprint of the transform, which is not hashable due to randomness. Any particular reason why you are not using the `cache_file_name` param instead? I did run your example with the `cache_file_name` specified, and it behaves as expected based on the logs. Internally, `new_fingerprint` is needed to inject the calculated fingerprint into a method by the `fingerprint_transform` decorator, which is then used to compute the cache file name in `Dataset._get_cache_file_path` if the user hasn't specified one. ", "Hi @lhoestq, I have cleaned up the unit test (incl. styling). It should be ready to merge as such. I am using this branch in my project and everything works fine. \r\n\r\nHi @mariosasko, the argument `new_fingerprint` allowed me to deterministically cache my transformation when using `num_proc=1`, so I assumed that was the right way to go. 
But maybe I have misinterpreted how `new_fingerprint` should be used.\r\n\r\nIn any case, `map()` should perform consistently with regard to `num_proc`. In my opinion, `Dataset.map()` should behave the same, without requiring the user to input `cache_file_name` when `num_proc>1` is set.\r\nBut maybe there is a more elegant way to fix this using `cache_file_name` internally for each `_single_map()` call.\r\n\r\nSo, I think this is a higher-level design decision and I will leave it to the maintainers :) ", "Hi @vlievin,\r\n\r\nI appreciate your effort, but `new_fingerprint` behaves as described in the `Dataset.map` docs, and we don't have to follow some artificial consistency in regards to `num_proc`:\r\nhttps://github.com/huggingface/datasets/blob/adc5cec58dd15ee672016086fefdea34b3143e4f/src/datasets/arrow_dataset.py#L1962-L1963\r\n\r\nAdditionally, to compute the cache file name, you are using a private method (`dset._get_cache_file_path(new_fingerprint)`, prefixed with `_`), so this is a sign you may be doing something wrong because you are relying on the internals. I suggest you use `cache_file_name` instead and follow the suffix template docs, which explain how to compute file paths of the created cache files when `num_proc > 1`.", "Hi @mariosasko, thanks for the pointer regarding the use of the private method in the unit tests. \r\n\r\nYes, `new_fingerprint` behaves as documented. If you don't think this is an issue, feel free to close this pull request. \r\n", "Allowing users to pass the fingerprint themselves for functions that can't be hashed would be a nice improvement. However, I agree that, as @mariosasko mentioned, this is currently not how we want the API to behave - since it has to do with the internals of the library.\r\n\r\nThough we can discuss what could be the right way of doing it in https://github.com/huggingface/datasets/issues/3044 if you don't mind !" ]
1,633,690,761,000
1,634,835,512,000
1,634,826,164,000
NONE
null
Fix #3044 1. A rough unit test that fails without the fix. It probably doesn't comply with your code standards, but it's just there to draft the idea. 2. A one-liner fix
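To make the idea concrete, here is a minimal sketch of the one-liner's core logic, as quoted in the discussion above; the helper name is ours for illustration and is not part of the actual diff.

```python
def shard_fingerprint(new_fingerprint: str, rank: int, num_proc: int) -> str:
    """Derive a unique, deterministic fingerprint for each worker shard,
    so every process in a multiprocessing map() writes its own cache file."""
    return f"{new_fingerprint}-part{rank + 1}-{num_proc}"

# Example: shard_fingerprint("my-transform-v1", rank=0, num_proc=4)
# returns "my-transform-v1-part1-4"
```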
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3045/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3045/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3045", "html_url": "https://github.com/huggingface/datasets/pull/3045", "diff_url": "https://github.com/huggingface/datasets/pull/3045.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3045.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3041
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3041/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3041/comments
https://api.github.com/repos/huggingface/datasets/issues/3041/events
https://github.com/huggingface/datasets/pull/3041
1,018,911,385
PR_kwDODunzps4s1ZAc
3,041
Load private data files + use glob on ZIP archives for json/csv/etc. module inference
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat the `fsspec` call in `xglob`:\r\n```python\r\nfs, *_ = fsspec.get_fs_token_paths(urlpath, storage_options=storage_options)\r\n```\r\n\r\nLooks like the windows CI has an SSL issue... ", "I can reproduce it on my windows machine. On linux it works fine though", "I'm just skipping the windows test for now", "The Windows CI failure seems unrelated to this PR\r\n```python\r\nERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3\r\n```" ]
1,633,544,196,000
1,634,052,348,000
1,634,052,346,000
MEMBER
null
As mentioned in https://github.com/huggingface/datasets/issues/3032, loading data files from a private repository isn't working correctly because of the data files resolver. #2986 did a refactor of the data files resolver. I added authentication to it. I also improved it to glob inside ZIP archives to look for json/csv/etc. files and infer which dataset builder (json/csv/etc.) to use. Fix https://github.com/huggingface/datasets/issues/3032 Note that #2986 needs to get merged first
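A sketch of the intended user-facing behaviour, assuming a private Hub repository (the repo id below is a placeholder) and the `use_auth_token` parameter of `load_dataset` available in this version of the library:

```python
from datasets import load_dataset

# Placeholder repo id; requires a prior `huggingface-cli login`.
# With this PR, the data files resolver authenticates its requests and
# also globs inside ZIP archives to infer the json/csv/etc. builder.
ds = load_dataset("username/my-private-dataset", use_auth_token=True)
```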
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3041/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3041/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3041", "html_url": "https://github.com/huggingface/datasets/pull/3041", "diff_url": "https://github.com/huggingface/datasets/pull/3041.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3041.patch", "merged_at": 1634052346000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3040/comments
https://api.github.com/repos/huggingface/datasets/issues/3040/events
https://github.com/huggingface/datasets/issues/3040
1,018,782,475
I_kwDODunzps48uWML
3,040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi,\r\n\r\nthe `save_to_disk` docstring explains that `flatten_indices` has to be called on a dataset before saving it to save only the shard/slice of the dataset.", "That works! Thansk!\r\n\r\nMight be worth doing that automatically actually in case the `save_to_disk` is called on a dataset that has an indices mapping :-)", "I agree with @patrickvonplaten: this issue is reported recurrently, so better if we implement the `.flatten_indices()` automatically?", "That would be great indeed - I don't really see a use case where one would not like to call `.flatten_indices()` before calling `save_to_disk`", "+1 on this !" ]
1,633,540,127,000
1,635,867,668,000
1,635,867,668,000
MEMBER
null
## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk, to later upload it to the hub for easy demo/use, not just the small slice is saved but the whole dataset along with an indices file. The problem with this is that the dataset is still very big. ## Steps to reproduce the bug E.g. run the following: ```python from datasets import load_dataset nlp = load_dataset("glue", "mnli", split="train") nlp.save_to_disk("full") nlp = nlp.select(range(100)) nlp.save_to_disk("dummy") ``` Now one can see that both `"dummy"` and `"full"` have the same size. This shouldn't be the case IMO. ## Expected results IMO `"dummy"` should be much smaller so that one can easily play around with the dataset on the hub. ## Environment info - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 5.0.0
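As the comments above point out, the documented workaround is to call `flatten_indices()` before saving, which materializes only the selected rows instead of the full table plus an indices file; a short sketch:

```python
from datasets import load_dataset

nlp = load_dataset("glue", "mnli", split="train")

# flatten_indices() rewrites the underlying Arrow table so that only the
# 100 selected rows are kept, keeping the saved "dummy" directory small.
small = nlp.select(range(100)).flatten_indices()
small.save_to_disk("dummy")
```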
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3040/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3039
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3039/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3039/comments
https://api.github.com/repos/huggingface/datasets/issues/3039/events
https://github.com/huggingface/datasets/pull/3039
1,018,219,800
PR_kwDODunzps4sy_J-
3,039
Add sberquad dataset
{ "login": "Alenush", "id": 13781234, "node_id": "MDQ6VXNlcjEzNzgxMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alenush", "html_url": "https://github.com/Alenush", "followers_url": "https://api.github.com/users/Alenush/followers", "following_url": "https://api.github.com/users/Alenush/following{/other_user}", "gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}", "starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Alenush/subscriptions", "organizations_url": "https://api.github.com/users/Alenush/orgs", "repos_url": "https://api.github.com/users/Alenush/repos", "events_url": "https://api.github.com/users/Alenush/events{/privacy}", "received_events_url": "https://api.github.com/users/Alenush/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,633,523,522,000
1,634,120,351,000
1,634,120,164,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3039/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3039/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3039", "html_url": "https://github.com/huggingface/datasets/pull/3039", "diff_url": "https://github.com/huggingface/datasets/pull/3039.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3039.patch", "merged_at": 1634120164000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3038/comments
https://api.github.com/repos/huggingface/datasets/issues/3038/events
https://github.com/huggingface/datasets/pull/3038
1,018,113,499
PR_kwDODunzps4syno_
3,038
add sberquad dataset
{ "login": "Alenush", "id": 13781234, "node_id": "MDQ6VXNlcjEzNzgxMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alenush", "html_url": "https://github.com/Alenush", "followers_url": "https://api.github.com/users/Alenush/followers", "following_url": "https://api.github.com/users/Alenush/following{/other_user}", "gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}", "starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Alenush/subscriptions", "organizations_url": "https://api.github.com/users/Alenush/orgs", "repos_url": "https://api.github.com/users/Alenush/repos", "events_url": "https://api.github.com/users/Alenush/events{/privacy}", "received_events_url": "https://api.github.com/users/Alenush/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,633,520,019,000
1,633,521,481,000
1,633,521,481,000
CONTRIBUTOR
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3038/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3038/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3038", "html_url": "https://github.com/huggingface/datasets/pull/3038", "diff_url": "https://github.com/huggingface/datasets/pull/3038.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3038.patch", "merged_at": null }
true