| column | dtype | values |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.07B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–3.39k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 1 value |
| locked | bool | 1 class |
| assignee | null | |
| assignees | sequence | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,639B |
| updated_at | int64 | 1,587B–1,639B |
| closed_at | int64 | 1,587B–1,639B |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0–228k (⌀) |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/3391
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3391/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3391/comments
https://api.github.com/repos/huggingface/datasets/issues/3391/events
https://github.com/huggingface/datasets/issues/3391
1,072,849,055
I_kwDODunzps4_8mCf
3,391
method to select columns
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "duplicate of #2655" ]
1,638,845,059,000
1,638,845,127,000
1,638,845,127,000
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** * There is currently no way to select some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets`, it results in error. **Describe the solution you'd like** * A new method that can be used to create a new dataset with only a list of specified columns. **Describe alternatives you've considered** `.remove_columns(self, columns: Union[str, List[str]], inverse: bool = False)` Or `.select(self, indices: Iterable = None, columns: List[str] = None)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3391/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3391/timeline
null
null
null
false
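The feature request in the row above (issue 3391) asks for a way to keep only a subset of columns. A minimal sketch of the workaround available at the time, using the existing `remove_columns` and `column_names` attributes of `datasets.Dataset` (the `keep` list is illustrative, not part of the issue):

```python
# Hedged sketch: emulate "select columns" by removing everything else.
from datasets import load_dataset

ds = load_dataset("squad", split="train")
keep = ["question", "answers"]  # illustrative columns to retain
ds = ds.remove_columns([col for col in ds.column_names if col not in keep])
print(ds.column_names)  # ['question', 'answers']
```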
https://api.github.com/repos/huggingface/datasets/issues/3390
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3390/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3390/comments
https://api.github.com/repos/huggingface/datasets/issues/3390/events
https://github.com/huggingface/datasets/issues/3390
1,072,462,456
I_kwDODunzps4_7Hp4
3,390
Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'"
{ "login": "R4ZZ3", "id": 25264037, "node_id": "MDQ6VXNlcjI1MjY0MDM3", "avatar_url": "https://avatars.githubusercontent.com/u/25264037?v=4", "gravatar_id": "", "url": "https://api.github.com/users/R4ZZ3", "html_url": "https://github.com/R4ZZ3", "followers_url": "https://api.github.com/users/R4ZZ3/followers", "following_url": "https://api.github.com/users/R4ZZ3/following{/other_user}", "gists_url": "https://api.github.com/users/R4ZZ3/gists{/gist_id}", "starred_url": "https://api.github.com/users/R4ZZ3/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/R4ZZ3/subscriptions", "organizations_url": "https://api.github.com/users/R4ZZ3/orgs", "repos_url": "https://api.github.com/users/R4ZZ3/repos", "events_url": "https://api.github.com/users/R4ZZ3/events{/privacy}", "received_events_url": "https://api.github.com/users/R4ZZ3/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Got solved it with push_to_hub, closing" ]
1,638,814,969,000
1,638,822,125,000
1,638,822,125,000
NONE
null
## Describe the bug I have prepared dataset to datasets and now I am trying to load it back Finnish-NLP/voxpopuli_fi I get "KeyError: 'Field "builder_name" does not exist in table schema'" My dataset folder and files should be like @patrickvonplaten has here https://huggingface.co/datasets/flax-community/german-common-voice-processed How my voxpopuli dataset looks like: ![image](https://user-images.githubusercontent.com/25264037/144895598-b7d9ae91-b04a-4046-9f06-b71ff0824d13.png) Part of the processing (path column is the absolute path to audio files) ``` def add_audio_column(example): example['audio'] = example['path'] return example voxpopuli = voxpopuli.map(add_audio_column) voxpopuli.cast_column("audio", Audio()) voxpopuli["audio"] <-- to my knowledge this does load the local files and prepares those arrays voxpopuli = voxpopuli.cast_column("audio", Audio(sampling_rate=16_000)) resampling 16kHz ``` I have then saved it to disk_ `voxpopuli.save_to_disk('/asr_disk/datasets_processed_new/voxpopuli')` and made folder structure same as @patrickvonplaten I also get same error while trying to load_dataset from his repo: ![image](https://user-images.githubusercontent.com/25264037/144895872-e9b8f326-cf2b-46cf-9417-606a0ce14077.png) ## Steps to reproduce the bug ```python dataset = load_dataset("Finnish-NLP/voxpopuli_fi") ``` ## Expected results Dataset is loaded correctly and looks like in the first picture ## Actual results Loading throws keyError: KeyError: 'Field "builder_name" does not exist in table schema' Resources I have been trying to follow: https://huggingface.co/docs/datasets/audio_process.html https://huggingface.co/docs/datasets/share_dataset.html ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.2.dev0 - Platform: Ubuntu 20.04.2 LTS - Python version: 3.8.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3390/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3390/timeline
null
null
null
false
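The report above (issue 3390) saves a dataset with `save_to_disk` and then tries to read it back with `load_dataset`, which expects a Hub repository or loading script. As a hedged sketch (the thread itself was resolved with `push_to_hub`), a dataset written with `save_to_disk` is normally reloaded with `load_from_disk`; the path below is the one quoted in the issue:

```python
# Hedged sketch: reload a dataset that was written with save_to_disk.
from datasets import load_from_disk

voxpopuli = load_from_disk("/asr_disk/datasets_processed_new/voxpopuli")
print(voxpopuli)
```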
https://api.github.com/repos/huggingface/datasets/issues/3388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3388/comments
https://api.github.com/repos/huggingface/datasets/issues/3388/events
https://github.com/huggingface/datasets/pull/3388
1,072,022,021
PR_kwDODunzps4vbnyY
3,388
Fix flaky test of the temporary directory used by load_from_disk
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "CI failed because of a server error - merging" ]
1,638,788,971,000
1,638,789,903,000
1,638,789,889,000
MEMBER
null
The test is flaky, here is an example of random CI failure: https://github.com/huggingface/datasets/commit/73ed6615b4b3eb74d5311684f7b9e05cdb76c989 I fixed that by not checking the content of the random part of the temporary directory name
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3388/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3388", "html_url": "https://github.com/huggingface/datasets/pull/3388", "diff_url": "https://github.com/huggingface/datasets/pull/3388.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3388.patch", "merged_at": 1638789889000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3386/comments
https://api.github.com/repos/huggingface/datasets/issues/3386/events
https://github.com/huggingface/datasets/pull/3386
1,071,813,141
PR_kwDODunzps4va7-2
3,386
Fix typos in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,775,240,000
1,638,783,055,000
1,638,783,054,000
MEMBER
null
This PR: - Fix typos in dataset cards - Fix Papers With Code ID for: - Bilingual Corpus of Arabic-English Parallel Tweets - Tweets Hate Speech Detection - Add pretty name tags
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3386/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3386", "html_url": "https://github.com/huggingface/datasets/pull/3386", "diff_url": "https://github.com/huggingface/datasets/pull/3386.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3386.patch", "merged_at": 1638783054000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3381/comments
https://api.github.com/repos/huggingface/datasets/issues/3381/events
https://github.com/huggingface/datasets/issues/3381
1,071,283,879
I_kwDODunzps4_2n6n
3,381
Unable to load audio_features from common_voice dataset
{ "login": "ashu5644", "id": 8268102, "node_id": "MDQ6VXNlcjgyNjgxMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashu5644", "html_url": "https://github.com/ashu5644", "followers_url": "https://api.github.com/users/ashu5644/followers", "following_url": "https://api.github.com/users/ashu5644/following{/other_user}", "gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}", "starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions", "organizations_url": "https://api.github.com/users/ashu5644/orgs", "repos_url": "https://api.github.com/users/ashu5644/repos", "events_url": "https://api.github.com/users/ashu5644/events{/privacy}", "received_events_url": "https://api.github.com/users/ashu5644/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Feel free to access `batch[\"audio\"][\"array\"]` and `batch[\"audio\"][\"sampling_rate\"]` instead\r\n\r\n`datasets` 1.16 introduced some changes in `common_voice` and now the `path` field is no longer a path to a local file (but rather the path to the file in the archive it's extracted from)", "Thanks for the information. It works.", "Cool ! Closing this issue then" ]
1,638,647,951,000
1,638,813,162,000
1,638,813,162,000
NONE
null
## Describe the bug I am not able to load audio features from common_voice dataset ## Steps to reproduce the bug ``` from datasets import load_dataset import torchaudio test_dataset = load_dataset("common_voice", "hi", split="test[:2%]") resampler = torchaudio.transforms.Resample(48_000, 16_000) def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) ``` ## Expected results This piece of code should return test_dataset after loading audio features. ## Actual results Reusing dataset common_voice (/home/jovyan/.cache/huggingface/datasets/common_voice/hi/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1) /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`. "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 " Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. 0%| | 0/3 [00:00<?, ?ex/s]formats: can't open input file `common_voice_hi_23795358.mp3': No such file or directory 0%| | 0/3 [00:00<?, ?ex/s] Traceback (most recent call last): File "demo_file.py", line 23, in <module> test_dataset = test_dataset.map(speech_file_to_array_fn) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2036, in map desc=desc, File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 485, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 411, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2368, in _map_single example = apply_function_on_filtered_inputs(example, i, offset=offset) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1978, in decorated result = f(decorated_item, *args, **kwargs) File "demo_file.py", line 19, in speech_file_to_array_fn speech_array, sampling_rate = torchaudio.load(batch["path"]) File "/opt/conda/lib/python3.7/site-packages/torchaudio/backend/sox_io_backend.py", line 154, in load filepath, frame_offset, num_frames, normalize, channels_first, format) RuntimeError: Error loading audio file: failed to open file common_voice_hi_23795358.mp3 ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-4.14.243 with-debian-bullseye-sid - Python version: 3.7.9 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3381/timeline
null
null
null
false
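The maintainer's comment in the row above (issue 3381) points to the decoded `audio` field rather than the `path` string. A minimal sketch of that suggestion, assuming `datasets>=1.16` where `common_voice` exposes an `Audio` feature:

```python
# Hedged sketch of the suggested fix: use the decoded audio arrays instead of
# opening batch["path"] with torchaudio.
from datasets import load_dataset, Audio

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))

def speech_file_to_array_fn(batch):
    batch["speech"] = batch["audio"]["array"]                  # decoded samples
    batch["sampling_rate"] = batch["audio"]["sampling_rate"]   # 16 kHz after the cast
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```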
https://api.github.com/repos/huggingface/datasets/issues/3370
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3370/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3370/comments
https://api.github.com/repos/huggingface/datasets/issues/3370/events
https://github.com/huggingface/datasets/pull/3370
1,069,735,423
PR_kwDODunzps4vUVA3
3,370
Document a training loop for streaming dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,461,820,000
1,638,538,475,000
1,638,538,474,000
MEMBER
null
I added some docs about streaming dataset. In particular I added two subsections: - one on how to use `map` for preprocessing - one on how to use a streaming dataset in a pytorch training loop cc @patrickvonplaten @stevhliu if you have some comments cc @Rocketknight1 later we can add the one for TF and I might need your help ^^'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3370/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3370", "html_url": "https://github.com/huggingface/datasets/pull/3370", "diff_url": "https://github.com/huggingface/datasets/pull/3370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3370.patch", "merged_at": 1638538474000 }
true
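PR 3370 above documents preprocessing and a PyTorch training loop for streaming datasets. A hedged, minimal sketch of that pattern (the dataset name, the `map` function, and the manual batching are illustrative and not taken from the PR):

```python
# Hedged sketch: lazy map + buffered shuffle on a streaming dataset, consumed
# in a plain Python loop with manual batching.
from datasets import load_dataset

ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])})  # applied on the fly
ds = ds.shuffle(seed=42, buffer_size=10_000)          # approximate shuffling

batch, batch_size = [], 32
for example in ds:
    batch.append(example)
    if len(batch) == batch_size:
        # the forward/backward pass of the training loop would go here
        batch = []
```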
https://api.github.com/repos/huggingface/datasets/issues/3368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3368/comments
https://api.github.com/repos/huggingface/datasets/issues/3368/events
https://github.com/huggingface/datasets/pull/3368
1,069,403,624
PR_kwDODunzps4vTObo
3,368
Fix dict source_datasets tagset validator
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,442,340,000
1,638,460,118,000
1,638,460,117,000
MEMBER
null
Currently, the `source_datasets` tag validation does not support passing a dict with configuration keys. This PR: - Extends `tagset_validator` to support regex tags - Uses `tagset_validator` to validate dict `source_datasets`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3368/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3368", "html_url": "https://github.com/huggingface/datasets/pull/3368", "diff_url": "https://github.com/huggingface/datasets/pull/3368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3368.patch", "merged_at": 1638460117000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3367/comments
https://api.github.com/repos/huggingface/datasets/issues/3367/events
https://github.com/huggingface/datasets/pull/3367
1,069,241,274
PR_kwDODunzps4vSsfk
3,367
Fix typo in other-structured-to-text task tag
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,432,147,000
1,638,461,234,000
1,638,461,233,000
MEMBER
null
Fix typo in task tag: - `other-stuctured-to-text` (before) - `other-structured-to-text` (now)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3367/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3367", "html_url": "https://github.com/huggingface/datasets/pull/3367", "diff_url": "https://github.com/huggingface/datasets/pull/3367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3367.patch", "merged_at": 1638461233000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3363
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3363/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3363/comments
https://api.github.com/repos/huggingface/datasets/issues/3363/events
https://github.com/huggingface/datasets/pull/3363
1,068,824,340
PR_kwDODunzps4vRVCl
3,363
Update URL of Jeopardy! dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing this PR in favor of #3266." ]
1,638,389,290,000
1,638,534,901,000
1,638,534,901,000
CONTRIBUTOR
null
Updates the URL of the Jeopardy! dataset. Fix #3361
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3363/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3363/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3363", "html_url": "https://github.com/huggingface/datasets/pull/3363", "diff_url": "https://github.com/huggingface/datasets/pull/3363.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3363.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3361/comments
https://api.github.com/repos/huggingface/datasets/issues/3361/events
https://github.com/huggingface/datasets/issues/3361
1,068,736,268
I_kwDODunzps4_s58M
3,361
Jeopardy _URL access denied
{ "login": "tianjianjiang", "id": 4812544, "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianjianjiang", "html_url": "https://github.com/tianjianjiang", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,638,382,893,000
1,638,789,391,000
1,638,789,391,000
NONE
null
## Describe the bug http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz returns Access Denied now. However, https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?usp=sharing from the original Reddit post https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/ may work. ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. ``` ```python >>> from datasets import load_dataset >>> load_dataset("jeopardy") ``` ## Expected results The download completes. ## Actual results ```shell Downloading: 4.18kB [00:00, 1.60MB/s] Downloading: 2.03kB [00:00, 1.04MB/s] Using custom data configuration default Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /Users/mike/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators filepath = dl_manager.download_and_extract(_DATA_URL) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path use_auth_token=download_config.use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` --- 
```shell > curl http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` ```xml <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>70Y9R36XNPEQXMGV</RequestId><HostId>G6F5AK4qo7JdaEdKGMtS0P6gdLPeFOdEfSEfvTOZEfk9km0/jAfp08QLfKSTFFj1oWIKoAoBehM=</HostId></Error> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: macOS Catalina 10.15.7 - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3361/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3360/comments
https://api.github.com/repos/huggingface/datasets/issues/3360/events
https://github.com/huggingface/datasets/pull/3360
1,068,724,697
PR_kwDODunzps4vQ_16
3,360
Add The Pile USPTO subset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,382,085,000
1,638,531,929,000
1,638,531,928,000
MEMBER
null
Add: - USPTO subset of The Pile: "uspto" config Close bigscience-workshop/data_tooling#297. CC: @StellaAthena
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3360/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3360", "html_url": "https://github.com/huggingface/datasets/pull/3360", "diff_url": "https://github.com/huggingface/datasets/pull/3360.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3360.patch", "merged_at": 1638531927000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3359/comments
https://api.github.com/repos/huggingface/datasets/issues/3359/events
https://github.com/huggingface/datasets/pull/3359
1,068,638,213
PR_kwDODunzps4vQtI0
3,359
Add The Pile Free Law subset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@albertvillanova Is there a specific reason youโ€™re adding the Pile under โ€œtheโ€ instead of under โ€œpileโ€? That does not appear to be consistent with other datasets.", "Hi @StellaAthena,\r\n\r\nI asked myself the same question, but at the end I decided to be consistent with previously added Pile subsets:\r\n- #2817\r\n\r\nI guess the reason is to stress that the definite article is always used before the name of the dataset (your site says: \"The Pile. An 800GB Dataset of Diverse Text for Language Modeling\"). Other datasets are not usually preceded by the definite article, like \"the SQuAD\" or \"the GLUE\" or \"the Common Voice\"...\r\n\r\nCC: @lhoestq ", "> I guess the reason is to stress that the definite article is always used before the name of the dataset (your site says: \"The Pile. An 800GB Dataset of Diverse Text for Language Modeling\").\r\n\r\nYes that's because of this that it starts with \"the\"" ]
1,638,377,164,000
1,638,785,537,000
1,638,379,844,000
MEMBER
null
Add: - Free Law subset of The Pile: "free_law" config Close bigscience-workshop/data_tooling#75. CC: @StellaAthena
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3359/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3359", "html_url": "https://github.com/huggingface/datasets/pull/3359", "diff_url": "https://github.com/huggingface/datasets/pull/3359.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3359.patch", "merged_at": 1638379843000 }
true
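The two PRs above (3360 and 3359) add the "uspto" and "free_law" configurations of The Pile. A heavily hedged sketch of loading one of them; the "the_pile" script name and streaming support are assumptions on the editor's part, not stated in the PR text:

```python
# Hedged sketch: load the Free Law subset added in this PR.
# NOTE: "the_pile" as the dataset name and streaming support are assumptions.
from datasets import load_dataset

free_law = load_dataset("the_pile", "free_law", split="train", streaming=True)
print(next(iter(free_law)))
```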
https://api.github.com/repos/huggingface/datasets/issues/3358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3358/comments
https://api.github.com/repos/huggingface/datasets/issues/3358/events
https://github.com/huggingface/datasets/issues/3358
1,068,623,216
I_kwDODunzps4_seVw
3,358
add new field, and get errors
{ "login": "yanllearnn", "id": 38966558, "node_id": "MDQ6VXNlcjM4OTY2NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanllearnn", "html_url": "https://github.com/yanllearnn", "followers_url": "https://api.github.com/users/yanllearnn/followers", "following_url": "https://api.github.com/users/yanllearnn/following{/other_user}", "gists_url": "https://api.github.com/users/yanllearnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanllearnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanllearnn/subscriptions", "organizations_url": "https://api.github.com/users/yanllearnn/orgs", "repos_url": "https://api.github.com/users/yanllearnn/repos", "events_url": "https://api.github.com/users/yanllearnn/events{/privacy}", "received_events_url": "https://api.github.com/users/yanllearnn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi, \r\n\r\ncould you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests? ", "> Hi,\r\n> \r\n> could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests?\r\n\r\nok." ]
1,638,376,538,000
1,638,411,982,000
1,638,411,982,000
NONE
null
after adding new field **tokenized_examples["example_id"]**, and get errors below, I think it is due to changing data to tensor, and **tokenized_examples["example_id"]** is string list **all fields** ``` ***************** train_dataset 1: Dataset({ features: ['attention_mask', 'end_positions', 'example_id', 'input_ids', 'start_positions', 'token_type_ids'], num_rows: 87714 }) ``` **Errors** ``` Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors tensor = as_tensor(value) ValueError: too many dimensions 'str' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3358/timeline
null
null
null
false
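The traceback above (issue 3358) comes from trying to turn the string column `example_id` into a tensor. One hedged workaround, separate from the resolution recorded in issue 3353 below, is to keep string columns out of the tensor-formatted view of the issue's `train_dataset`:

```python
# Hedged sketch: format only the numeric columns as tensors; "example_id"
# stays a plain Python string thanks to output_all_columns=True.
tensor_columns = ["input_ids", "attention_mask", "token_type_ids",
                  "start_positions", "end_positions"]
train_dataset.set_format(type="torch", columns=tensor_columns,
                         output_all_columns=True)
```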
https://api.github.com/repos/huggingface/datasets/issues/3354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3354/comments
https://api.github.com/repos/huggingface/datasets/issues/3354/events
https://github.com/huggingface/datasets/pull/3354
1,068,307,271
PR_kwDODunzps4vPl9d
3,354
Remove duplicate name from dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,359,140,000
1,638,364,470,000
1,638,364,469,000
MEMBER
null
Remove duplicate name from dataset card for: - ajgt_twitter_ar - emotone_ar
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3354/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3354", "html_url": "https://github.com/huggingface/datasets/pull/3354", "diff_url": "https://github.com/huggingface/datasets/pull/3354.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3354.patch", "merged_at": 1638364469000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3353
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3353/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3353/comments
https://api.github.com/repos/huggingface/datasets/issues/3353/events
https://github.com/huggingface/datasets/issues/3353
1,068,173,783
I_kwDODunzps4_qwnX
3,353
add one field "example_id", but I can't see it in the "comput_loss" function
{ "login": "yanllearnn", "id": 38966558, "node_id": "MDQ6VXNlcjM4OTY2NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanllearnn", "html_url": "https://github.com/yanllearnn", "followers_url": "https://api.github.com/users/yanllearnn/followers", "following_url": "https://api.github.com/users/yanllearnn/following{/other_user}", "gists_url": "https://api.github.com/users/yanllearnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanllearnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanllearnn/subscriptions", "organizations_url": "https://api.github.com/users/yanllearnn/orgs", "repos_url": "https://api.github.com/users/yanllearnn/repos", "events_url": "https://api.github.com/users/yanllearnn/events{/privacy}", "received_events_url": "https://api.github.com/users/yanllearnn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Your function looks fine, I used to map `squad` locally and it indeed added the `example_id` field correctly.\r\n\r\nHowever I think that in the `compute_loss` method only a subset of the fields are available: the model inputs. Since `example_id` is not a model input (it's not passed as a parameter to the model), the data loader doesn't need to return it by default.\r\n\r\nHowever you can disable this behavior by setting `remove_unused_columns` to `False` to your training arguments. In this case in `compute_loss` you will get the full item with all the fields.\r\n\r\nNote that since the model doesn't take `example_id` as input, you will have to remove it from the inputs when `model(**inputs)` is called", "Hi, I have set **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**, but the field doesn't been contained yet.\r\n```\r\ndef main():\r\n argp = HfArgumentParser(TrainingArguments)\r\n # The HfArgumentParser object collects command-line arguments into an object (and provides default values for unspecified arguments).\r\n # In particular, TrainingArguments has several keys that you'll need/want to specify (when you call run.py from the command line):\r\n # --do_train\r\n # When included, this argument tells the script to train a model.\r\n # See docstrings for \"--task\" and \"--dataset\" for how the training dataset is selected.\r\n # --do_eval\r\n # When included, this argument tells the script to evaluate the trained/loaded model on the validation split of the selected dataset.\r\n # --per_device_train_batch_size <int, default=8>\r\n # This is the training batch size.\r\n # If you're running on GPU, you should try to make this as large as you can without getting CUDA out-of-memory errors.\r\n # For reference, with --max_length=128 and the default ELECTRA-small model, a batch size of 32 should fit in 4gb of GPU memory.\r\n # --num_train_epochs <float, default=3.0>\r\n # How many passes to do through the training data.\r\n # --output_dir <path>\r\n # Where to put the trained model checkpoint(s) and any eval predictions.\r\n # *This argument is required*.\r\n\r\n argp.add_argument('--model', type=str,\r\n default='google/electra-small-discriminator',\r\n help=\"\"\"This argument specifies the base model to fine-tune.\r\n This should either be a HuggingFace model ID (see https://huggingface.co/models)\r\n or a path to a saved model checkpoint (a folder containing config.json and pytorch_model.bin).\"\"\")\r\n argp.add_argument('--task', type=str, choices=['nli', 'qa'], required=True,\r\n help=\"\"\"This argument specifies which task to train/evaluate on.\r\n Pass \"nli\" for natural language inference or \"qa\" for question answering.\r\n By default, \"nli\" will use the SNLI dataset, and \"qa\" will use the SQuAD dataset.\"\"\")\r\n argp.add_argument('--dataset', type=str, default=None,\r\n help=\"\"\"This argument overrides the default dataset used for the specified task.\"\"\")\r\n argp.add_argument('--max_length', type=int, default=128,\r\n help=\"\"\"This argument limits the maximum sequence length used during training/evaluation.\r\n Shorter sequence lengths need less memory and computation time, but some examples may end up getting truncated.\"\"\")\r\n argp.add_argument('--max_train_samples', type=int, default=None,\r\n help='Limit the number of examples to train on.')\r\n argp.add_argument('--max_eval_samples', type=int, default=None,\r\n help='Limit the number of examples to evaluate on.')\r\n\r\n argp.remove_unused_columns = False\r\n 
training_args, args = argp.parse_args_into_dataclasses()\r\n args.remove_unused_columns=False\r\n training_args.remove_unused_columns=False\r\n```\r\n\r\n\r\n```\r\n**************** train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n**************** train_dataset_featurized: Dataset({\r\n features: ['attention_mask', 'end_positions', 'input_ids', 'start_positions', 'token_type_ids'],\r\n num_rows: 87714\r\n})\r\n```", "Hi, I print the value, all are set to False, but don't work.\r\n```\r\n********************* training_args: TrainingArguments(\r\n_n_gpu=1,\r\nadafactor=False,\r\nadam_beta1=0.9,\r\nadam_beta2=0.999,\r\nadam_epsilon=1e-08,\r\ndataloader_drop_last=False,\r\ndataloader_num_workers=0,\r\ndataloader_pin_memory=True,\r\nddp_find_unused_parameters=None,\r\ndebug=[],\r\ndeepspeed=None,\r\ndisable_tqdm=False,\r\ndo_eval=False,\r\ndo_predict=False,\r\ndo_train=True,\r\neval_accumulation_steps=None,\r\neval_steps=None,\r\nevaluation_strategy=IntervalStrategy.NO,\r\nfp16=False,\r\nfp16_backend=auto,\r\nfp16_full_eval=False,\r\nfp16_opt_level=O1,\r\ngradient_accumulation_steps=1,\r\ngreater_is_better=None,\r\ngroup_by_length=False,\r\nignore_data_skip=False,\r\nlabel_names=None,\r\nlabel_smoothing_factor=0.0,\r\nlearning_rate=5e-05,\r\nlength_column_name=length,\r\nload_best_model_at_end=False,\r\nlocal_rank=-1,\r\nlog_level=-1,\r\nlog_level_replica=-1,\r\nlog_on_each_node=True,\r\nlogging_dir=./re_trained_model/runs/Dec01_14-15-08_399b9290604c,\r\nlogging_first_step=False,\r\nlogging_steps=500,\r\nlogging_strategy=IntervalStrategy.STEPS,\r\nlr_scheduler_type=SchedulerType.LINEAR,\r\nmax_grad_norm=1.0,\r\nmax_steps=-1,\r\nmetric_for_best_model=None,\r\nmp_parameters=,\r\nno_cuda=False,\r\nnum_train_epochs=3.0,\r\noutput_dir=./re_trained_model,\r\noverwrite_output_dir=False,\r\npast_index=-1,\r\nper_device_eval_batch_size=8,\r\nper_device_train_batch_size=8,\r\nprediction_loss_only=False,\r\npush_to_hub=False,\r\npush_to_hub_model_id=re_trained_model,\r\npush_to_hub_organization=None,\r\npush_to_hub_token=None,\r\nremove_unused_columns=False,\r\nreport_to=['tensorboard'],\r\nresume_from_checkpoint=None,\r\nrun_name=./re_trained_model,\r\nsave_on_each_node=False,\r\nsave_steps=500,\r\nsave_strategy=IntervalStrategy.STEPS,\r\nsave_total_limit=None,\r\nseed=42,\r\nsharded_ddp=[],\r\nskip_memory_metrics=True,\r\ntpu_metrics_debug=False,\r\ntpu_num_cores=None,\r\nuse_legacy_prediction_loop=False,\r\nwarmup_ratio=0.0,\r\nwarmup_steps=0,\r\nweight_decay=0.0,\r\n)\r\n```\r\n```\r\n********************* args: Namespace(dataset='squad', max_eval_samples=None, max_length=128, max_train_samples=None, model='google/electra-small-discriminator', remove_unused_columns=False, task='qa')\r\n2021-12-01 14:15:10,048 - WARNING - datasets.builder - Reusing dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)\r\nSome weights of the model checkpoint at google/electra-small-discriminator were not used when initializing ElectraForQuestionAnswering: ['discriminator_predictions.dense_prediction.weight', 'discriminator_predictions.dense_prediction.bias', 'discriminator_predictions.dense.weight', 'discriminator_predictions.dense.bias']\r\n- This IS expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. 
initializing a BertForSequenceClassification model from a BertForPreTraining model).\r\n- This IS NOT expected if you are initializing ElectraForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).\r\nSome weights of ElectraForQuestionAnswering were not initialized from the model checkpoint at google/electra-small-discriminator and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nPreprocessing data... (this takes a little bit, should only happen once per dataset)\r\n```", "Hmmm, it might be because the default data collator removes all the fields with `string` type:\r\n\r\nhttps://github.com/huggingface/transformers/blob/4c0dd199c8305903564c2edeae23d294edd4b321/src/transformers/data/data_collator.py#L107-L112\r\n\r\nI guess you also need a custom data collator that doesn't remove them.", "can you give a tutorial about how to do this?", "I overwrite **get_train_dataloader**, and remove **_remove_unused_columns**, but it doesn't work.\r\n\r\n```\r\n def get_train_dataloader(self) -> DataLoader:\r\n \"\"\"\r\n Returns the training :class:`~torch.utils.data.DataLoader`.\r\n\r\n Will use no sampler if :obj:`self.train_dataset` does not implement :obj:`__len__`, a random sampler (adapted\r\n to distributed training if necessary) otherwise.\r\n\r\n Subclass and override this method if you want to inject some custom behavior.\r\n \"\"\"\r\n if self.train_dataset is None:\r\n raise ValueError(\"Trainer: training requires a train_dataset.\")\r\n\r\n train_dataset = self.train_dataset\r\n # if is_datasets_available() and isinstance(train_dataset, datasets.Dataset):\r\n # train_dataset = self._remove_unused_columns(train_dataset, description=\"training\")\r\n\r\n if isinstance(train_dataset, torch.utils.data.IterableDataset):\r\n if self.args.world_size > 1:\r\n train_dataset = IterableDatasetShard(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_processes=self.args.world_size,\r\n process_index=self.args.process_index,\r\n )\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n collate_fn=self.data_collator,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n\r\n train_sampler = self._get_train_sampler()\r\n\r\n return DataLoader(\r\n train_dataset,\r\n batch_size=self.args.train_batch_size,\r\n sampler=train_sampler,\r\n collate_fn=self.data_collator,\r\n drop_last=self.args.dataloader_drop_last,\r\n num_workers=self.args.dataloader_num_workers,\r\n pin_memory=self.args.dataloader_pin_memory,\r\n )\r\n```", "Hi, it works now, thank you.\r\n1. **args.remove_unused_columns=False** and **training_args.remove_unused_columns=False**\r\n2. overwrite **get_train_dataloader**, and remove **_remove_unused_columns**\r\n3. add new fields, and can be got in **inputs**. " ]
1,638,351,309,000
1,638,374,559,000
1,638,374,559,000
NONE
null
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs ``` *********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2106, ..., 0, 0, 0], ..., [ 101, 2339, 2001, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} ``` ``` # This function preprocesses a question answering dataset, tokenizing the question and context text # and finding the right offsets for the answer spans in the tokenized context (to use as labels). # Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py def prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None): questions = [q.lstrip() for q in examples["question"]] max_seq_length = tokenizer.model_max_length # tokenize both questions and the corresponding context # if the context length is longer than max_length, we split it to several # chunks of max_length tokenized_examples = tokenizer( questions, examples["context"], truncation="only_second", max_length=max_seq_length, stride=min(max_seq_length // 2, 128), return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length" ) # Since one example might give us several features if it has a long context, # we need a map from a feature to its corresponding example. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position # in the original context. This will help us compute the start_positions # and end_positions to get the final answer string. offset_mapping = tokenized_examples.pop("offset_mapping") tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] tokenized_examples["example_id"] = [] for i, offsets in enumerate(offset_mapping): input_ids = tokenized_examples["input_ids"][i] # We will label features not containing the answer the index of the CLS token. cls_index = input_ids.index(tokenizer.cls_token_id) sequence_ids = tokenized_examples.sequence_ids(i) # from the feature idx to sample idx sample_index = sample_mapping[i] # get the answer for a feature answers = examples["answers"][sample_index] tokenized_examples["example_id"].append(examples["id"][sample_index]) if len(answers["answer_start"]) == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != 1: token_start_index += 1 # End token index of the current span in the text. 
token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != 1: token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). while token_start_index < len(offsets) and \ offsets[token_start_index][0] <= start_char: token_start_index += 1 tokenized_examples["start_positions"].append( token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples ``` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/3333#issuecomment-983457161_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3353/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3352/comments
https://api.github.com/repos/huggingface/datasets/issues/3352/events
https://github.com/huggingface/datasets/pull/3352
1,068,102,994
PR_kwDODunzps4vO6uZ
3,352
Make LABR dataset streamable
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,346,947,000
1,638,355,742,000
1,638,355,741,000
MEMBER
null
Fix LABR dataset to make it streamable. Related to: #3350.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3352/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3352/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3352", "html_url": "https://github.com/huggingface/datasets/pull/3352", "diff_url": "https://github.com/huggingface/datasets/pull/3352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3352.patch", "merged_at": 1638355741000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3350/comments
https://api.github.com/repos/huggingface/datasets/issues/3350/events
https://github.com/huggingface/datasets/pull/3350
1,068,078,160
PR_kwDODunzps4vO1aj
3,350
Avoid content-encoding issue while streaming datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,345,408,000
1,638,346,501,000
1,638,346,500,000
MEMBER
null
This PR will fix streaming of datasets served with gzip content-encoding: ``` ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` Fix #2918. CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3350/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3350", "html_url": "https://github.com/huggingface/datasets/pull/3350", "diff_url": "https://github.com/huggingface/datasets/pull/3350.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3350.patch", "merged_at": 1638346500000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3347/comments
https://api.github.com/repos/huggingface/datasets/issues/3347/events
https://github.com/huggingface/datasets/pull/3347
1,067,738,902
PR_kwDODunzps4vNthw
3,347
iter_archive for zip files
{ "login": "Mehdi2402", "id": 56029953, "node_id": "MDQ6VXNlcjU2MDI5OTUz", "avatar_url": "https://avatars.githubusercontent.com/u/56029953?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehdi2402", "html_url": "https://github.com/Mehdi2402", "followers_url": "https://api.github.com/users/Mehdi2402/followers", "following_url": "https://api.github.com/users/Mehdi2402/following{/other_user}", "gists_url": "https://api.github.com/users/Mehdi2402/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehdi2402/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehdi2402/subscriptions", "organizations_url": "https://api.github.com/users/Mehdi2402/orgs", "repos_url": "https://api.github.com/users/Mehdi2402/repos", "events_url": "https://api.github.com/users/Mehdi2402/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehdi2402/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "And also don't always try streaming with Google Drive - it can have issues because of how Google Drive works (with quotas, restrictions, etc.) and it can indeed cause `BlockSizeError`.\r\n\r\nFeel free to host your test data elsewhere, such as in a dataset repository on https://huggingface.co (see [here](https://huggingface.co/docs/datasets/upload_dataset.html#upload-your-files) for a tutorial on how to upload files)" ]
1,638,311,657,000
1,638,577,342,000
1,638,577,331,000
CONTRIBUTOR
null
* In this PR, I added the option to iterate through zipfiles for `download_manager.py` only. * Next PR will be the same applied to `streaming_download_manager.py`. * Related issue #3272. ## Comments : * There is no `.isreg()` equivalent in zipfile library to check if file is Regular so I used `.is_dir()` instead to skip directories. * For now I got `streaming_download_manager.py` working for local zip files, but not for urls. I get the following error when I test it on an archive in google drive, so still working on it. `BlockSizeError: Got more bytes so far (>2112) than requested (22)` ## Tasks : - [x] download_manager.py - [ ] streaming_download_manager.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3347/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3347/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3347", "html_url": "https://github.com/huggingface/datasets/pull/3347", "diff_url": "https://github.com/huggingface/datasets/pull/3347.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3347.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3346/comments
https://api.github.com/repos/huggingface/datasets/issues/3346/events
https://github.com/huggingface/datasets/issues/3346
1,067,632,365
I_kwDODunzps4_osbt
3,346
Failed to convert `string` with pyarrow for QED since 1.15.0
{ "login": "tianjianjiang", "id": 4812544, "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianjianjiang", "html_url": "https://github.com/tianjianjiang", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Scratch that, probably the old and incompatible usage of dataset builder from promptsource.", "Actually, re-opening this issue cause the error persists\r\n\r\n```python\r\n>>> load_dataset(\"qed\")\r\nDownloading and preparing dataset qed/qed (download: 13.43 MiB, generated: 9.70 MiB, post-processed: Unknown size, total: 23.14 MiB) to /home/victor_huggingface_co/.cache/huggingface/datasets/qed/qed/1.0.0/47d8b6f033393aa520a8402d4baf2d6bdc1b2fbde3dc156e595d2ef34caf7d75...\r\n100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 2228.64it/s]\r\nTraceback (most recent call last): \r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/load.py\", line 1669, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 594, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 681, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/builder.py\", line 1083, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 468, in finalize\r\n self.write_examples_on_file()\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 339, in write_examples_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow/array.pxi\", line 229, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages/datasets/arrow_writer.py\", line 125, in __arrow_array__\r\n out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type)\r\n File \"pyarrow/array.pxi\", line 315, in pyarrow.lib.array\r\n File \"pyarrow/array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Could not convert 'in' with type str: tried to convert to boolean\r\n```\r\n\r\nEnvironment (datasets and pyarrow):\r\n\r\n```bash\r\n(promptsource) victor_huggingface_co@victor-dev:~/promptsource$ datasets-cli env\r\n\r\nCopy-and-paste the text below in your GitHub issue.\r\n\r\n- `datasets` version: 1.16.1\r\n- Platform: Linux-5.0.0-1020-gcp-x86_64-with-debian-buster-sid\r\n- Python version: 3.7.11\r\n- PyArrow version: 6.0.1\r\n```\r\n```bash\r\n(promptsource) victor_huggingface_co@victor-dev:~/promptsource$ pip show pyarrow\r\nName: pyarrow\r\nVersion: 6.0.1\r\nSummary: Python library for Apache Arrow\r\nHome-page: https://arrow.apache.org/\r\nAuthor: \r\nAuthor-email: \r\nLicense: Apache License, Version 2.0\r\nLocation: 
/home/victor_huggingface_co/miniconda3/envs/promptsource/lib/python3.7/site-packages\r\nRequires: numpy\r\nRequired-by: streamlit, datasets\r\n```" ]
1,638,303,102,000
1,638,304,581,000
1,638,304,581,000
NONE
null
## Describe the bug Loading QED was fine until 1.15.0. related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670 Not sure where the root cause is, but here are some candidates: - #3158 - #3120 - #3196 - #2891 ## Steps to reproduce the bug ```python load_dataset("qed") ``` ## Expected results Loading completed. ## Actual results ```shell ArrowInvalid: Could not convert in with type str: tried to convert to boolean Traceback: File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/script_runner.py", line 354, in _run_script exec(code, module.__dict__) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/app.py", line 260, in <module> dataset = get_dataset(dataset_key, str(conf_option.name) if conf_option else None) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 543, in wrapped_func return get_or_create_cached_value() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 527, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/utils.py", line 49, in get_dataset builder_instance.download_and_prepare() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 1106, in _prepare_split num_examples, num_bytes = writer.finalize() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 456, in finalize self.write_examples_on_file() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 325, in write_examples_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 121, in __arrow_array__ out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type) File "pyarrow/array.pxi", line 305, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.0, 1.16.1 - Platform: macOS 1.15.7 or above - Python version: 3.7.12 and 3.9 - PyArrow version: 3.0.0, 5.0.0, 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3346/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3346/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3345/comments
https://api.github.com/repos/huggingface/datasets/issues/3345/events
https://github.com/huggingface/datasets/issues/3345
1,067,622,951
I_kwDODunzps4_oqIn
3,345
Failed to download species_800 from Google Drive zip file
{ "login": "tianjianjiang", "id": 4812544, "node_id": "MDQ6VXNlcjQ4MTI1NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tianjianjiang", "html_url": "https://github.com/tianjianjiang", "followers_url": "https://api.github.com/users/tianjianjiang/followers", "following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}", "gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}", "starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions", "organizations_url": "https://api.github.com/users/tianjianjiang/orgs", "repos_url": "https://api.github.com/users/tianjianjiang/repos", "events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}", "received_events_url": "https://api.github.com/users/tianjianjiang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthe dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?", "> Hi,\r\n> \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI have tried that many times with both load_dataset() and a browser almost simultaneously. The browser always works for me while load_dataset() fails.", "@mariosasko \r\n> the dataset is downloaded normally on my machine. Maybe the URL was down at the time of your download. Could you try again?\r\n\r\nI've tried yet again just a moment ago. This time I realize that, the step `(... post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976...` and the one after seem unstable. If I want to retry, I will have to delete it (and probably other cache lock files). It **_sometimes_** works.\r\n\r\nBut I didn't try `download_mode=\"force_redownload\"` yet.\r\n\r\nAnyway, I suppose this isn't really a pressing issue for the time being, so I'm going to close this. Thank you.\r\n\r\n" ]
1,638,302,428,000
1,638,381,195,000
1,638,381,195,000
NONE
null
## Describe the bug One can manually download the zip file on Google Drive, but `load_dataset()` cannot. related: #3248 ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. ``` ```python >>> from datasets import load_dataset >>> s800 = load_dataset("species_800") ``` ## Expected results species_800 downloaded. ## Actual results ```shell Downloading: 5.68kB [00:00, 1.22MB/s] Downloading: 2.70kB [00:00, 691kB/s] Downloading and preparing dataset species800/species_800 (download: 17.36 MiB, generated: 3.53 MiB, post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976... 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/species_800/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976/species_800.py", line 104, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in map_nested for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in <listcomp> for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 143, in _single_map_nested return function(data_struct) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path use_auth_token=download_config.use_auth_token, File 
"/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14,0 1.15.0, 1.16.1 - Platform: macOS Catalina 10.15.7 - Python version: 3.7.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3345/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3344
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3344/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3344/comments
https://api.github.com/repos/huggingface/datasets/issues/3344/events
https://github.com/huggingface/datasets/pull/3344
1,067,567,603
PR_kwDODunzps4vNJwd
3,344
Add ArrayXD docs
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,298,411,000
1,638,389,763,000
1,638,387,332,000
CONTRIBUTOR
null
Documents support for dynamic first dimension in `ArrayXD` from #2891, and explain the `ArrayXD` feature in general. Let me know if I'm missing anything @lhoestq :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3344/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3344/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3344", "html_url": "https://github.com/huggingface/datasets/pull/3344", "diff_url": "https://github.com/huggingface/datasets/pull/3344.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3344.patch", "merged_at": 1638387332000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3343
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3343/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3343/comments
https://api.github.com/repos/huggingface/datasets/issues/3343/events
https://github.com/huggingface/datasets/pull/3343
1,067,505,507
PR_kwDODunzps4vM8yB
3,343
Better error message when download fails
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,293,930,000
1,638,358,079,000
1,638,358,078,000
MEMBER
null
From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails. In particular the error now shows: - the error from the HEAD request if there's one - otherwise the response code of the HEAD request I also added an error to tell users to pass `use_auth_token` when the Hugging Face Hub returns 401 (Unauthorized). While paying around with this I also fixed a minor issue with the `force_download` parameter that was not always taken into account
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3343/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3343/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3343", "html_url": "https://github.com/huggingface/datasets/pull/3343", "diff_url": "https://github.com/huggingface/datasets/pull/3343.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3343.patch", "merged_at": 1638358078000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3340/comments
https://api.github.com/repos/huggingface/datasets/issues/3340/events
https://github.com/huggingface/datasets/pull/3340
1,067,292,636
PR_kwDODunzps4vMP6Z
3,340
Fix JSON ClassLabel casting for integers
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,281,994,000
1,638,358,050,000
1,638,358,050,000
MEMBER
null
Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already has integers. Indeed currently it tries to convert the strings to integers without even checking if the data are not integers already. For example this currently fails: ```python from datasets import load_dataset, Features, ClassLabel path = "data.json" f = Features({"a": ClassLabel(names=["neg", "pos"])}) d = load_dataset("json", data_files=path, features=f) ``` data.json ```json {"a": 0} {"a": 1} ``` I fixed that by adding a line that checks the type of the JSON data before trying to convert them cc @albertvillanova let me know if it sounds good to you
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3340/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3340", "html_url": "https://github.com/huggingface/datasets/pull/3340", "diff_url": "https://github.com/huggingface/datasets/pull/3340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3340.patch", "merged_at": 1638358050000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3333/comments
https://api.github.com/repos/huggingface/datasets/issues/3333/events
https://github.com/huggingface/datasets/issues/3333
1,065,346,919
I_kwDODunzps4_f-dn
3,333
load JSON files, get the errors
{ "login": "yanllearnn", "id": 38966558, "node_id": "MDQ6VXNlcjM4OTY2NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/38966558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanllearnn", "html_url": "https://github.com/yanllearnn", "followers_url": "https://api.github.com/users/yanllearnn/followers", "following_url": "https://api.github.com/users/yanllearnn/following{/other_user}", "gists_url": "https://api.github.com/users/yanllearnn/gists{/gist_id}", "starred_url": "https://api.github.com/users/yanllearnn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanllearnn/subscriptions", "organizations_url": "https://api.github.com/users/yanllearnn/orgs", "repos_url": "https://api.github.com/users/yanllearnn/repos", "events_url": "https://api.github.com/users/yanllearnn/events{/privacy}", "received_events_url": "https://api.github.com/users/yanllearnn/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! The message you're getting is not an error. It simply says that your JSON dataset is being prepared to a location in `/root/.cache/huggingface/datasets`", "> \r\n\r\nbut I want to load local JSON file by command\r\n`python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`\r\n\r\n**squad-retrain-data/train-v2.0.json** is the local JSON file, how to load it and map it to a special structure?", "You can load it with `dataset = datasets.load_dataset('json', data_files=args.dataset)` as you said.\r\nThen if you need to apply additional processing to map it to a special structure, you can use rename columns or use `dataset.map`. For more information, you can check the documentation here: https://huggingface.co/docs/datasets/process.html\r\n\r\nAlso feel free to share your `run.py` code so we can take a look", "```\r\n# Dataset selection\r\n if args.dataset.endswith('.json') or args.dataset.endswith('.jsonl'):\r\n dataset_id = None\r\n # Load from local json/jsonl file\r\n dataset = datasets.load_dataset('json', data_files=args.dataset)\r\n # By default, the \"json\" dataset loader places all examples in the train split,\r\n # so if we want to use a jsonl file for evaluation we need to get the \"train\" split\r\n # from the loaded dataset\r\n eval_split = 'train'\r\n else:\r\n default_datasets = {'qa': ('squad',), 'nli': ('snli',)}\r\n dataset_id = tuple(args.dataset.split(':')) if args.dataset is not None else \\\r\n default_datasets[args.task]\r\n # MNLI has two validation splits (one with matched domains and one with mismatched domains). Most datasets just have one \"validation\" split\r\n eval_split = 'validation_matched' if dataset_id == ('glue', 'mnli') else 'validation'\r\n # Load the raw data\r\n dataset = datasets.load_dataset(*dataset_id)\r\n```\r\n\r\nI want to load JSON squad dataset instead `dataset = datasets.load_dataset('squad')` to retrain the model. \r\n", "If your JSON has the same format as the SQuAD dataset, then you need to pass `field=\"data\"` to `load_dataset`, since the SQuAD format is one big JSON object in which the \"data\" field contains the list of questions and answers.\r\n```python\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n\r\nLet me know if that helps :)\r\n\r\n", "Yes, code works. but the format is not as expected.\r\n```\r\ndataset = datasets.load_dataset('json', data_files=args.dataset, field=\"data\")\r\n```\r\n```\r\npython3 run.py --do_train --task qa --dataset squad --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n})\r\n\r\n\r\n```\r\npython3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/\r\n```\r\n************ train_dataset: Dataset({\r\n features: ['title', 'paragraphs'],\r\n num_rows: 442\r\n})\r\n\r\nI want the JSON to have the same format as before features. https://github.com/huggingface/datasets/blob/master/datasets/squad_v2/squad_v2.py is the script dealing with **squad** but how can I apply it by using JSON? ", "Ok I see, you have the paragraphs so you just need to process them to extract the questions and answers. 
I think you can process the SQuAD-like data this way:\r\n```python\r\ndef process_squad(articles):\r\n out = {\r\n \"title\": [],\r\n \"context\": [],\r\n \"question\": [],\r\n \"id\": [],\r\n \"answers\": [],\r\n }\r\n for title, paragraphs in zip(articles[\"title\"], articles[\"paragraphs\"]):\r\n for paragraph in paragraphs:\r\n for qa in paragraph[\"qas\"]:\r\n out[\"title\"].append(title)\r\n out[\"context\"].append(paragraph[\"context\"])\r\n out[\"question\"].append(qa[\"question\"])\r\n out[\"id\"].append(qa[\"id\"])\r\n out[\"answers\"].append({\r\n \"answer_start\": [answer[\"answer_start\"] for answer in qa[\"answers\"]],\r\n \"text\": [answer[\"text\"] for answer in qa[\"answers\"]],\r\n })\r\n return out\r\n\r\ndataset = dataset.map(process_squad, batched=True, remove_columns=[\"paragraphs\"])\r\n```\r\n\r\nI adapted the code from [squad.py](https://github.com/huggingface/datasets/blob/master/datasets/squad/squad.py). The code takes as input a batch of articles (title + paragraphs) and gets all the questions and answers from the JSON structure.\r\n\r\nThe output is a dataset with `features: ['answers', 'context', 'id', 'question', 'title']`\r\n\r\nLet me know if that helps !\r\n", "Yes, this works. But how to get the training output during training the squad by **Trainer** \r\nfor example https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/trainer_qa.py \r\nI want the training inputs, labels, outputs for every epoch and step to produce the training dynamic graph", "I think you may need to implement your own Trainer, from the `QuestionAnsweringTrainer` for example.\r\nThis way you can have the flexibility of saving all the inputs/output used at each step", "does there have any function to be overwritten to do this?", "> does there have any function to be overwritten to do this?\r\n\r\nok, I overwrote the compute_loss, thank you.", "Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? 
below is the information of inputs\r\n\r\n```\r\n*********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n ...,\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0],\r\n [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2106, ..., 0, 0, 0],\r\n ...,\r\n [ 101, 2339, 2001, ..., 0, 0, 0],\r\n [ 101, 2054, 2515, ..., 0, 0, 0],\r\n [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n ...,\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0],\r\n [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} \r\n```\r\n\r\n```\r\n# This function preprocesses a question answering dataset, tokenizing the question and context text\r\n# and finding the right offsets for the answer spans in the tokenized context (to use as labels).\r\n# Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py\r\ndef prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None):\r\n questions = [q.lstrip() for q in examples[\"question\"]]\r\n max_seq_length = tokenizer.model_max_length\r\n # tokenize both questions and the corresponding context\r\n # if the context length is longer than max_length, we split it to several\r\n # chunks of max_length\r\n tokenized_examples = tokenizer(\r\n questions,\r\n examples[\"context\"],\r\n truncation=\"only_second\",\r\n max_length=max_seq_length,\r\n stride=min(max_seq_length // 2, 128),\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\"\r\n )\r\n\r\n # Since one example might give us several features if it has a long context,\r\n # we need a map from a feature to its corresponding example.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position\r\n # in the original context. 
This will help us compute the start_positions\r\n # and end_positions to get the final answer string.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n tokenized_examples[\"example_id\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n # We will label features not containing the answer the index of the CLS token.\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n # from the feature idx to sample idx\r\n sample_index = sample_mapping[i]\r\n # get the answer for a feature\r\n answers = examples[\"answers\"][sample_index]\r\n\r\n tokenized_examples[\"example_id\"].append(examples[\"id\"][sample_index])\r\n\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != 1:\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != 1:\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and\r\n offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and \\\r\n offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(\r\n token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n```" ]
1,638,109,798,000
1,638,351,271,000
1,638,331,068,000
NONE
null
Hi, does this bug be fixed? when I load JSON files, I get the same errors by the command `!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/` change the dateset to load json by refering to https://huggingface.co/docs/datasets/loading.html `dataset = datasets.load_dataset('json', data_files=args.dataset)` Errors: `Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264... ` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/730#issuecomment-981095050_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3333/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3333/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3332/comments
https://api.github.com/repos/huggingface/datasets/issues/3332/events
https://github.com/huggingface/datasets/pull/3332
1,065,345,853
PR_kwDODunzps4vGBig
3,332
Fix error message and add extension fallback
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,109,529,000
1,638,192,855,000
1,638,192,854,000
CONTRIBUTOR
null
Fix the error message raised if `infered_module_name` is `None` in `CommunityDatasetModuleFactoryWithoutScript.get_module` and make `infer_module_for_data_files` more robust. In the linked issue, `infer_module_for_data_files` returns `None` because `json` is the second most common extension due to the suffix ordering. Now, we go from the most common to the least common extension and try to map it or return `None`. Fix #3331
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3332/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3332", "html_url": "https://github.com/huggingface/datasets/pull/3332", "diff_url": "https://github.com/huggingface/datasets/pull/3332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3332.patch", "merged_at": 1638192854000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3331
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3331/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3331/comments
https://api.github.com/repos/huggingface/datasets/issues/3331/events
https://github.com/huggingface/datasets/issues/3331
1,065,275,896
I_kwDODunzps4_ftH4
3,331
AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
{ "login": "luozhouyang", "id": 34032031, "node_id": "MDQ6VXNlcjM0MDMyMDMx", "avatar_url": "https://avatars.githubusercontent.com/u/34032031?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luozhouyang", "html_url": "https://github.com/luozhouyang", "followers_url": "https://api.github.com/users/luozhouyang/followers", "following_url": "https://api.github.com/users/luozhouyang/following{/other_user}", "gists_url": "https://api.github.com/users/luozhouyang/gists{/gist_id}", "starred_url": "https://api.github.com/users/luozhouyang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/luozhouyang/subscriptions", "organizations_url": "https://api.github.com/users/luozhouyang/orgs", "repos_url": "https://api.github.com/users/luozhouyang/repos", "events_url": "https://api.github.com/users/luozhouyang/events{/privacy}", "received_events_url": "https://api.github.com/users/luozhouyang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthe fix was merged and will be available in the next release of `datasets`.\r\nIn the meantime, you can use it by installing `datasets` directly from master as follows:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```" ]
1,638,089,645,000
1,638,193,784,000
1,638,192,854,000
NONE
null
## Describe the bug I add a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets) But when I load the dataset, an error raised: ```bash AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("luozhouyang/question-answering-datasets", data_files=["dureader_robust.train.json"]) ``` ## Expected results Load dataset successfully without any error. ## Actual results ```bash Traceback (most recent call last): File "/mnt/home/zhouyang.lzy/github/naivenlp/naivenlp/tests/question_answering_tests/dataset_test.py", line 89, in test_load_dataset_with_hf data_files=["dureader_robust.train.json"], File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1616, in load_dataset **config_kwargs, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1443, in load_dataset_builder path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1157, in dataset_module_factory raise e1 from None File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1144, in dataset_module_factory download_mode=download_mode, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 798, in get_module raise FileNotFoundError(f"No data files or dataset script found in {self.path}") AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: linux - Python version: 3.6.13 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3331/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3331/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3330/comments
https://api.github.com/repos/huggingface/datasets/issues/3330/events
https://github.com/huggingface/datasets/pull/3330
1,065,176,619
PR_kwDODunzps4vFtF7
3,330
Change TriviaQA license (#3313)
{ "login": "avinashsai", "id": 22453634, "node_id": "MDQ6VXNlcjIyNDUzNjM0", "avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinashsai", "html_url": "https://github.com/avinashsai", "followers_url": "https://api.github.com/users/avinashsai/followers", "following_url": "https://api.github.com/users/avinashsai/following{/other_user}", "gists_url": "https://api.github.com/users/avinashsai/gists{/gist_id}", "starred_url": "https://api.github.com/users/avinashsai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinashsai/subscriptions", "organizations_url": "https://api.github.com/users/avinashsai/orgs", "repos_url": "https://api.github.com/users/avinashsai/repos", "events_url": "https://api.github.com/users/avinashsai/events{/privacy}", "received_events_url": "https://api.github.com/users/avinashsai/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,070,005,000
1,638,185,061,000
1,638,185,061,000
CONTRIBUTOR
null
Fixes (#3313)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3330/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3330/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3330", "html_url": "https://github.com/huggingface/datasets/pull/3330", "diff_url": "https://github.com/huggingface/datasets/pull/3330.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3330.patch", "merged_at": 1638185061000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3329/comments
https://api.github.com/repos/huggingface/datasets/issues/3329/events
https://github.com/huggingface/datasets/issues/3329
1,065,096,971
I_kwDODunzps4_fBcL
3,329
Map function: Type error on iter #999
{ "login": "josephkready666", "id": 52659318, "node_id": "MDQ6VXNlcjUyNjU5MzE4", "avatar_url": "https://avatars.githubusercontent.com/u/52659318?v=4", "gravatar_id": "", "url": "https://api.github.com/users/josephkready666", "html_url": "https://github.com/josephkready666", "followers_url": "https://api.github.com/users/josephkready666/followers", "following_url": "https://api.github.com/users/josephkready666/following{/other_user}", "gists_url": "https://api.github.com/users/josephkready666/gists{/gist_id}", "starred_url": "https://api.github.com/users/josephkready666/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josephkready666/subscriptions", "organizations_url": "https://api.github.com/users/josephkready666/orgs", "repos_url": "https://api.github.com/users/josephkready666/repos", "events_url": "https://api.github.com/users/josephkready666/events{/privacy}", "received_events_url": "https://api.github.com/users/josephkready666/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi, thanks for reporting.\r\n\r\nIt would be really helpful if you could provide the actual code of the `text_numbers_to_int` function so we can reproduce the error.", "```\r\ndef text_numbers_to_int(text, column=\"\"):\r\n \"\"\"\r\n Convert text numbers to int.\r\n\r\n :param text: text numbers\r\n :return: int\r\n \"\"\"\r\n try:\r\n numbers = find_numbers(text)\r\n if not numbers:\r\n return text\r\n result = \"\"\r\n i, j = 0, 0\r\n while i < len(text):\r\n if j < len(numbers) and i == numbers[j][1]:\r\n n = int(numbers[j][0]) if numbers[j][0] % 1 == 0 else float(numbers[j][0])\r\n result += str(n)\r\n i = numbers[j][2] #end\r\n j += 1\r\n else:\r\n result += text[i]\r\n i += 1\r\n if column:\r\n return{column: result}\r\n else:\r\n return {column: result}\r\n except Exception as e:\r\n print(e)\r\n return {column: result}\r\n```", "Maybe this is because of the `return text` line ? I think it should return a dictionary rather than a string", "Yes that was it, good catch! Thanks" ]
1,638,035,585,000
1,638,218,415,000
1,638,218,415,000
NONE
null
## Describe the bug Using the map function, it throws a type error on iter #999 Here is the code I am calling: ``` dataset = datasets.load_dataset('squad') dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'}) ``` text_numbers_to_int returns the input text with numbers replaced in the format {'context': text} It happens at ` File "C:\Users\lonek\anaconda3\envs\ai\Lib\site-packages\datasets\arrow_writer.py", line 289, in <listcomp> [row[0][col] for row in self.current_examples], type=col_type, try_type=col_try_type, col=col ` The issue is that the list comprehension expects self.current_examples to be type tuple(dict, str), but for some reason 26 out of 1000 of the sefl.current_examples are type tuple(str, str) Here is an example of what self.current_examples should be ({'context': 'Super Bowl 50 was an...merals 50.'}, '') Here is an example of what self.current_examples are when it throws the error: ('The Panthers used th... Marriott.', '')
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3329/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3328/comments
https://api.github.com/repos/huggingface/datasets/issues/3328/events
https://github.com/huggingface/datasets/pull/3328
1,065,015,262
PR_kwDODunzps4vFTpW
3,328
Quick fix error formatting
{ "login": "NouamaneTazi", "id": 29777165, "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NouamaneTazi", "html_url": "https://github.com/NouamaneTazi", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,638,013,668,000
1,638,192,762,000
1,638,192,762,000
CONTRIBUTOR
null
While working on a dataset, I got the error ``` TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`. ``` This PR should fix the formatting of this error
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3328/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3328", "html_url": "https://github.com/huggingface/datasets/pull/3328", "diff_url": "https://github.com/huggingface/datasets/pull/3328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3328.patch", "merged_at": 1638192762000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3327/comments
https://api.github.com/repos/huggingface/datasets/issues/3327/events
https://github.com/huggingface/datasets/issues/3327
1,064,675,888
I_kwDODunzps4_daow
3,327
"Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
{ "login": "eliasws", "id": 19492473, "node_id": "MDQ6VXNlcjE5NDkyNDcz", "avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eliasws", "html_url": "https://github.com/eliasws", "followers_url": "https://api.github.com/users/eliasws/followers", "following_url": "https://api.github.com/users/eliasws/following{/other_user}", "gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}", "starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eliasws/subscriptions", "organizations_url": "https://api.github.com/users/eliasws/orgs", "repos_url": "https://api.github.com/users/eliasws/repos", "events_url": "https://api.github.com/users/eliasws/events{/privacy}", "received_events_url": "https://api.github.com/users/eliasws/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "#3323 " ]
1,637,943,996,000
1,637,945,051,000
1,637,945,051,000
CONTRIBUTOR
null
## Describe the bug Passing a correctly shaped Numpy-Array to get_nearest_examples leads to the Exception "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" Probably the reason for this is a wrongly converted assertion. 1.15.1: `assert len(query.shape) == 1 or (len(query.shape) == 2 and query.shape[0] == 1)` 1.16.1: ``` if len(query.shape) != 1 or (len(query.shape) == 2 and query.shape[0] != 1): raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)") ``` ## Steps to reproduce the bug follow the steps described here: https://huggingface.co/course/chapter5/6?fw=tf ```python question_embedding.shape # (1, 768) scores, samples = embeddings_dataset.get_nearest_examples( "embeddings", question_embedding, k=5 # Error ) # "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" ``` ## Expected results Should work without exception ## Actual results Throws exception ## Environment info - `datasets` version: 1.15.1 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.12 - PyArrow version: 6.0.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3327/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3326/comments
https://api.github.com/repos/huggingface/datasets/issues/3326/events
https://github.com/huggingface/datasets/pull/3326
1,064,664,479
PR_kwDODunzps4vEaYG
3,326
Fix import `datasets` on python 3.10
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,637,943,000,000
1,637,944,283,000
1,637,944,283,000
MEMBER
null
In python 3.10 it's no longer possible to use `functools.wraps` on a method decorated with `classmethod`. To fix this I inverted the order of the `inject_arrow_table_documentation` and `classmethod` decorators Fix #3324
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3326/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3326/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3326", "html_url": "https://github.com/huggingface/datasets/pull/3326", "diff_url": "https://github.com/huggingface/datasets/pull/3326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3326.patch", "merged_at": 1637944283000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3325/comments
https://api.github.com/repos/huggingface/datasets/issues/3325/events
https://github.com/huggingface/datasets/pull/3325
1,064,663,075
PR_kwDODunzps4vEaGO
3,325
Update conda dependencies
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,637,942,887,000
1,637,943,637,000
1,637,943,636,000
MEMBER
null
Some dependencies minimum versions were outdated. For example `pyarrow` and `huggingface_hub`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3325/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3325", "html_url": "https://github.com/huggingface/datasets/pull/3325", "diff_url": "https://github.com/huggingface/datasets/pull/3325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3325.patch", "merged_at": 1637943636000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3324/comments
https://api.github.com/repos/huggingface/datasets/issues/3324/events
https://github.com/huggingface/datasets/issues/3324
1,064,661,212
I_kwDODunzps4_dXDc
3,324
Can't import `datasets` in python 3.10
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
1,637,942,774,000
1,637,944,283,000
1,637,944,283,000
MEMBER
null
When importing `datasets` I'm getting this error in python 3.10: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 47, in <module> from .arrow_reader import ArrowReader File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_reader.py", line 33, in <module> from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 334, in <module> class InMemoryTable(TableBlock): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 361, in InMemoryTable def from_pandas(cls, *args, **kwargs): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 24, in wrapper out = wraps(arrow_table_method)(method) File "/Users/quentinlhoest/.pyenv/versions/3.10.0/lib/python3.10/functools.py", line 61, in update_wrapper wrapper.__wrapped__ = wrapped AttributeError: readonly attribute ``` This makes the conda build fail. I'm opening a PR to fix this and do a patch release 1.16.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3324/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3324/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3323/comments
https://api.github.com/repos/huggingface/datasets/issues/3323/events
https://github.com/huggingface/datasets/pull/3323
1,064,660,452
PR_kwDODunzps4vEZwq
3,323
Fix wrongly converted assert
{ "login": "eliasws", "id": 19492473, "node_id": "MDQ6VXNlcjE5NDkyNDcz", "avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eliasws", "html_url": "https://github.com/eliasws", "followers_url": "https://api.github.com/users/eliasws/followers", "following_url": "https://api.github.com/users/eliasws/following{/other_user}", "gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}", "starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eliasws/subscriptions", "organizations_url": "https://api.github.com/users/eliasws/orgs", "repos_url": "https://api.github.com/users/eliasws/repos", "events_url": "https://api.github.com/users/eliasws/events{/privacy}", "received_events_url": "https://api.github.com/users/eliasws/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closes #3327 " ]
1,637,942,739,000
1,637,945,052,000
1,637,945,051,000
CONTRIBUTOR
null
Seems like this assertion was replaced by an exception but the condition got wrongly converted.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3323/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3323", "html_url": "https://github.com/huggingface/datasets/pull/3323", "diff_url": "https://github.com/huggingface/datasets/pull/3323.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3323.patch", "merged_at": 1637945051000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3322/comments
https://api.github.com/repos/huggingface/datasets/issues/3322/events
https://github.com/huggingface/datasets/pull/3322
1,064,429,705
PR_kwDODunzps4vD1Ct
3,322
Add missing tags to XTREME
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,637,930,225,000
1,638,193,207,000
1,638,193,206,000
CONTRIBUTOR
null
Add missing tags to the XTREME benchmark for better discoverability.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3322/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3322", "html_url": "https://github.com/huggingface/datasets/pull/3322", "diff_url": "https://github.com/huggingface/datasets/pull/3322.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3322.patch", "merged_at": 1638193206000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3321/comments
https://api.github.com/repos/huggingface/datasets/issues/3321/events
https://github.com/huggingface/datasets/pull/3321
1,063,858,386
PR_kwDODunzps4vCBeI
3,321
Update URL of tatoeba subset of xtreme
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "<s>To be more precise: `os.path.join` is replaced on-the-fly by `xjoin` anyway with patching, to extend it to remote files</s>", "Oh actually just ignore what I said: they were used to concatenate URLs, which is not recommended. Let me fix that again by appending using `+`" ]
1,637,865,751,000
1,637,922,630,000
1,637,922,630,000
CONTRIBUTOR
null
Updates the URL of the tatoeba subset of xtreme. Additionally, replaces `os.path.join` with `xjoin` to correctly join the URL segments on Windows. Fix #3320
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3321/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3321", "html_url": "https://github.com/huggingface/datasets/pull/3321", "diff_url": "https://github.com/huggingface/datasets/pull/3321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3321.patch", "merged_at": 1637922629000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3320/comments
https://api.github.com/repos/huggingface/datasets/issues/3320/events
https://github.com/huggingface/datasets/issues/3320
1,063,531,992
I_kwDODunzps4_ZDXY
3,320
Can't get tatoeba.rus dataset
{ "login": "mmg10", "id": 65535131, "node_id": "MDQ6VXNlcjY1NTM1MTMx", "avatar_url": "https://avatars.githubusercontent.com/u/65535131?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmg10", "html_url": "https://github.com/mmg10", "followers_url": "https://api.github.com/users/mmg10/followers", "following_url": "https://api.github.com/users/mmg10/following{/other_user}", "gists_url": "https://api.github.com/users/mmg10/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmg10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmg10/subscriptions", "organizations_url": "https://api.github.com/users/mmg10/orgs", "repos_url": "https://api.github.com/users/mmg10/repos", "events_url": "https://api.github.com/users/mmg10/events{/privacy}", "received_events_url": "https://api.github.com/users/mmg10/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,637,843,471,000
1,637,922,629,000
1,637,922,629,000
NONE
null
## Describe the bug It gives an error. > FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus ## Steps to reproduce the bug ```python data=load_dataset("xtreme","tatoeba.rus", split="validation") ``` ## Solution The library tries to access the **master** branch. In the github repo of facebookresearch, it is in the **main** branch.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3320/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3319/comments
https://api.github.com/repos/huggingface/datasets/issues/3319/events
https://github.com/huggingface/datasets/pull/3319
1,062,749,654
PR_kwDODunzps4u-xdv
3,319
Add push_to_hub docs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks good to me! :)\r\n\r\nMaybe we can mention that users can also set the `private` argument if they want to keep their dataset private? It would lead nicely into the next section on Privacy.", "Thanks for your comments, I fixed the capitalization for consistency and added an passage to mention the `private` parameter and to have a nice transition to the Privacy section :)\r\n\r\nI also added the login instruction that was missing before the user can actually upload a dataset." ]
1,637,778,071,000
1,637,851,666,000
1,637,851,666,000
MEMBER
null
Since #3098 it's now possible to upload a dataset on the Hub directly from python using the `push_to_hub` method. I just added a section in the "Upload a dataset to the Hub" tutorial. I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3319/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3319/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3319", "html_url": "https://github.com/huggingface/datasets/pull/3319", "diff_url": "https://github.com/huggingface/datasets/pull/3319.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3319.patch", "merged_at": 1637851666000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3318/comments
https://api.github.com/repos/huggingface/datasets/issues/3318/events
https://github.com/huggingface/datasets/pull/3318
1,062,369,717
PR_kwDODunzps4u9m-k
3,318
Finish transition to PyArrow 3.0.0
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,637,757,014,000
1,637,768,105,000
1,637,768,104,000
CONTRIBUTOR
null
Finish transition to PyArrow 3.0.0 that was started in #3098.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3318/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3318", "html_url": "https://github.com/huggingface/datasets/pull/3318", "diff_url": "https://github.com/huggingface/datasets/pull/3318.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3318.patch", "merged_at": 1637768104000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3315/comments
https://api.github.com/repos/huggingface/datasets/issues/3315/events
https://github.com/huggingface/datasets/pull/3315
1,061,678,452
PR_kwDODunzps4u7WpU
3,315
Removing query params for dynamic URL caching
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "IMO it makes more sense to have `ignore_url_params` as an attribute of `DownloadConfig` to avoid defining a new argument in `DownloadManger`'s methods.", "@mariosasko that would make sense to me too, but it seems like `DownloadConfig` wasn't intended to be modified from a dataset loading script. @lhoestq wdyt?", "We can expose `DownloadConfig` as a property of `DownloadManager`, and then in the script before the download call we could do: `dl_manager.download_config.ignore_url_params = True`. But yes, let's hear what Quentin thinks.", "Oh indeed that's a great idea. This parameter is similar to others like `download_config.use_etag` that defines the behavior of the download and caching, so it's better if we have it there, and expose the `download_config`", "Implemented it via `dl_manager.download_config.ignore_url_params` now, and also added a usage example above :) " ]
1,637,699,052,000
1,637,851,472,000
1,637,851,471,000
CONTRIBUTOR
null
The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic. Usage example: ```python import datasets class CommonVoice(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo() def _split_generators(self, dl_manager): dl_manager.download_config.ignore_url_params = True HUGE_URL = "https://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-7.0-2021-07-21/cv-corpus-7.0-2021-07-21-ab.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3IU5JYB5K%2F20211125%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20211125T131423Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEL7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDLsZw7Nj0d9h4rgheyKSBJJ6bxo1JdWLXAUhLMrUB8AXfhP8Ge4F8dtjwXmvGJgkIvdMT7P4YOEE1pS3mW8AyKsz7Z7IRVCIGQrOH1AbxGVVcDoCMMswXEOqL3nJFihKLf99%2F6l8iJVZdzftRUNgMhX5Hz0xSIL%2BzRDpH5nYa7C6YpEdOdW81CFVXybx7WUrX13wc8X4ZlUj7zrWcWf5p2VEIU5Utb7YHVi0Y5TQQiZSDoedQl0j4VmMuFkDzoobIO%2BvilgGeE2kIX0E62X423mEGNu4uQV5JsOuLAtv3GVlemsqEH3ZYrXDuxLmnvGj5HfMtySwI4vKv%2BlnnirD29o7hxvtidXiA8JMWhp93aP%2Fw7sod%2BPPbb5EqP%2B4Qb2GJ1myClOKcLEY0cqoy7XWm8NeVljLJojnFJVS5mNFBAzCCTJ%2FidxNsj8fflzkRoAzYaaPBuOTL1dgtZCdslK3FAuEvw0cik7P9A7IYiULV33otSHKMPcVfNHFsWQljs03gDztsIUWxaXvu6ck5vCcGULsHbfe6xoMPm2bR9jtKLONsslPcnzWIf7%2Fch2w%2F%2BjtTCd9IxaH4kytyJ6mIjpV%2FA%2F2h9qeDnDFsCphnMjAzPQn6tqCgTtPcyJ2b8c94ncgUnE4mepx%2FDa%2FanAEsrg9RPdmbdoPswzHn1IClh91IfSN74u95DZUxlPeZrHG5HxVCN3dKO6j%2Ft1xd20L0hEtazDdKOr8%2FYwGMirp8rp%2BII0pYOwQOrYHqH%2FREX2dRJctJtwE86Qj1eU8BAdXuFIkLC4NWXw%3D&X-Amz-Signature=1b8108d29b0e9c2bf6c7246e58ca8d5749a83de0704757ad8e8a44d78194691f&X-Amz-SignedHeaders=host" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) HUGE_URL += "&some_new_or_changed_param=12345" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) dl_manager = datasets.DownloadManager(dataset_name="common_voice") CommonVoice()._split_generators(dl_manager) ``` Output: ``` /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3315/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3315", "html_url": "https://github.com/huggingface/datasets/pull/3315", "diff_url": "https://github.com/huggingface/datasets/pull/3315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3315.patch", "merged_at": 1637851471000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3314/comments
https://api.github.com/repos/huggingface/datasets/issues/3314/events
https://github.com/huggingface/datasets/pull/3314
1,061,448,227
PR_kwDODunzps4u6mdX
3,314
Adding arg to pass process rank to `map`
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Some commits seem to be there twice (made the mistake of rebasing because I wasn't sure whether the doc had changed), is this an issue @lhoestq ?" ]
1,637,682,921,000
1,637,754,853,000
1,637,754,853,000
MEMBER
null
This PR adds a `with_rank` argument to `map` that gives the user the possibility to pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code, I'll add a multi-GPU example to the doc (+ write a bit in the doc for the new arg)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3314/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3314", "html_url": "https://github.com/huggingface/datasets/pull/3314", "diff_url": "https://github.com/huggingface/datasets/pull/3314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3314.patch", "merged_at": 1637754853000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3313/comments
https://api.github.com/repos/huggingface/datasets/issues/3313/events
https://github.com/huggingface/datasets/issues/3313
1,060,933,392
I_kwDODunzps4_PI8Q
3,313
TriviaQA License Mismatch
{ "login": "akhilkedia", "id": 16665267, "node_id": "MDQ6VXNlcjE2NjY1MjY3", "avatar_url": "https://avatars.githubusercontent.com/u/16665267?v=4", "gravatar_id": "", "url": "https://api.github.com/users/akhilkedia", "html_url": "https://github.com/akhilkedia", "followers_url": "https://api.github.com/users/akhilkedia/followers", "following_url": "https://api.github.com/users/akhilkedia/following{/other_user}", "gists_url": "https://api.github.com/users/akhilkedia/gists{/gist_id}", "starred_url": "https://api.github.com/users/akhilkedia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akhilkedia/subscriptions", "organizations_url": "https://api.github.com/users/akhilkedia/orgs", "repos_url": "https://api.github.com/users/akhilkedia/repos", "events_url": "https://api.github.com/users/akhilkedia/events{/privacy}", "received_events_url": "https://api.github.com/users/akhilkedia/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! You're completely right, this must be mentioned in the dataset card.\r\nIf you're interesting in contributing, feel free to open a pull request to mention this in the `trivia_qa` dataset card in the \"Licensing Information\" section at https://github.com/huggingface/datasets/blob/master/datasets/trivia_qa/README.md" ]
1,637,654,415,000
1,638,185,061,000
1,638,185,061,000
NONE
null
## Describe the bug TriviaQA Webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under Apache License Is the License Information on HuggingFace correct?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3313/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3312/comments
https://api.github.com/repos/huggingface/datasets/issues/3312/events
https://github.com/huggingface/datasets/pull/3312
1,060,440,346
PR_kwDODunzps4u3duV
3,312
add bl books genre dataset
{ "login": "davanstrien", "id": 8995957, "node_id": "MDQ6VXNlcjg5OTU5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/8995957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davanstrien", "html_url": "https://github.com/davanstrien", "followers_url": "https://api.github.com/users/davanstrien/followers", "following_url": "https://api.github.com/users/davanstrien/following{/other_user}", "gists_url": "https://api.github.com/users/davanstrien/gists{/gist_id}", "starred_url": "https://api.github.com/users/davanstrien/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davanstrien/subscriptions", "organizations_url": "https://api.github.com/users/davanstrien/orgs", "repos_url": "https://api.github.com/users/davanstrien/repos", "events_url": "https://api.github.com/users/davanstrien/events{/privacy}", "received_events_url": "https://api.github.com/users/davanstrien/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "To fix the CI, feel free to run the `make style` command to format the code.\r\n\r\nThen it also looks like the dummy_data.zip archives are all empty, which makes the tests fail. Can you try regenerating them ? They should have one file inside which is a dummy version of the file at https://bl.iro.bl.uk/downloads/36c7cd20-c8a7-4495-acbe-469b9132c6b1?locale=en", "@lhoestq, thanks for that feedback. \r\n\r\nI should have made most of these changes now. The `--auto_generate` flag wasn't working because the file wasn't downloaded with a `.csv` extension. I used `--match_text_files \"*\"` to get around this. Because there is a lot of data that isn't annotated using the default line number for the dummy data causes the `annotated_raw` and the `title_genre_classifiction` configs to fail because they don't generate any examples โ€” bumping the line numbers to `250` fixes this. This does make the dummy data a bit bigger, though. \r\n\r\nThe total directory size for the dataset is now `150kb`. Is this okay, or do you want me to generate the dummy data manually instead? ", "Hi ! yes 150kB is fine :)\r\nFeel free to push your new dummy_data.zip files (I think the current one are still the empty ones)", "@lhoestq I've pushed those dummy files now and added your other suggestions.", "The CI failure is unrelated to this PR, merging :)", "@lhoestq, thanks for all your help with this pull request ๐Ÿ˜€" ]
1,637,603,690,000
1,638,461,429,000
1,638,461,267,000
CONTRIBUTOR
null
First of all thanks for the fantastic library/collection of datasets ๐Ÿค— This pull request adds a dataset of metadata from digitised (mostly 19th Century) books from the British Library The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In addition, a subset of the data includes 'genre' information which can be used for supervised text classification tasks. I hope that this offers easier access to a dataset for doing text classification on GLAM (galleries, libraries, archives and museums) data. I have tried to create three configurations that provide both an 'easy' version of the dataset if you want to use it for training a genre classification model and a more 'raw' version of the data for other potential use cases for the data. I am open to suggestions if this doesn't make sense. Similarly, for some of the arrow datatypes, I have had to fall back to strings since there are missing values for some fields/rows but I may have missed a more elegant way of dealing with it.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3312/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3312/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3312", "html_url": "https://github.com/huggingface/datasets/pull/3312", "diff_url": "https://github.com/huggingface/datasets/pull/3312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3312.patch", "merged_at": 1638461267000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3310/comments
https://api.github.com/repos/huggingface/datasets/issues/3310/events
https://github.com/huggingface/datasets/issues/3310
1,060,098,104
I_kwDODunzps4_L9A4
3,310
Fatal error condition occurred in aws-c-io
{ "login": "Crabzmatic", "id": 31850219, "node_id": "MDQ6VXNlcjMxODUwMjE5", "avatar_url": "https://avatars.githubusercontent.com/u/31850219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Crabzmatic", "html_url": "https://github.com/Crabzmatic", "followers_url": "https://api.github.com/users/Crabzmatic/followers", "following_url": "https://api.github.com/users/Crabzmatic/following{/other_user}", "gists_url": "https://api.github.com/users/Crabzmatic/gists{/gist_id}", "starred_url": "https://api.github.com/users/Crabzmatic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Crabzmatic/subscriptions", "organizations_url": "https://api.github.com/users/Crabzmatic/orgs", "repos_url": "https://api.github.com/users/Crabzmatic/repos", "events_url": "https://api.github.com/users/Crabzmatic/events{/privacy}", "received_events_url": "https://api.github.com/users/Crabzmatic/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Are you having this issue only with this specific dataset, or it also happens with other ones like `squad` ?", "@lhoestq It happens also on `squad`. It successfully downloads the whole dataset and then crashes on: \r\n\r\n```\r\nFatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n```\r\n\r\nI tested it on Ubuntu and its working OK. Didn't test on non-preview version of Windows 11, `Windows-10-10.0.22504-SP0` is a preview version, not sure if this is causing it.", "I see the same error in Windows-10.0.19042 as of a few days ago:\r\n\r\n`Fatal error condition occurred in D:\\bld\\aws-c-io_1633633258269\\work\\source\\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS`\r\n\r\npython 3.8.12 h7840368_2_cpython conda-forge\r\nboto3 1.20.11 pyhd8ed1ab_0 conda-forge\r\nbotocore 1.23.11 pyhd8ed1ab_0 conda-forge\r\n\r\n...but I am not using `datasets` (although I might take a look now that I know about it!)\r\n\r\nThe error has occurred a few times over the last two days, but not consistently enough for me to get it with DEBUG. If there is any interest I can report back here, but it seems not unique to `datasets`.", "I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?", "> I'm not sure what `datasets` has to do with a crash that seems related to `aws-c-io`, could it be an issue with your environment ?\r\n\r\nAgreed, this issue is not likely a bug in datasets, since I get the identical error without datasets installed.", "Will close this issue. Bug in `aws-c-io` shouldn't be in `datasets` repo. Nevertheless, it can be useful to know that it happens. Thanks @leehaust @lhoestq ", "I have also had this issue since a few days, when running scripts using PyCharm in particular, but it does not seem to affect the script from running, only reporting this error at the end of the run.", "I also get this issue, It appears after my script has finished running. 
I get the following error message\r\n```\r\nFatal error condition occurred in /home/conda/feedstock_root/build_artifacts/aws-c-io_1637179816120/work/source/event_loop.c:72: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS\r\nExiting Application\r\n################################################################################\r\nStack trace:\r\n################################################################################\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_backtrace_print+0x59) [0x2aabe0479579]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_fatal_assert+0x48) [0x2aabe04696c8]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x13ad3) [0x2aabe0624ad3]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././././libaws-c-io.so.1.0.0(+0x113ca) [0x2aabe06223ca]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-c-common.so.1(aws_ref_count_release+0x1d) [0x2aabe047b60d]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../../././libaws-crt-cpp.so(_ZN3Aws3Crt2Io15ClientBootstrapD1Ev+0x3a) [0x2aabe041cf5a]\r\n/home/user_name/conda_envs/env_name/lib/python3.7/site-packages/pyarrow/../../.././libaws-cpp-sdk-core.so(+0x5f570) [0x2aabe00eb570]\r\n/lib64/libc.so.6(+0x39ce9) [0x2aaaab835ce9]\r\n/lib64/libc.so.6(+0x39d37) [0x2aaaab835d37]\r\n/lib64/libc.so.6(__libc_start_main+0xfc) [0x2aaaab81e55c]\r\npython(+0x1c721d) [0x55555571b21d]\r\nAborted\r\n```\r\nI don't get this issue when running my code in a container, and it seems more relevant to PyArrow but thought a more complete stack trace might be helpful to someone\r\n" ]
1,637,584,074,000
1,638,799,560,000
1,638,224,557,000
NONE
null
## Describe the bug Fatal error when using the library ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wikiann', 'en') ``` ## Expected results No fatal errors ## Actual results ``` Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS Exiting Application ``` ## Environment info - `datasets` version: 1.15.2.dev0 - Platform: Windows-10-10.0.22504-SP0 - Python version: 3.8.12 - PyArrow version: 6.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3310/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3310/timeline
null
null
null
false