Dataset schema (column: type, observed range):
url: string, length 58-61
repository_url: string, 1 value
labels_url: string, length 72-75
comments_url: string, length 67-70
events_url: string, length 65-68
html_url: string, length 46-51
id: int64, 599M-1.07B
node_id: string, length 18-32
number: int64, 1-3.39k
title: string, length 1-276
user: dict
labels: list
state: string, 1 value
locked: bool, 1 class
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: int64, 1,587B-1,639B
updated_at: int64, 1,587B-1,639B
closed_at: int64, 1,587B-1,639B
author_association: string, 3 values
active_lock_reason: null
body: string, length 0-228k
reactions: dict
timeline_url: string, length 67-70
performed_via_github_app: null
draft: bool, 2 classes
pull_request: dict
is_pull_request: bool, 2 classes
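The listing above is the column schema of this GitHub-issues dump. As a quick illustration (not part of the dump itself), the following sketch builds a tiny in-memory `datasets.Dataset` with a few of those columns, using two rows taken from the records below, just to show how the types line up:

```python
from datasets import Dataset

# Two rows copied from the records below; only a handful of columns are included.
rows = {
    "number": [2539, 2537],
    "title": [
        "remove wi_locness dataset due to licensing issues",
        "Add Parquet loader + from_parquet and to_parquet",
    ],
    "state": ["closed", "closed"],
    "is_pull_request": [True, True],
}
ds = Dataset.from_dict(rows)
print(ds.features)  # number: int64, title/state: string, is_pull_request: bool
```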
https://api.github.com/repos/huggingface/datasets/issues/2539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2539/comments
https://api.github.com/repos/huggingface/datasets/issues/2539/events
https://github.com/huggingface/datasets/pull/2539
927,952,429
MDExOlB1bGxSZXF1ZXN0Njc2MDI5MDY5
2,539
remove wi_locness dataset due to licensing issues
{ "login": "aseifert", "id": 4944799, "node_id": "MDQ6VXNlcjQ5NDQ3OTk=", "avatar_url": "https://avatars.githubusercontent.com/u/4944799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aseifert", "html_url": "https://github.com/aseifert", "followers_url": "https://api.github.com/users/aseifert/followers", "following_url": "https://api.github.com/users/aseifert/following{/other_user}", "gists_url": "https://api.github.com/users/aseifert/gists{/gist_id}", "starred_url": "https://api.github.com/users/aseifert/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aseifert/subscriptions", "organizations_url": "https://api.github.com/users/aseifert/orgs", "repos_url": "https://api.github.com/users/aseifert/repos", "events_url": "https://api.github.com/users/aseifert/events{/privacy}", "received_events_url": "https://api.github.com/users/aseifert/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! I'm sorry to hear that.\r\nThough we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https://www.cl.cam.ac.uk\r\n\r\nTherefore I'm not sure what's the issue with licensing. What do you mean exactly ?", "I think that the main issue is that the licesenses of the data are not made clear in the huggingface hub – other people wrongly assumed that the data was license-free, which resulted in commercial use, which is against the licenses.\r\nIs it possible to add the licenses from the original download to huggingface? that would help clear any confusion (licenses can be found here: https://www.cl.cam.ac.uk/research/nl/bea2019st/data/wi+locness_v2.1.bea19.tar.gz)", "Thanks for the clarification @SimonHFL \r\nYou're completely right, we need to show the licenses.\r\nI just added them here: https://huggingface.co/datasets/wi_locness#licensing-information", "Hi guys, I'm one of the authors of this dataset. \r\n\r\nTo clarify, we're happy for you to keep the data in the repo on 2 conditions:\r\n1. You don't host the data yourself.\r\n2. You make it clear that anyone who downloads the data via HuggingFace should read and abide by the license. \r\n\r\nI think you've now met these conditions, so we're all good, but I just wanted to make it clear in case there are any issues in the future. Thanks again to @aseifert for bringing this to our attention! :)", "Thanks for your message @chrisjbryant :)\r\nI'm closing this PR then.\r\n\r\nAnd thanks for reporting @aseifert" ]
1,624,433,732,000
1,624,632,762,000
1,624,632,762,000
CONTRIBUTOR
null
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2539/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2539", "html_url": "https://github.com/huggingface/datasets/pull/2539", "diff_url": "https://github.com/huggingface/datasets/pull/2539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2539.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2537/comments
https://api.github.com/repos/huggingface/datasets/issues/2537/events
https://github.com/huggingface/datasets/pull/2537
927,472,659
MDExOlB1bGxSZXF1ZXN0Njc1NjI1OTY3
2,537
Add Parquet loader + from_parquet and to_parquet
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "`pyarrow` 1.0.0 doesn't support some types in parquet, we'll have to bump its minimum version.\r\n\r\nAlso I still need to add dummy data to test the parquet builder.", "I had to bump the minimum pyarrow version to 3.0.0 to properly support parquet.\r\n\r\nEverything is ready for review now :)\r\nI reused pretty much the same tests we had for CSV", "Done !\r\nNow we're still allowing pyarrow>=1.0.0, but when users want to use parquet features they're asked to update to pyarrow>=3.0.0" ]
1,624,382,903,000
1,625,070,663,000
1,625,070,658,000
MEMBER
null
Continuation of #2247 I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`. As usual, the data are converted to arrow in a batched way to avoid loading everything in memory.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2537/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2537", "html_url": "https://github.com/huggingface/datasets/pull/2537", "diff_url": "https://github.com/huggingface/datasets/pull/2537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2537.patch", "merged_at": 1625070658000 }
true
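PR #2537 above adds a "parquet" dataset builder together with `Dataset.to_parquet` and `Dataset.from_parquet`. A minimal sketch of that API, assuming a `datasets` version that already ships the feature (and pyarrow>=3.0.0, as noted in the PR comments):

```python
from datasets import Dataset, load_dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})
ds.to_parquet("data.parquet")  # write the dataset to a Parquet file

ds_back = Dataset.from_parquet("data.parquet")  # read it back directly
ds_alt = load_dataset("parquet", data_files="data.parquet", split="train")  # or via the generic loader
```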
https://api.github.com/repos/huggingface/datasets/issues/2535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2535/comments
https://api.github.com/repos/huggingface/datasets/issues/2535/events
https://github.com/huggingface/datasets/pull/2535
927,334,349
MDExOlB1bGxSZXF1ZXN0Njc1NTA3MTAw
2,535
Improve Features docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,374,207,000
1,624,455,643,000
1,624,455,643,000
MEMBER
null
- Fix rendering and cross-references in Features docs - Add docstrings to Features methods
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2535/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2535/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2535", "html_url": "https://github.com/huggingface/datasets/pull/2535", "diff_url": "https://github.com/huggingface/datasets/pull/2535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2535.patch", "merged_at": 1624455643000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2534/comments
https://api.github.com/repos/huggingface/datasets/issues/2534/events
https://github.com/huggingface/datasets/pull/2534
927,201,435
MDExOlB1bGxSZXF1ZXN0Njc1MzkzODg0
2,534
Sync with transformers disabling NOTSET
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Nice thanks ! I think there are other places with\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nCould you replace them as well ?", "Sure @lhoestq! I was not sure if this change should only be circumscribed to `http_get`..." ]
1,624,366,461,000
1,624,545,767,000
1,624,545,767,000
MEMBER
null
Close #2528.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2534/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2534", "html_url": "https://github.com/huggingface/datasets/pull/2534", "diff_url": "https://github.com/huggingface/datasets/pull/2534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2534.patch", "merged_at": 1624545767000 }
true
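PR #2534 above only says "Close #2528"; per the linked issue further down, the change aligns the tqdm-disabling check with the one used in transformers. A hedged sketch of that pattern, not the exact diff:

```python
import logging
from datasets import logging as ds_logging

# Before: bars were hidden only above WARNING, so setting verbosity to NOTSET did not silence them.
# not_verbose = bool(logger.getEffectiveLevel() > WARNING)

# After (transformers-style): NOTSET also disables the progress bar.
disable_tqdm = bool(ds_logging.get_verbosity() == logging.NOTSET)
```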
https://api.github.com/repos/huggingface/datasets/issues/2533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2533/comments
https://api.github.com/repos/huggingface/datasets/issues/2533/events
https://github.com/huggingface/datasets/pull/2533
927,193,264
MDExOlB1bGxSZXF1ZXN0Njc1Mzg2OTMw
2,533
Add task template for automatic speech recognition
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@SBrandeis @lhoestq i've integrated your suggestions, so this is ready for another review :)", "Merging if it's good for you @lewtun :)" ]
1,624,365,902,000
1,624,464,886,000
1,624,463,817,000
MEMBER
null
This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription. Usage: ```python from datasets import load_dataset from datasets.tasks import AutomaticSpeechRecognition ds = load_dataset("timit_asr", split="train[:10]") # Dataset({ # features: ['file', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], # num_rows: 10 # }) task = AutomaticSpeechRecognition(audio_file_column="file", transcription_column="text") ds.prepare_for_task(task) # Dataset({ # features: ['audio_file', 'transcription'], # num_rows: 10 # }) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2533/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2533/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2533", "html_url": "https://github.com/huggingface/datasets/pull/2533", "diff_url": "https://github.com/huggingface/datasets/pull/2533.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2533.patch", "merged_at": 1624463817000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2532/comments
https://api.github.com/repos/huggingface/datasets/issues/2532/events
https://github.com/huggingface/datasets/issues/2532
927,063,196
MDU6SXNzdWU5MjcwNjMxOTY=
2,532
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?", "> Hi @jerryIsHere, thanks for reporting the issue. But are you sure this is a bug in HuggingFace **Datasets**?\r\n\r\nOh, I am sorry\r\nI would reopen the post on huggingface/transformers" ]
1,624,356,498,000
1,624,425,445,000
1,624,425,445,000
CONTRIBUTOR
null
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instance in different languages, but unfortunately, [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) break the alignment of `return_offsets_mapping`: ![image](https://user-images.githubusercontent.com/50871412/122904371-db192700-d382-11eb-8917-1775db76db69.png) Without the try catch block, it riase `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`, example shown here [(another colab notebook)](https://colab.research.google.com/drive/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing) It is clear that the normalizer is the process that break the alignment, as it is observed that `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` return 'コト'. One workaround is to include `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) with the name `udposTestDatasetWorkaround`. I guess similar logics should be included inside the tokenizer and the offsets_mapping generation process such that user don't need to include them in their code. But I don't understand the code of tokenizer well that I think I am not able to do this. p.s. **I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)** `get_dataset `is just a simple wrapping for `load_dataset` and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2532/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2532/timeline
null
null
null
false
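Issue #2532 above reports that the fast tokenizer's normalizer (for example, expanding the kana ligature "ヿ" to "コト") shifts `return_offsets_mapping`. A minimal sketch of the workaround mentioned in the report, assuming an XLM-R fast tokenizer and an illustrative word list:

```python
from transformers import XLMRobertaTokenizerFast

tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")

def pre_normalize(words):
    # Run the tokenizer's own normalizer over each word first, so the offsets
    # returned later refer to the text the tokenizer actually sees.
    return [tokenizer.backend_tokenizer.normalizer.normalize_str(w) for w in words]

words = ["これ", "は", "ヿ", "です"]  # toy example containing the problematic ligature
encoding = tokenizer(
    pre_normalize(words),
    is_split_into_words=True,
    return_offsets_mapping=True,
)
```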
https://api.github.com/repos/huggingface/datasets/issues/2531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2531/comments
https://api.github.com/repos/huggingface/datasets/issues/2531/events
https://github.com/huggingface/datasets/pull/2531
927,017,924
MDExOlB1bGxSZXF1ZXN0Njc1MjM2MDYz
2,531
Fix dev version
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,353,430,000
1,624,355,230,000
1,624,355,229,000
MEMBER
null
The dev version that ends in `.dev0` should be greater than the current version. However it happens that `1.8.0 > 1.8.0.dev0` for example. Therefore we need to use `1.8.1.dev0` for example in this case. I updated the dev version to use `1.8.1.dev0`, and I also added a comment in the setup.py in the release steps about this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2531/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2531", "html_url": "https://github.com/huggingface/datasets/pull/2531", "diff_url": "https://github.com/huggingface/datasets/pull/2531.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2531.patch", "merged_at": 1624355229000 }
true
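The version-ordering point in #2531 (`1.8.0 > 1.8.0.dev0`) is easy to check with the standard `packaging` library, used here purely for illustration:

```python
from packaging.version import Version

assert Version("1.8.0.dev0") < Version("1.8.0")   # a .dev0 of the same release sorts below it
assert Version("1.8.1.dev0") > Version("1.8.0")   # bumping the patch keeps the dev version ahead
```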
https://api.github.com/repos/huggingface/datasets/issues/2530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2530/comments
https://api.github.com/repos/huggingface/datasets/issues/2530/events
https://github.com/huggingface/datasets/pull/2530
927,013,773
MDExOlB1bGxSZXF1ZXN0Njc1MjMyNDk0
2,530
Fixed label parsing in the ProductReviews dataset
{ "login": "yavuzKomecoglu", "id": 5150963, "node_id": "MDQ6VXNlcjUxNTA5NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yavuzKomecoglu", "html_url": "https://github.com/yavuzKomecoglu", "followers_url": "https://api.github.com/users/yavuzKomecoglu/followers", "following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}", "gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}", "starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions", "organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs", "repos_url": "https://api.github.com/users/yavuzKomecoglu/repos", "events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}", "received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq, can you please review this PR?\r\nWhat exactly is the problem in the test case? Should it matter?", "Hi ! Thanks for fixing this :)\r\n\r\nThe CI fails for two reasons:\r\n- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n```yaml\r\npretty_name: Turkish Product Reviews\r\n```\r\n- The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file", "> Hi ! Thanks for fixing this :)\r\n> \r\n> The CI fails for two reasons:\r\n> \r\n> * the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in the yaml tags:\r\n> \r\n> \r\n> ```yaml\r\n> pretty_name: Turkish Product Reviews\r\n> ```\r\n> \r\n> * The test that runs the turkish_product_reviews.py file on the dummy_data.zip data returned 0 examples. Indeed it looks like you changed dummy_data.zip file and now it is an empty zip file. I think you can fix that by reverting your change to the dummy_data.zip file\r\n\r\nMany thanks for the quick feedback.\r\nI made the relevant fixes but still got the error :(", "> Thanks !\r\n> The CI was failing because of the dataset card that was missing some sections. I fixed that.\r\n> \r\n> It's all good now\r\n\r\nSuper. Thanks for the support." ]
1,624,353,165,000
1,624,366,520,000
1,624,366,360,000
CONTRIBUTOR
null
Fixed issue with parsing dataset labels.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2530/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2530/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2530", "html_url": "https://github.com/huggingface/datasets/pull/2530", "diff_url": "https://github.com/huggingface/datasets/pull/2530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2530.patch", "merged_at": 1624366360000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2529/comments
https://api.github.com/repos/huggingface/datasets/issues/2529/events
https://github.com/huggingface/datasets/pull/2529
926,378,812
MDExOlB1bGxSZXF1ZXN0Njc0NjkxNjA5
2,529
Add summarization template
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Nice thanks !\r\n> Could you just move the test outside of the BaseDatasetTest class please ? Otherwise it will unnecessarily be run twice.\r\n\r\nsure, on it! thanks for the explanations about the `self._to` method :)", "@lhoestq i've moved all the task template tests outside of `BaseDatasetTest` and collected them in their dedicated test case. (at some point i'll revisit this so we can just use `pytest` natively, but the PR is already getting out-of-scope :))" ]
1,624,291,711,000
1,624,458,131,000
1,624,455,010,000
MEMBER
null
This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template. Usage: ```python from datasets import load_dataset from datasets.tasks import Summarization ds = load_dataset("xsum", split="train") # Dataset({ # features: ['document', 'summary', 'id'], # num_rows: 204045 # }) summarization = Summarization(text_column="document", summary_column="summary") ds.prepare_for_task(summarization) # Dataset({ # features: ['text', 'summary'], # num_rows: 204045 # }) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2529/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2529/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2529", "html_url": "https://github.com/huggingface/datasets/pull/2529", "diff_url": "https://github.com/huggingface/datasets/pull/2529.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2529.patch", "merged_at": 1624455010000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2528/comments
https://api.github.com/repos/huggingface/datasets/issues/2528/events
https://github.com/huggingface/datasets/issues/2528
926,314,656
MDU6SXNzdWU5MjYzMTQ2NTY=
2,528
Logging cannot be set to NOTSET similar to transformers
{ "login": "joshzwiebel", "id": 34662010, "node_id": "MDQ6VXNlcjM0NjYyMDEw", "avatar_url": "https://avatars.githubusercontent.com/u/34662010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshzwiebel", "html_url": "https://github.com/joshzwiebel", "followers_url": "https://api.github.com/users/joshzwiebel/followers", "following_url": "https://api.github.com/users/joshzwiebel/following{/other_user}", "gists_url": "https://api.github.com/users/joshzwiebel/gists{/gist_id}", "starred_url": "https://api.github.com/users/joshzwiebel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joshzwiebel/subscriptions", "organizations_url": "https://api.github.com/users/joshzwiebel/orgs", "repos_url": "https://api.github.com/users/joshzwiebel/repos", "events_url": "https://api.github.com/users/joshzwiebel/events{/privacy}", "received_events_url": "https://api.github.com/users/joshzwiebel/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @joshzwiebel, thanks for reporting. We are going to align with `transformers`." ]
1,624,287,894,000
1,624,545,767,000
1,624,545,767,000
NONE
null
## Describe the bug In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. This is because transformers set the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449) `disable=bool(logging.get_verbosity() == logging.NOTSET)` and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493) `not_verbose = bool(logger.getEffectiveLevel() > WARNING)` ## Steps to reproduce the bug ```python import datasets import logging datasets.logging.get_verbosity = lambda : logging.NOTSET datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ``` ## Expected results The code should download and load the dataset as normal without displaying progress bars ## Actual results ```ImportError Traceback (most recent call last) <ipython-input-4-aec65c0509c6> in <module> ----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs) 713 dataset=True, 714 return_resolved_file_path=True, --> 715 use_auth_token=use_auth_token, 716 ) 717 # Set the base path for downloads as the parent of the script location ~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) 350 file_path = hf_bucket_url(path, filename=name, dataset=False) 351 try: --> 352 local_path = cached_path(file_path, download_config=download_config) 353 except FileNotFoundError: 354 raise FileNotFoundError( ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 289 use_etag=download_config.use_etag, 290 max_retries=download_config.max_retries, --> 291 use_auth_token=download_config.use_auth_token, 292 ) 293 elif os.path.exists(url_or_filename): ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 668 headers=headers, 669 cookies=cookies, --> 670 max_retries=max_retries, 671 ) 672 ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries) 493 initial=resume_size, 494 desc="Downloading", --> 495 disable=not_verbose, 496 ) 497 for chunk in response.iter_content(chunk_size=1024): ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs) 217 total = self.total * unit_scale if self.total else self.total 218 self.container = self.status_printer( --> 219 self.fp, total, self.desc, self.ncols) 220 self.sp = self.display 221 ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols) 95 if IProgress is None: # #187 #451 #558 #872 96 raise ImportError( ---> 97 "IProgress not found. Please update jupyter and ipywidgets." 
98 " See https://ipywidgets.readthedocs.io/en/stable" 99 "/user_install.html") ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8 - Python version: 3.7.10 - PyArrow version: 3.0.0 I am running this code on Deepnote and which important to this issue **does not** support IPywidgets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2528/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2527/comments
https://api.github.com/repos/huggingface/datasets/issues/2527/events
https://github.com/huggingface/datasets/pull/2527
926,031,525
MDExOlB1bGxSZXF1ZXN0Njc0MzkzNjQ5
2,527
Replace bad `n>1M` size tag
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,268,555,000
1,624,288,010,000
1,624,288,009,000
MEMBER
null
Some datasets were still using the old `n>1M` tag which has been replaced with tags `1M<n<10M`, etc. This resulted in unexpected results when searching for datasets bigger than 1M on the hub, since it was only showing the ones with the tag `n>1M`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2527/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2527/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2527", "html_url": "https://github.com/huggingface/datasets/pull/2527", "diff_url": "https://github.com/huggingface/datasets/pull/2527.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2527.patch", "merged_at": 1624288009000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2525/comments
https://api.github.com/repos/huggingface/datasets/issues/2525/events
https://github.com/huggingface/datasets/pull/2525
925,896,358
MDExOlB1bGxSZXF1ZXN0Njc0Mjc5MTgy
2,525
Use scikit-learn package rather than sklearn in setup.py
{ "login": "lesteve", "id": 1680079, "node_id": "MDQ6VXNlcjE2ODAwNzk=", "avatar_url": "https://avatars.githubusercontent.com/u/1680079?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lesteve", "html_url": "https://github.com/lesteve", "followers_url": "https://api.github.com/users/lesteve/followers", "following_url": "https://api.github.com/users/lesteve/following{/other_user}", "gists_url": "https://api.github.com/users/lesteve/gists{/gist_id}", "starred_url": "https://api.github.com/users/lesteve/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lesteve/subscriptions", "organizations_url": "https://api.github.com/users/lesteve/orgs", "repos_url": "https://api.github.com/users/lesteve/repos", "events_url": "https://api.github.com/users/lesteve/events{/privacy}", "received_events_url": "https://api.github.com/users/lesteve/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,259,065,000
1,624,269,673,000
1,624,265,853,000
CONTRIBUTOR
null
The sklearn package is an historical thing and should probably not be used by anyone, see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats. Note: this affects only TESTS_REQUIRE so I guess only developers not end users.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2525/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2525", "html_url": "https://github.com/huggingface/datasets/pull/2525", "diff_url": "https://github.com/huggingface/datasets/pull/2525.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2525.patch", "merged_at": 1624265853000 }
true
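For #2525, the change amounts to depending on the real PyPI package name in the test requirements. A hedged sketch (the actual TESTS_REQUIRE list in setup.py is longer):

```python
TESTS_REQUIRE = [
    # "sklearn",      # deprecated alias package, see scikit-learn/scikit-learn#8215
    "scikit-learn",   # the canonical package name
]
```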
https://api.github.com/repos/huggingface/datasets/issues/2524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2524/comments
https://api.github.com/repos/huggingface/datasets/issues/2524/events
https://github.com/huggingface/datasets/pull/2524
925,610,934
MDExOlB1bGxSZXF1ZXN0Njc0MDQzNzk1
2,524
Raise FileNotFoundError in WindowsFileLock
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Could you clarify what it fixes exactly and give more details please ? Especially why this is related to the windows hanging error ?", "This has already been merged, but I'll clarify the idea of this PR. Before this merge, FileLock was the only component affected by the max path limit on Windows (that came to my notice) because of its infinite loop that would suppress errors. So instead of suppressing the `FileNotFoundError` that is thrown by `os.open` if the file name is longer than the max allowed path length, this PR reraises it to notify the user." ]
1,624,199,111,000
1,624,874,182,000
1,624,870,059,000
CONTRIBUTOR
null
Closes #2443
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2524/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2524", "html_url": "https://github.com/huggingface/datasets/pull/2524", "diff_url": "https://github.com/huggingface/datasets/pull/2524.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2524.patch", "merged_at": 1624870059000 }
true
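The follow-up comment in #2524 explains the fix: on Windows, `os.open` raises `FileNotFoundError` when the lock path exceeds the maximum path length, and the acquire loop used to swallow it and spin forever. A minimal sketch of the re-raising behaviour (not the actual filelock code):

```python
import os

def try_acquire(lock_file: str):
    try:
        return os.open(lock_file, os.O_RDWR | os.O_CREAT | os.O_TRUNC)
    except FileNotFoundError:
        raise  # e.g. the path is longer than Windows allows: surface the error to the user
    except OSError:
        return None  # lock held by another process: the caller retries later
```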
https://api.github.com/repos/huggingface/datasets/issues/2523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2523/comments
https://api.github.com/repos/huggingface/datasets/issues/2523/events
https://github.com/huggingface/datasets/issues/2523
925,421,008
MDU6SXNzdWU5MjU0MjEwMDg=
2,523
Fr
{ "login": "aDrIaNo34500", "id": 71971234, "node_id": "MDQ6VXNlcjcxOTcxMjM0", "avatar_url": "https://avatars.githubusercontent.com/u/71971234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aDrIaNo34500", "html_url": "https://github.com/aDrIaNo34500", "followers_url": "https://api.github.com/users/aDrIaNo34500/followers", "following_url": "https://api.github.com/users/aDrIaNo34500/following{/other_user}", "gists_url": "https://api.github.com/users/aDrIaNo34500/gists{/gist_id}", "starred_url": "https://api.github.com/users/aDrIaNo34500/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aDrIaNo34500/subscriptions", "organizations_url": "https://api.github.com/users/aDrIaNo34500/orgs", "repos_url": "https://api.github.com/users/aDrIaNo34500/repos", "events_url": "https://api.github.com/users/aDrIaNo34500/events{/privacy}", "received_events_url": "https://api.github.com/users/aDrIaNo34500/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,118,192,000
1,624,128,503,000
1,624,128,503,000
NONE
null
__Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2523/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2523/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2521/comments
https://api.github.com/repos/huggingface/datasets/issues/2521/events
https://github.com/huggingface/datasets/pull/2521
925,030,685
MDExOlB1bGxSZXF1ZXN0NjczNTgxNzQ4
2,521
Insert text classification template for Emotion dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,031,779,000
1,624,267,351,000
1,624,267,351,000
MEMBER
null
This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2521/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2521/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2521", "html_url": "https://github.com/huggingface/datasets/pull/2521", "diff_url": "https://github.com/huggingface/datasets/pull/2521.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2521.patch", "merged_at": 1624267351000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2519/comments
https://api.github.com/repos/huggingface/datasets/issues/2519/events
https://github.com/huggingface/datasets/pull/2519
924,903,240
MDExOlB1bGxSZXF1ZXN0NjczNDcyMzYy
2,519
Improve performance of pandas arrow extractor
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks like this change\r\n```\r\npa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)\r\n```\r\ndoesn't return a Series with the correct type.\r\nThis is related to https://issues.apache.org/jira/browse/ARROW-9664\r\n\r\nSince the types_mapper isn't taken into account, the ArrayXD types are not converted to the correct pandas extension dtype", "@lhoestq I think I found a workaround... 😉 ", "For some reason the benchmarks are not run Oo", "Anyway, merging.\r\nWe'll see on master how much speed ups we got" ]
1,624,022,681,000
1,624,266,366,000
1,624,266,366,000
MEMBER
null
While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2519/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2519", "html_url": "https://github.com/huggingface/datasets/pull/2519", "diff_url": "https://github.com/huggingface/datasets/pull/2519.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2519.patch", "merged_at": 1624266366000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2518/comments
https://api.github.com/repos/huggingface/datasets/issues/2518/events
https://github.com/huggingface/datasets/pull/2518
924,654,100
MDExOlB1bGxSZXF1ZXN0NjczMjU5Nzg1
2,518
Add task templates for tydiqa and xquad
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Just tested TydiQA and it works fine :)" ]
1,624,003,594,000
1,624,028,477,000
1,624,027,833,000
MEMBER
null
This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub. Notes: * I could not test the tydiqa implementation since I don't have enough disk space 😢 . But I am confident the template works :) * there exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested column described in #2434
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2518/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2518/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2518", "html_url": "https://github.com/huggingface/datasets/pull/2518", "diff_url": "https://github.com/huggingface/datasets/pull/2518.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2518.patch", "merged_at": 1624027833000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2517/comments
https://api.github.com/repos/huggingface/datasets/issues/2517/events
https://github.com/huggingface/datasets/pull/2517
924,643,345
MDExOlB1bGxSZXF1ZXN0NjczMjUwODk1
2,517
Fix typo in MatthewsCorrelation class name
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,624,002,786,000
1,624,005,835,000
1,624,005,835,000
MEMBER
null
Close #2513.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2517/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2517/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2517", "html_url": "https://github.com/huggingface/datasets/pull/2517", "diff_url": "https://github.com/huggingface/datasets/pull/2517.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2517.patch", "merged_at": 1624005835000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2515/comments
https://api.github.com/repos/huggingface/datasets/issues/2515/events
https://github.com/huggingface/datasets/pull/2515
924,435,447
MDExOlB1bGxSZXF1ZXN0NjczMDc3NTIx
2,515
CRD3 dataset card
{ "login": "wilsonyhlee", "id": 1937386, "node_id": "MDQ6VXNlcjE5MzczODY=", "avatar_url": "https://avatars.githubusercontent.com/u/1937386?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wilsonyhlee", "html_url": "https://github.com/wilsonyhlee", "followers_url": "https://api.github.com/users/wilsonyhlee/followers", "following_url": "https://api.github.com/users/wilsonyhlee/following{/other_user}", "gists_url": "https://api.github.com/users/wilsonyhlee/gists{/gist_id}", "starred_url": "https://api.github.com/users/wilsonyhlee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wilsonyhlee/subscriptions", "organizations_url": "https://api.github.com/users/wilsonyhlee/orgs", "repos_url": "https://api.github.com/users/wilsonyhlee/repos", "events_url": "https://api.github.com/users/wilsonyhlee/events{/privacy}", "received_events_url": "https://api.github.com/users/wilsonyhlee/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,975,847,000
1,624,270,724,000
1,624,270,724,000
CONTRIBUTOR
null
This PR adds additional information to the CRD3 dataset card.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2515/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2515/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2515", "html_url": "https://github.com/huggingface/datasets/pull/2515", "diff_url": "https://github.com/huggingface/datasets/pull/2515.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2515.patch", "merged_at": 1624270724000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2513/comments
https://api.github.com/repos/huggingface/datasets/issues/2513/events
https://github.com/huggingface/datasets/issues/2513
924,174,413
MDU6SXNzdWU5MjQxNzQ0MTM=
2,513
Corelation should be Correlation
{ "login": "colbym-MM", "id": 71514164, "node_id": "MDQ6VXNlcjcxNTE0MTY0", "avatar_url": "https://avatars.githubusercontent.com/u/71514164?v=4", "gravatar_id": "", "url": "https://api.github.com/users/colbym-MM", "html_url": "https://github.com/colbym-MM", "followers_url": "https://api.github.com/users/colbym-MM/followers", "following_url": "https://api.github.com/users/colbym-MM/following{/other_user}", "gists_url": "https://api.github.com/users/colbym-MM/gists{/gist_id}", "starred_url": "https://api.github.com/users/colbym-MM/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/colbym-MM/subscriptions", "organizations_url": "https://api.github.com/users/colbym-MM/orgs", "repos_url": "https://api.github.com/users/colbym-MM/repos", "events_url": "https://api.github.com/users/colbym-MM/events{/privacy}", "received_events_url": "https://api.github.com/users/colbym-MM/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @colbym-MM, thanks for reporting. We are fixing it." ]
1,623,950,928,000
1,624,005,835,000
1,624,005,835,000
NONE
null
https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2513/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2513/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2512/comments
https://api.github.com/repos/huggingface/datasets/issues/2512/events
https://github.com/huggingface/datasets/issues/2512
924,069,353
MDU6SXNzdWU5MjQwNjkzNTM=
2,512
seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict'
{ "login": "avidale", "id": 8642136, "node_id": "MDQ6VXNlcjg2NDIxMzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8642136?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avidale", "html_url": "https://github.com/avidale", "followers_url": "https://api.github.com/users/avidale/followers", "following_url": "https://api.github.com/users/avidale/following{/other_user}", "gists_url": "https://api.github.com/users/avidale/gists{/gist_id}", "starred_url": "https://api.github.com/users/avidale/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avidale/subscriptions", "organizations_url": "https://api.github.com/users/avidale/orgs", "repos_url": "https://api.github.com/users/avidale/repos", "events_url": "https://api.github.com/users/avidale/events{/privacy}", "received_events_url": "https://api.github.com/users/avidale/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Sorry, I was using an old version of sequeval" ]
1,623,944,162,000
1,623,944,767,000
1,623,944,767,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric seqeval = load_metric("seqeval") seqeval.compute(predictions=[['A']], references=[['A']]) ``` ## Expected results The function computes a dict with metrics ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-69a57f5cf06f> in <module> 1 from datasets import load_dataset, load_metric 2 seqeval = load_metric("seqeval") ----> 3 seqeval.compute(predictions=[['A']], references=[['A']]) ~/p3/lib/python3.7/site-packages/datasets/metric.py in compute(self, *args, **kwargs) 396 references = self.data["references"] 397 with temp_seed(self.seed): --> 398 output = self._compute(predictions=predictions, references=references, **kwargs) 399 400 if self.buf_writer is not None: ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py in _compute(self, predictions, references, suffix) 95 96 def _compute(self, predictions, references, suffix=False): ---> 97 report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True) 98 report.pop("macro avg") 99 report.pop("weighted avg") TypeError: classification_report() got an unexpected keyword argument 'output_dict' ``` ## Environment info sklearn=0.24 datasets=1.1.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2512/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2512/timeline
null
null
null
false
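For context on the seqeval record above (#2512, which was resolved by simply upgrading seqeval): a minimal hedged sketch of a call that should work with an up-to-date seqeval installed. The IOB-style labels below are illustrative and not taken from the original report.

```python
# Minimal sketch, assuming seqeval is installed and up to date.
from datasets import load_metric

seqeval = load_metric("seqeval")
results = seqeval.compute(
    predictions=[["B-PER", "I-PER", "O"]],  # illustrative IOB-tagged sequence
    references=[["B-PER", "I-PER", "O"]],
)
print(results["overall_f1"])  # seqeval reports overall precision/recall/f1/accuracy
```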
https://api.github.com/repos/huggingface/datasets/issues/2511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2511/comments
https://api.github.com/repos/huggingface/datasets/issues/2511/events
https://github.com/huggingface/datasets/issues/2511
923,762,133
MDU6SXNzdWU5MjM3NjIxMzM=
2,511
Add C4
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Update on this: I'm computing the checksums of the data files. It will be available soon", "Added in #2575 :)" ]
1,623,925,864,000
1,625,488,618,000
1,625,488,617,000
MEMBER
null
## Adding a Dataset - **Name:** *C4* - **Description:** *https://github.com/allenai/allennlp/discussions/5056* - **Paper:** *https://arxiv.org/abs/1910.10683* - **Data:** *https://huggingface.co/datasets/allenai/c4* - **Motivation:** *Used a lot for pretraining* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Should fix https://github.com/huggingface/datasets/issues/1710
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2511/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2510/comments
https://api.github.com/repos/huggingface/datasets/issues/2510/events
https://github.com/huggingface/datasets/pull/2510
923,735,485
MDExOlB1bGxSZXF1ZXN0NjcyNDY3MzY3
2,510
Add align_labels_with_mapping to DatasetDict
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,924,215,000
1,623,926,725,000
1,623,926,724,000
MEMBER
null
https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method. In this PR I also added `DatasetDict.align_labels_with_mapping`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2510/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2510", "html_url": "https://github.com/huggingface/datasets/pull/2510", "diff_url": "https://github.com/huggingface/datasets/pull/2510.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2510.patch", "merged_at": 1623926724000 }
true
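A hedged usage sketch of the `DatasetDict.align_labels_with_mapping` method added in #2510 above; the dataset name and target mapping below are chosen only for illustration.

```python
from datasets import load_dataset

# hypothetical target mapping; MNLI's label names are reused purely for illustration
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}

dset_dict = load_dataset("glue", "mnli")                            # a DatasetDict
dset_dict = dset_dict.align_labels_with_mapping(label2id, "label")  # applied to every split at once
```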
https://api.github.com/repos/huggingface/datasets/issues/2509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2509/comments
https://api.github.com/repos/huggingface/datasets/issues/2509/events
https://github.com/huggingface/datasets/pull/2509
922,846,035
MDExOlB1bGxSZXF1ZXN0NjcxNjcyMzU5
2,509
Fix fingerprint when moving cache dir
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Windows, why are you doing this to me ?", "Thanks @lhoestq, I'm starting reviewing this PR.", "Yea issues on windows are about long paths, not long filenames.\r\nWe can make sure the lock filenames are not too long, but not for the paths", "Took your suggestions into account @albertvillanova :)" ]
1,623,861,909,000
1,624,287,904,000
1,624,287,903,000
MEMBER
null
The fingerprint of a dataset changes if the cache directory is moved. I fixed that by setting the fingerprint to be the hash of: - the relative cache dir (dataset_name/version/config_id) - the requested split Close #2496 I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests from running on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255. We usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, then the filelock filename could end up being very long.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2509/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2509", "html_url": "https://github.com/huggingface/datasets/pull/2509", "diff_url": "https://github.com/huggingface/datasets/pull/2509.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2509.patch", "merged_at": 1624287903000 }
true
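A hedged sketch of the idea behind the `hash_filename_if_too_long` helper mentioned in #2509 above; the hash choice, the constant, and the exact behavior are assumptions, not the PR's actual implementation.

```python
import hashlib
import os

MAX_FILENAME_LENGTH = 255  # common filesystem limit on filename length

def hash_filename_if_too_long(path: str) -> str:
    """Replace an over-long lock filename with a fixed-length hash of it (sketch)."""
    dirname, filename = os.path.split(path)
    if len(filename) <= MAX_FILENAME_LENGTH:
        return path
    hashed = hashlib.md5(filename.encode("utf-8")).hexdigest()  # 32 chars, far below the limit
    return os.path.join(dirname, hashed + ".lock")
```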
https://api.github.com/repos/huggingface/datasets/issues/2507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2507/comments
https://api.github.com/repos/huggingface/datasets/issues/2507/events
https://github.com/huggingface/datasets/pull/2507
921,441,962
MDExOlB1bGxSZXF1ZXN0NjcwNDQ0MDgz
2,507
Rearrange JSON field names to match passed features schema field names
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[]
1,623,766,202,000
1,623,840,469,000
1,623,840,469,000
MEMBER
null
This PR depends on PR #2453 (which must be merged first). Close #2366.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2507/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2507", "html_url": "https://github.com/huggingface/datasets/pull/2507", "diff_url": "https://github.com/huggingface/datasets/pull/2507.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2507.patch", "merged_at": 1623840469000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2506/comments
https://api.github.com/repos/huggingface/datasets/issues/2506/events
https://github.com/huggingface/datasets/pull/2506
921,435,598
MDExOlB1bGxSZXF1ZXN0NjcwNDM4NTgx
2,506
Add course banner
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,765,834,000
1,623,774,336,000
1,623,774,335,000
MEMBER
null
This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2506/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2506/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2506", "html_url": "https://github.com/huggingface/datasets/pull/2506", "diff_url": "https://github.com/huggingface/datasets/pull/2506.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2506.patch", "merged_at": 1623774335000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2505/comments
https://api.github.com/repos/huggingface/datasets/issues/2505/events
https://github.com/huggingface/datasets/pull/2505
921,234,797
MDExOlB1bGxSZXF1ZXN0NjcwMjY2NjQy
2,505
Make numpy arrow extractor faster
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Looks like we have a nice speed up in some benchmarks. For example:\r\n- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec\r\n- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec", "Can we convert this draft to PR @lhoestq ?", "Ready for review ! cc @vblagoje", "@lhoestq I tried the branch and it works for me. Although performance trace now shows a speedup, the overall pre-training speed up is minimal. But that's on my plate to explore further. ", "Thanks for investigating @vblagoje \r\n\r\n@albertvillanova , do you have any comments on this PR ? Otherwise I think we can merge it" ]
1,623,751,892,000
1,624,874,019,000
1,624,874,018,000
MEMBER
null
I changed the NumpyArrowExtractor to call to_numpy directly and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498. This could make the numpy/torch/tf/jax formatting faster.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2505/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2505/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2505", "html_url": "https://github.com/huggingface/datasets/pull/2505", "diff_url": "https://github.com/huggingface/datasets/pull/2505.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2505.patch", "merged_at": 1624874018000 }
true
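To illustrate the direction of the change in #2505 above (this is not code from the PR): extracting a pyarrow column with `to_numpy` avoids materializing Python objects, which is typically much faster than going through `to_pylist`.

```python
import numpy as np
import pyarrow as pa

arr = pa.array(np.random.rand(5000))
as_list = arr.to_pylist()                   # builds Python floats, slower
as_np = arr.to_numpy(zero_copy_only=False)  # returns a numpy array, usually much faster
```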
https://api.github.com/repos/huggingface/datasets/issues/2502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2502/comments
https://api.github.com/repos/huggingface/datasets/issues/2502/events
https://github.com/huggingface/datasets/pull/2502
920,623,572
MDExOlB1bGxSZXF1ZXN0NjY5NzQ1MDA5
2,502
JAX integration
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,691,463,000
1,624,292,150,000
1,624,292,149,000
MEMBER
null
Hi ! I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow). It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects. ```python from datasets import Dataset d = Dataset.from_dict({"foo": [[0., 1., 2.]]}) d = d.with_format("jax") d[0] # {'foo': DeviceArray([0., 1., 2.], dtype=float32)} ``` A few details: - The default integer precision for jax depends on the jax configuration `jax_enable_x64` (see [here](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision)), I took that into account. Unless `jax_enable_x64` is specified, it is int32 by default - AFAIK it's not possible to do a full conversion from arrow data to jax data. We are doing arrow -> numpy -> jax but the numpy -> jax part doesn't do zero copy unfortunately (see [here](https://github.com/google/jax/issues/4486)) - the env var for disabling JAX is `USE_JAX`. However I noticed that in `transformers` it is `USE_FLAX`. This is not an issue though IMO I also updated `convert_to_python_objects` to allow users to pass jax.numpy.ndarray objects to build a dataset. Since the `convert_to_python_objects` method became slow because it's the time when pytorch, tf (and now jax) are imported, I fixed it by checking `sys.modules` to avoid unnecessary import of pytorch, tf or jax. Close #2495
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2502/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2502/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2502", "html_url": "https://github.com/huggingface/datasets/pull/2502", "diff_url": "https://github.com/huggingface/datasets/pull/2502.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2502.patch", "merged_at": 1624292148000 }
true
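A hedged sketch of the `sys.modules` trick described in #2502 above: only check for a framework's array type if the user has already imported that framework, so the library never pays the import cost itself. The function name is illustrative, not the actual helper in `datasets`.

```python
import sys

def _is_jax_array(obj) -> bool:
    if "jax" not in sys.modules:
        return False  # jax was never imported by the user, so obj cannot be a jax array
    import jax.numpy as jnp  # cheap at this point, since jax is already loaded
    return isinstance(obj, jnp.ndarray)
```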
https://api.github.com/repos/huggingface/datasets/issues/2501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2501/comments
https://api.github.com/repos/huggingface/datasets/issues/2501/events
https://github.com/huggingface/datasets/pull/2501
920,579,634
MDExOlB1bGxSZXF1ZXN0NjY5NzA3Nzc0
2,501
Add Zenodo metadata file with license
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[]
1,623,688,092,000
1,623,689,382,000
1,623,689,382,000
MEMBER
null
This Zenodo metadata file fixes the name of the `Datasets` license appearing in the DOI as `"Apache-2.0"`, which otherwise by default is `"other-open"`. Close #2472.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2501/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2501", "html_url": "https://github.com/huggingface/datasets/pull/2501", "diff_url": "https://github.com/huggingface/datasets/pull/2501.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2501.patch", "merged_at": 1623689382000 }
true
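A hedged illustration of the kind of Zenodo metadata file #2501 above describes, written from Python for consistency with the other snippets; only the license id comes from the PR description, everything else is an assumption.

```python
import json

zenodo_metadata = {"license": "Apache-2.0"}  # fixes the license shown in the Zenodo DOI record

with open(".zenodo.json", "w") as f:
    json.dump(zenodo_metadata, f, indent=4)
```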
https://api.github.com/repos/huggingface/datasets/issues/2500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2500/comments
https://api.github.com/repos/huggingface/datasets/issues/2500/events
https://github.com/huggingface/datasets/pull/2500
920,471,411
MDExOlB1bGxSZXF1ZXN0NjY5NjE2MjQ1
2,500
Add load_dataset_builder
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @mariosasko, thanks for taking on this issue.\r\n\r\nJust a few logistic suggestions, as you are one of our most active contributors ❤️ :\r\n- When you start working on an issue, you can self-assign it to you by commenting on the issue page with the keyword: `#self-assign`; we have implemented a GitHub Action to take care of that... 😉 \r\n- When you are still working on your Pull Request, instead of using the `[WIP]` in the PR name, you can instead create a *draft* pull request: use the drop-down (on the right of the *Create Pull Request* button) and select **Create Draft Pull Request**, then click **Draft Pull Request**.\r\n\r\nI hope you find these hints useful. 🤗 ", "@albertvillanova Thanks for the tips. When creating this PR, it slipped my mind that this should be a draft. GH has an option to convert already created PRs to draft PRs, but this requires write access for the repo, so maybe you can help.", "Ready for the review!\r\n\r\nOne additional change. I've modified the `camelcase_to_snakecase`/`snakecase_to_camelcase` conversion functions to fix conversion of the names with 2 or more underscores (e.g. `camelcase_to_snakecase(\"__DummyDataset__\")` would return `___dummy_dataset__`; notice one extra underscore at the beginning). The implementation is based on the [inflection](https://pypi.org/project/inflection/) library.\r\n", "Thank you for adding this feature, @mariosasko - this is really awesome!\r\n\r\nTried with:\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('openwebtext-10k'); print(b.cache_dir)\"\r\nUsing the latest cached version of the module from /home/stas/.cache/huggingface/modules/datasets_modules/datasets\r\n/openwebtext-10k/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b (last modified on Wed May 12 \r\n20:22:53 2021) \r\n\r\nsince it couldn't be found locally at openwebtext-10k/openwebtext-10k.py \r\n\r\nor remotely (FileNotFoundError).\r\n\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nThe logger message (edited by me to add new lines to point the issues out) is a bit confusing to the user - that is what does `FileNotFoundError` refer to? \r\n\r\n1. May be replace `FileNotFoundError` with where it was looking for a file online. But then the remote file is there - it's found \r\n2. I'm not sure why it says \"since it couldn't be found locally\" - as it is locally found at the cache folder and again what does \" locally at openwebtext-10k/openwebtext-10k.py\" mean - i.e. where does it look for it? Is it `./openwebtext-10k/openwebtext-10k.py` it's looking for? or in some specific dir?\r\n\r\nIf the cached version always supersedes any other versions perhaps this is what it should say?\r\n```\r\nfound cached version at xxx, not looking for a local at yyy, not downloading remote at zzz\r\n```", "Hi ! Thanks for the comments\r\n\r\nRegarding your last message:\r\nYou must pass `stas/openwebtext-10k` as in `load_dataset` instead of `openwebtext-10k`. 
Otherwise it doesn't know how to retrieve the builder from the HF Hub.\r\n\r\nWhen you specify a dataset name without a slash, it tries to load a canonical dataset or it looks locally at ./openwebtext-10k/openwebtext-10k.py\r\nHere since `openwebtext-10k` is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\nAs a fallback it managed to find the dataset script in your cache and it used this one.", "Oh, I see, so I actually used an incorrect input. so it was a user error. Correcting it:\r\n\r\n```\r\npython -c \"from datasets import load_dataset_builder; b = load_dataset_builder('stas/openwebtext-10k'); print(b.cache_dir)\"\r\n/home/stas/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b\r\n```\r\n\r\nNow there is no logger message. Got it!\r\n\r\nOK, I'm not sure the magical recovery it did in first place is most beneficial in the long run. I'd have rather it failed and said: \"incorrect input there is no such dataset as 'openwebtext-10k' at <this path> or <this url>\" - because if it doesn't fail I may leave it in the code and it'll fail later when another user tries to use my code and won't have the cache. Does it make sense? Giving me `this url` allows me to go to the datasets hub and realize that the dataset is missing the username qualifier.\r\n\r\n> Here since openwebtext-10k is not a canonical dataset and doesn't exist locally at ./openwebtext-10k/openwebtext-10k.py: it raised a FileNotFoundError.\r\n\r\nExcept it slapped the exception name to ` remotely (FileNotFoundError).` which makes no sense.\r\n\r\nPlus for the local it's not clear where is it looking relatively too when it gets `FileNotFoundError` - perhaps it'd help to use absolute path and use it in the message?\r\n\r\n---------------\r\n\r\nFinally, the logger format is not set up so the user gets a warning w/o knowing it's a warning. As you can see it's missing the WARNING pre-amble in https://github.com/huggingface/datasets/pull/2500#issuecomment-874250500\r\n\r\ni.e. I had no idea it was warning me of something, I was just trying to make sense of the message that's why I started the discussion and otherwise I'd have completely missed the point of me making an error." ]
1,623,680,865,000
1,625,789,296,000
1,625,481,958,000
CONTRIBUTOR
null
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself. TODOs: - [x] Add docstring and entry in the docs - [x] Add tests Closes #2484
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2500/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2500/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2500", "html_url": "https://github.com/huggingface/datasets/pull/2500", "diff_url": "https://github.com/huggingface/datasets/pull/2500.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2500.patch", "merged_at": 1625481957000 }
true
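A minimal usage sketch of the `load_dataset_builder` function added in #2500 above, based on the PR description and the discussion in its comments; the dataset name is chosen only for illustration.

```python
from datasets import load_dataset_builder

builder = load_dataset_builder("squad")
print(builder.cache_dir)      # where the data would be cached once downloaded
print(builder.info.features)  # the schema, available without downloading the data
```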
https://api.github.com/repos/huggingface/datasets/issues/2497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2497/comments
https://api.github.com/repos/huggingface/datasets/issues/2497/events
https://github.com/huggingface/datasets/pull/2497
920,250,382
MDExOlB1bGxSZXF1ZXN0NjY5NDI3OTU3
2,497
Use default cast for sliced list arrays if pyarrow >= 4
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[ "I believe we don't use PyArrow >= 4.0.0 because of some segfault issues:\r\nhttps://github.com/huggingface/datasets/blob/1206ffbcd42dda415f6bfb3d5040708f50413c93/setup.py#L78\r\nCan you confirm @lhoestq ?", "@SBrandeis pyarrow version 4.0.1 has fixed that issue: #2489 😉 " ]
1,623,664,967,000
1,623,780,378,000
1,623,680,677,000
MEMBER
null
Since pyarrow version 4, casting sliced lists is supported. This PR uses the default pyarrow cast in Datasets to cast sliced list arrays if the pyarrow version is >= 4. Related to PRs #2461 and #2490. cc: @lhoestq, @abhi1thakur, @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2497/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2497/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2497", "html_url": "https://github.com/huggingface/datasets/pull/2497", "diff_url": "https://github.com/huggingface/datasets/pull/2497.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2497.patch", "merged_at": 1623680677000 }
true
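A hedged sketch of the version gate described in #2497 above: rely on pyarrow's own cast for sliced list arrays when pyarrow >= 4, otherwise a custom fallback would be needed. The fallback is not shown and the function name is illustrative, not the actual code in the PR.

```python
import pyarrow as pa

def cast_sliced_list_array(array: pa.Array, target_type: pa.DataType) -> pa.Array:
    if int(pa.__version__.split(".")[0]) >= 4:
        return array.cast(target_type)  # casting sliced lists works natively since pyarrow 4
    # pyarrow < 4 would need a workaround (e.g. copying/recombining the array) -- not sketched here
    raise NotImplementedError("custom fallback for pyarrow < 4 not sketched here")
```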
https://api.github.com/repos/huggingface/datasets/issues/2496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2496/comments
https://api.github.com/repos/huggingface/datasets/issues/2496/events
https://github.com/huggingface/datasets/issues/2496
920,216,314
MDU6SXNzdWU5MjAyMTYzMTQ=
2,496
Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
1,623,662,426,000
1,624,287,903,000
1,624,287,903,000
MEMBER
null
`Dataset.map` uses the dataset fingerprint (a hash) for caching. However, the fingerprint seems to change when someone moves the cache directory of the dataset. This is because it uses the default fingerprint generation: 1. the dataset path is used to get the fingerprint 2. the modification times of the arrow file are also used to get the fingerprint To fix that we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2496/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2496/timeline
null
null
null
false
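A hedged sketch of the fix proposed in #2496 above: derive the fingerprint from the cache path relative to the cache root (plus the requested split), so that moving the cache directory leaves it unchanged. The hash function and the exact inputs are assumptions, not the implementation merged in #2509.

```python
import hashlib

def relative_fingerprint(dataset_name: str, config_id: str, version: str, split: str) -> str:
    relative_cache_dir = f"{dataset_name}/{version}/{config_id}"  # relative to the cache root
    payload = f"{relative_cache_dir}:{split}".encode("utf-8")
    return hashlib.md5(payload).hexdigest()
```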
https://api.github.com/repos/huggingface/datasets/issues/2495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2495/comments
https://api.github.com/repos/huggingface/datasets/issues/2495/events
https://github.com/huggingface/datasets/issues/2495
920,170,030
MDU6SXNzdWU5MjAxNzAwMzA=
2,495
JAX formatting
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
1,623,659,527,000
1,624,292,149,000
1,624,292,149,000
MEMBER
null
We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2495/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2495/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2493/comments
https://api.github.com/repos/huggingface/datasets/issues/2493/events
https://github.com/huggingface/datasets/pull/2493
919,833,281
MDExOlB1bGxSZXF1ZXN0NjY5MDc4OTcw
2,493
add tensorflow-macos support
{ "login": "slayerjain", "id": 12831254, "node_id": "MDQ6VXNlcjEyODMxMjU0", "avatar_url": "https://avatars.githubusercontent.com/u/12831254?v=4", "gravatar_id": "", "url": "https://api.github.com/users/slayerjain", "html_url": "https://github.com/slayerjain", "followers_url": "https://api.github.com/users/slayerjain/followers", "following_url": "https://api.github.com/users/slayerjain/following{/other_user}", "gists_url": "https://api.github.com/users/slayerjain/gists{/gist_id}", "starred_url": "https://api.github.com/users/slayerjain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/slayerjain/subscriptions", "organizations_url": "https://api.github.com/users/slayerjain/orgs", "repos_url": "https://api.github.com/users/slayerjain/repos", "events_url": "https://api.github.com/users/slayerjain/events{/privacy}", "received_events_url": "https://api.github.com/users/slayerjain/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@albertvillanova done!" ]
1,623,601,208,000
1,623,747,186,000
1,623,747,186,000
CONTRIBUTOR
null
ref - https://github.com/huggingface/datasets/issues/2068
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2493/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2493/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2493", "html_url": "https://github.com/huggingface/datasets/pull/2493", "diff_url": "https://github.com/huggingface/datasets/pull/2493.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2493.patch", "merged_at": 1623747186000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2492/comments
https://api.github.com/repos/huggingface/datasets/issues/2492/events
https://github.com/huggingface/datasets/pull/2492
919,718,102
MDExOlB1bGxSZXF1ZXN0NjY4OTkxODk4
2,492
Eduge
{ "login": "enod", "id": 6023883, "node_id": "MDQ6VXNlcjYwMjM4ODM=", "avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enod", "html_url": "https://github.com/enod", "followers_url": "https://api.github.com/users/enod/followers", "following_url": "https://api.github.com/users/enod/following{/other_user}", "gists_url": "https://api.github.com/users/enod/gists{/gist_id}", "starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enod/subscriptions", "organizations_url": "https://api.github.com/users/enod/orgs", "repos_url": "https://api.github.com/users/enod/repos", "events_url": "https://api.github.com/users/enod/events{/privacy}", "received_events_url": "https://api.github.com/users/enod/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,561,059,000
1,624,355,344,000
1,623,840,106,000
CONTRIBUTOR
null
Hi, awesome folks behind the huggingface! Here is my PR for the text classification dataset in Mongolian. Please do let me know in case you have anything to clarify. Thanks & Regards, Enod
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2492/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2492/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2492", "html_url": "https://github.com/huggingface/datasets/pull/2492", "diff_url": "https://github.com/huggingface/datasets/pull/2492.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2492.patch", "merged_at": 1623840106000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2491/comments
https://api.github.com/repos/huggingface/datasets/issues/2491/events
https://github.com/huggingface/datasets/pull/2491
919,714,506
MDExOlB1bGxSZXF1ZXN0NjY4OTg5MTUw
2,491
add eduge classification dataset
{ "login": "enod", "id": 6023883, "node_id": "MDQ6VXNlcjYwMjM4ODM=", "avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4", "gravatar_id": "", "url": "https://api.github.com/users/enod", "html_url": "https://github.com/enod", "followers_url": "https://api.github.com/users/enod/followers", "following_url": "https://api.github.com/users/enod/following{/other_user}", "gists_url": "https://api.github.com/users/enod/gists{/gist_id}", "starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enod/subscriptions", "organizations_url": "https://api.github.com/users/enod/orgs", "repos_url": "https://api.github.com/users/enod/repos", "events_url": "https://api.github.com/users/enod/events{/privacy}", "received_events_url": "https://api.github.com/users/enod/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing this PR as I'll submit a new one - bug free" ]
1,623,559,021,000
1,623,560,808,000
1,623,560,798,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2491/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2491/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2491", "html_url": "https://github.com/huggingface/datasets/pull/2491", "diff_url": "https://github.com/huggingface/datasets/pull/2491.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2491.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2490/comments
https://api.github.com/repos/huggingface/datasets/issues/2490/events
https://github.com/huggingface/datasets/pull/2490
919,571,385
MDExOlB1bGxSZXF1ZXN0NjY4ODc4NDA3
2,490
Allow latest pyarrow version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[ "i need some help with this" ]
1,623,507,454,000
1,625,590,492,000
1,623,657,203,000
MEMBER
null
Allow latest pyarrow version, once that version 4.0.1 fixes the segfault bug introduced in version 4.0.0. Close #2489.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2490/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2490/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2490", "html_url": "https://github.com/huggingface/datasets/pull/2490", "diff_url": "https://github.com/huggingface/datasets/pull/2490.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2490.patch", "merged_at": 1623657203000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2489/comments
https://api.github.com/repos/huggingface/datasets/issues/2489/events
https://github.com/huggingface/datasets/issues/2489
919,569,749
MDU6SXNzdWU5MTk1Njk3NDk=
2,489
Allow latest pyarrow version once segfault bug is fixed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,623,506,992,000
1,623,657,203,000
1,623,657,203,000
MEMBER
null
As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568): - it was fixed on 3 May 2021 - version 4.0.1 was released on 19 May 2021 with the bug fix
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2489/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2488/comments
https://api.github.com/repos/huggingface/datasets/issues/2488/events
https://github.com/huggingface/datasets/pull/2488
919,500,756
MDExOlB1bGxSZXF1ZXN0NjY4ODIwNDA1
2,488
Set configurable downloaded datasets path
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[]
1,623,488,943,000
1,623,662,007,000
1,623,659,347,000
MEMBER
null
Part of #2480.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2488/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2488", "html_url": "https://github.com/huggingface/datasets/pull/2488", "diff_url": "https://github.com/huggingface/datasets/pull/2488.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2488.patch", "merged_at": 1623659347000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2487/comments
https://api.github.com/repos/huggingface/datasets/issues/2487/events
https://github.com/huggingface/datasets/pull/2487
919,452,407
MDExOlB1bGxSZXF1ZXN0NjY4Nzc5Mjk0
2,487
Set configurable extracted datasets path
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[ "Let me push a small fix... 😉 ", "Thanks !" ]
1,623,476,849,000
1,623,663,017,000
1,623,661,376,000
MEMBER
null
Part of #2480.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2487/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2487", "html_url": "https://github.com/huggingface/datasets/pull/2487", "diff_url": "https://github.com/huggingface/datasets/pull/2487.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2487.patch", "merged_at": 1623661376000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2484/comments
https://api.github.com/repos/huggingface/datasets/issues/2484/events
https://github.com/huggingface/datasets/issues/2484
919,092,635
MDU6SXNzdWU5MTkwOTI2MzU=
2,484
Implement loading a dataset builder
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "#self-assign" ]
1,623,437,242,000
1,625,481,957,000
1,625,481,957,000
MEMBER
null
As discussed with @stas00 and @lhoestq, this would allow things like: ```python from datasets import load_dataset_builder dataset_name = "openwebtext" builder = load_dataset_builder(dataset_name) print(builder.cache_dir) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2484/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2484/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2483/comments
https://api.github.com/repos/huggingface/datasets/issues/2483/events
https://github.com/huggingface/datasets/pull/2483
918,871,712
MDExOlB1bGxSZXF1ZXN0NjY4MjU1Mjg1
2,483
Use gc.collect only when needed to avoid slow downs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "I continue thinking that the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.\r\n\r\nMy guess is that tqdm made a change from version 4.50.0 that does not properly release the iterable. ", "FR" ]
1,623,424,170,000
1,624,044,306,000
1,623,425,496,000
MEMBER
null
In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482) However calling gc.collect too often causes significant slow downs (the CI run time doubled). So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2483/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2483", "html_url": "https://github.com/huggingface/datasets/pull/2483", "diff_url": "https://github.com/huggingface/datasets/pull/2483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2483.patch", "merged_at": 1623425495000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2482/comments
https://api.github.com/repos/huggingface/datasets/issues/2482/events
https://github.com/huggingface/datasets/pull/2482
918,846,027
MDExOlB1bGxSZXF1ZXN0NjY4MjMyMzI5
2,482
Allow to use tqdm>=4.50.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,422,961,000
1,623,424,311,000
1,623,424,310,000
MEMBER
null
We used to have permission errors on windows whith the latest versions of tqdm (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/6365/workflows/24f7c960-3176-43a5-9652-7830a23a981e/jobs/39232)) They were due to open arrow files not properly closed by pyarrow. Since https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 gc.collect is called each time we don't need an arrow file to make sure that the files are closed. close https://github.com/huggingface/datasets/issues/2471 cc @lewtun
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2482/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2482/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2482", "html_url": "https://github.com/huggingface/datasets/pull/2482", "diff_url": "https://github.com/huggingface/datasets/pull/2482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2482.patch", "merged_at": 1623424310000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2481/comments
https://api.github.com/repos/huggingface/datasets/issues/2481/events
https://github.com/huggingface/datasets/issues/2481
918,680,168
MDU6SXNzdWU5MTg2ODAxNjg=
2,481
Delete extracted files to save disk space
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/6", "html_url": "https://github.com/huggingface/datasets/milestone/6", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels", "id": 6836458, "node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==", "number": 6, "title": "1.10", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 29, "state": "closed", "created_at": 1623178113000, "updated_at": 1626881809000, "due_on": 1628146800000, "closed_at": 1626881809000 }
[ "My suggestion for this would be to have this enabled by default.\r\n\r\nPlus I don't know if there should be a dedicated issue to that is another functionality. But I propose layered building rather than all at once. That is:\r\n\r\n1. uncompress a handful of files via a generator enough to generate one arrow file\r\n2. process arrow file 1\r\n3. delete all the files that went in and aren't needed anymore.\r\n\r\nrinse and repeat.\r\n\r\n1. This way much less disc space will be required - e.g. on JZ we won't be running into inode limitation, also it'd help with the collaborative hub training project\r\n2. The user doesn't need to go and manually clean up all the huge files that were left after pre-processing\r\n3. It would already include deleting temp files this issue is talking about\r\n\r\nI wonder if the new streaming API would be of help, except here the streaming would be into arrow files as the destination, rather than dataloaders." ]
1,623,414,112,000
1,626,685,698,000
1,626,685,698,000
MEMBER
null
As discussed with @stas00 and @lhoestq, allowing the deletion of extracted files would save a great amount of disk space to typical user.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2481/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2481/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2479/comments
https://api.github.com/repos/huggingface/datasets/issues/2479/events
https://github.com/huggingface/datasets/pull/2479
918,672,431
MDExOlB1bGxSZXF1ZXN0NjY4MDc3NTI4
2,479
❌ load_datasets ❌
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,413,676,000
1,623,422,785,000
1,623,422,785,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2479/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2479", "html_url": "https://github.com/huggingface/datasets/pull/2479", "diff_url": "https://github.com/huggingface/datasets/pull/2479.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2479.patch", "merged_at": 1623422784000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2477/comments
https://api.github.com/repos/huggingface/datasets/issues/2477/events
https://github.com/huggingface/datasets/pull/2477
918,334,431
MDExOlB1bGxSZXF1ZXN0NjY3NzczMTY0
2,477
Fix docs custom stable version
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[ "I see that @lhoestq overlooked this PR with his commit 07e2b05. 😢 \r\n\r\nI'm adding a script so that this issue does not happen again.\r\n", "For the moment, the script only includes `update_custom_js`, but in a follow-up PR I will include all the required steps to make a package release.", "I think we just need to clarify the release process in setup.py instead of adding a script that does the replacement", "@lhoestq I really think we should implement a script that performs the release (instead of doing it manually as it is done now), as it is already the case in `transformers`. I will do it in a next PR.\r\n\r\nFor the moment, this PR includes one of the steps of the release script." ]
1,623,396,363,000
1,623,662,060,000
1,623,658,818,000
MEMBER
null
Currently docs default version is 1.5.0. This PR fixes this and sets the latest version instead.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2477/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2477", "html_url": "https://github.com/huggingface/datasets/pull/2477", "diff_url": "https://github.com/huggingface/datasets/pull/2477.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2477.patch", "merged_at": 1623658818000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2476/comments
https://api.github.com/repos/huggingface/datasets/issues/2476/events
https://github.com/huggingface/datasets/pull/2476
917,686,662
MDExOlB1bGxSZXF1ZXN0NjY3MTg3OTk1
2,476
Add TimeDial
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\nI've pushed the updated README and tags. Let me know if anything is missing/needs some improvement!\r\n\r\n~PS. I don't know why it's not triggering the build~" ]
1,623,349,987,000
1,627,649,874,000
1,627,649,874,000
CONTRIBUTOR
null
Dataset: https://github.com/google-research-datasets/TimeDial To-Do: Update README.md and add YAML tags
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2476/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2476/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2476", "html_url": "https://github.com/huggingface/datasets/pull/2476", "diff_url": "https://github.com/huggingface/datasets/pull/2476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2476.patch", "merged_at": 1627649874000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2475/comments
https://api.github.com/repos/huggingface/datasets/issues/2475/events
https://github.com/huggingface/datasets/issues/2475
917,650,882
MDU6SXNzdWU5MTc2NTA4ODI=
2,475
Issue in timit_asr database
{ "login": "hrahamim", "id": 85702107, "node_id": "MDQ6VXNlcjg1NzAyMTA3", "avatar_url": "https://avatars.githubusercontent.com/u/85702107?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hrahamim", "html_url": "https://github.com/hrahamim", "followers_url": "https://api.github.com/users/hrahamim/followers", "following_url": "https://api.github.com/users/hrahamim/following{/other_user}", "gists_url": "https://api.github.com/users/hrahamim/gists{/gist_id}", "starred_url": "https://api.github.com/users/hrahamim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hrahamim/subscriptions", "organizations_url": "https://api.github.com/users/hrahamim/orgs", "repos_url": "https://api.github.com/users/hrahamim/repos", "events_url": "https://api.github.com/users/hrahamim/events{/privacy}", "received_events_url": "https://api.github.com/users/hrahamim/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This bug was fixed in #1995. Upgrading datasets to version 1.6 fixes the issue!", "Indeed was a fixed bug.\r\nWorks on version 1.8\r\nThanks " ]
1,623,348,329,000
1,623,572,030,000
1,623,571,993,000
NONE
null
## Describe the bug I am trying to load the timit_asr dataset however only the first record is shown (duplicated over all the rows). I am using the next code line dataset = load_dataset(“timit_asr”, split=“test”).shuffle().select(range(10)) The above code result with the same sentence duplicated ten times. It also happens when I use the dataset viewer at Streamlit . ## Steps to reproduce the bug from datasets import load_dataset dataset = load_dataset(“timit_asr”, split=“test”).shuffle().select(range(10)) data = dataset.to_pandas() # Sample code to reproduce the bug ``` ## Expected results table with different row information ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.4.1 (also occur in the latest version) - Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 1.15.3 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2475/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2473/comments
https://api.github.com/repos/huggingface/datasets/issues/2473/events
https://github.com/huggingface/datasets/pull/2473
917,538,629
MDExOlB1bGxSZXF1ZXN0NjY3MDU5MjI5
2,473
Add Disfl-QA
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Sounds great! It'll make things easier for the user while accessing the dataset. I'll make some changes to the current file then.", "I've updated with the suggested changes. Updated the README, YAML tags as well (not sure of Size category tag as I couldn't pass the path of `dataset_infos.json` for this dataset)\r\n" ]
1,623,341,880,000
1,627,559,779,000
1,627,559,778,000
CONTRIBUTOR
null
Dataset: https://github.com/google-research-datasets/disfl-qa To-Do: Update README.md and add YAML tags
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2473/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2473", "html_url": "https://github.com/huggingface/datasets/pull/2473", "diff_url": "https://github.com/huggingface/datasets/pull/2473.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2473.patch", "merged_at": 1627559778000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2472/comments
https://api.github.com/repos/huggingface/datasets/issues/2472/events
https://github.com/huggingface/datasets/issues/2472
917,463,821
MDU6SXNzdWU5MTc0NjM4MjE=
2,472
Fix automatic generation of Zenodo DOI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[ "I have received a reply from Zenodo support:\r\n> We are currently investigating and fixing this issue related to GitHub releases. As soon as we have solved it we will reach back to you.", "Other repo maintainers had the same problem with Zenodo. \r\n\r\nThere is an open issue on their GitHub repo: zenodo/zenodo#2181", "I have received the following request from Zenodo support:\r\n> Could you send us the link to the repository as well as the release tag?\r\n\r\nMy reply:\r\n> Sure, here it is:\r\n> - Link to the repository: https://github.com/huggingface/datasets\r\n> - Link to the repository at the release tag: https://github.com/huggingface/datasets/releases/tag/1.8.0\r\n> - Release tag: 1.8.0", "Zenodo issue has been fixed. The 1.8.0 release DOI can be found here: https://zenodo.org/record/4946100#.YMd6vKj7RPY" ]
1,623,338,146,000
1,623,689,382,000
1,623,689,382,000
MEMBER
null
After the last release of Datasets (1.8.0), the automatic generation of the Zenodo DOI failed: it appears in yellow as "Received", instead of in green as "Published". I have contacted Zenodo support to fix this issue. TODO: - [x] Check with Zenodo to fix the issue - [x] Check BibTeX entry is right
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2472/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2472/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2471/comments
https://api.github.com/repos/huggingface/datasets/issues/2471/events
https://github.com/huggingface/datasets/issues/2471
917,067,165
MDU6SXNzdWU5MTcwNjcxNjU=
2,471
Fix PermissionError on Windows when using tqdm >=4.50.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[]
1,623,313,909,000
1,623,424,310,000
1,623,424,310,000
MEMBER
null
See: https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111 ``` PermissionError: [WinError 32] The process cannot access the file because it is being used by another process ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2471/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2470/comments
https://api.github.com/repos/huggingface/datasets/issues/2470/events
https://github.com/huggingface/datasets/issues/2470
916,724,260
MDU6SXNzdWU5MTY3MjQyNjA=
2,470
Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`.
{ "login": "mbforbes", "id": 1170062, "node_id": "MDQ6VXNlcjExNzAwNjI=", "avatar_url": "https://avatars.githubusercontent.com/u/1170062?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mbforbes", "html_url": "https://github.com/mbforbes", "followers_url": "https://api.github.com/users/mbforbes/followers", "following_url": "https://api.github.com/users/mbforbes/following{/other_user}", "gists_url": "https://api.github.com/users/mbforbes/gists{/gist_id}", "starred_url": "https://api.github.com/users/mbforbes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mbforbes/subscriptions", "organizations_url": "https://api.github.com/users/mbforbes/orgs", "repos_url": "https://api.github.com/users/mbforbes/repos", "events_url": "https://api.github.com/users/mbforbes/events{/privacy}", "received_events_url": "https://api.github.com/users/mbforbes/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ?", "Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else.", "Could you trying reinstalling pyarrow with pip ?\r\nI'm not sure why it would check in your multicurtural-sc directory for source files.", "Sure! I tried reinstalling to get latest. pip was mad because it looks like Datasets currently wants <4.0.0 (which is interesting, because apparently I ended up with 4.0.0 already?), but I gave it a shot anyway:\r\n\r\n```bash\r\n$ pip install --upgrade --force-reinstall pyarrow\r\nCollecting pyarrow\r\n Downloading pyarrow-4.0.1-cp39-cp39-manylinux2014_x86_64.whl (21.9 MB)\r\n |████████████████████████████████| 21.9 MB 23.8 MB/s\r\nCollecting numpy>=1.16.6\r\n Using cached numpy-1.20.3-cp39-cp39-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (15.4 MB)\r\nInstalling collected packages: numpy, pyarrow\r\n Attempting uninstall: numpy\r\n Found existing installation: numpy 1.20.3\r\n Uninstalling numpy-1.20.3:\r\n Successfully uninstalled numpy-1.20.3\r\n Attempting uninstall: pyarrow\r\n Found existing installation: pyarrow 3.0.0\r\n Uninstalling pyarrow-3.0.0:\r\n Successfully uninstalled pyarrow-3.0.0\r\nERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\ndatasets 1.8.0 requires pyarrow<4.0.0,>=1.0.0, but you have pyarrow 4.0.1 which is incompatible.\r\nSuccessfully installed numpy-1.20.3 pyarrow-4.0.1\r\n```\r\n\r\nTrying it, the same issue:\r\n\r\n![image](https://user-images.githubusercontent.com/1170062/121730226-3f470b80-caa4-11eb-85a5-684c44c816da.png)\r\n\r\nI tried installing `\"pyarrow<4.0.0\"`, which gave me 3.0.0. Running, still, same issue.\r\n\r\nI agree it's weird that pyarrow is checking the source code directory for its files. (There is no `pyarrow/` directory there.) To me, that makes it seem like an issue with how pyarrow is called.\r\n\r\nOut of curiosity, I tried running this with fewer workers to see when the error arises:\r\n\r\n- 1: ✅\r\n- 2: ✅\r\n- 4: ✅\r\n- 8: ✅\r\n- 10: ✅\r\n- 11: ❌ 🤔\r\n- 12: ❌\r\n- 16: ❌\r\n- 32: ❌\r\n\r\nchecking my datasets:\r\n\r\n```python\r\n>>> datasets\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text'],\r\n num_rows: 389290\r\n })\r\n validation.sc: Dataset({\r\n features: ['text'],\r\n num_rows: 10 # 🤔\r\n })\r\n validation.wvs: Dataset({\r\n features: ['text'],\r\n num_rows: 93928\r\n })\r\n})\r\n```\r\n\r\nNew hypothesis: crash if `num_proc` > length of a dataset? 😅\r\n\r\nIf so, this might be totally my fault, as the caller. Could be a docs fix, or maybe this library could do a check to limit `num_proc` for this case?", "Good catch ! Not sure why it could raise such a weird issue from pyarrow though\r\nWe should definitely reduce num_proc to the length of the dataset if needed and log a warning.", "This has been fixed in #2566, thanks @connor-mccarthy !\r\nWe'll make a new release soon that includes the fix ;)" ]
1,623,278,422,000
1,625,132,094,000
1,625,130,673,000
NONE
null
## Describe the bug Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`. I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent, or depends on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated, I'm happy to provide more info if it would help us diagnose. ## Steps to reproduce the bug ```python # this function will be applied with map() def tokenize_function(examples): return tokenizer( examples["text"], padding=PaddingStrategy.DO_NOT_PAD, truncation=True, ) # data_files is a Dict[str, str] mapping name -> path datasets = load_dataset("text", data_files={...}) # this is where the error happens if num_proc = 16, # but is fine if num_proc = 1 tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=num_workers, ) ``` ## Expected results The `map()` function succeeds with `num_proc` > 1. ## Actual results ![image](https://user-images.githubusercontent.com/1170062/121404271-a6cc5200-c910-11eb-8e27-5c893bd04042.png) ![image](https://user-images.githubusercontent.com/1170062/121404362-be0b3f80-c910-11eb-9117-658943029aef.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31 - Python version: 3.9.5 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes, but I think N/A for this issue - Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N/A for this issue
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2470/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2469/comments
https://api.github.com/repos/huggingface/datasets/issues/2469/events
https://github.com/huggingface/datasets/pull/2469
916,440,418
MDExOlB1bGxSZXF1ZXN0NjY2MTA1OTk1
2,469
Bump tqdm version
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "i tried both the latest version of `tqdm` and the version required by `autonlp` - no luck with windows 😞 \r\n\r\nit's very weird that a progress bar would trigger these kind of errors, so i'll have a look to see if it's something unique to `datasets`", "Closing since this is now fixed in #2482 " ]
1,623,259,480,000
1,623,423,822,000
1,623,423,816,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2469/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2469", "html_url": "https://github.com/huggingface/datasets/pull/2469", "diff_url": "https://github.com/huggingface/datasets/pull/2469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2469.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2468/comments
https://api.github.com/repos/huggingface/datasets/issues/2468/events
https://github.com/huggingface/datasets/pull/2468
916,427,320
MDExOlB1bGxSZXF1ZXN0NjY2MDk0ODI5
2,468
Implement ClassLabel encoding in JSON loader
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[ "No, nevermind @lhoestq. Thanks to you for your reviews!" ]
1,623,258,534,000
1,624,894,794,000
1,624,892,735,000
MEMBER
null
Close #2365.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2468/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2468", "html_url": "https://github.com/huggingface/datasets/pull/2468", "diff_url": "https://github.com/huggingface/datasets/pull/2468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2468.patch", "merged_at": 1624892734000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2466/comments
https://api.github.com/repos/huggingface/datasets/issues/2466/events
https://github.com/huggingface/datasets/pull/2466
915,914,098
MDExOlB1bGxSZXF1ZXN0NjY1NjY1MjQy
2,466
change udpos features structure
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Let's add the tags in another PR. Thanks again !", "Close #2061 , close #2444." ]
1,623,225,811,000
1,624,017,309,000
1,623,840,097,000
CONTRIBUTOR
null
The structure is changed such that each example is a sentence. The change is done for issues: #2061 #2444. Close #2061, close #2444.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2466/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2466", "html_url": "https://github.com/huggingface/datasets/pull/2466", "diff_url": "https://github.com/huggingface/datasets/pull/2466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2466.patch", "merged_at": 1623840097000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2465/comments
https://api.github.com/repos/huggingface/datasets/issues/2465/events
https://github.com/huggingface/datasets/pull/2465
915,525,071
MDExOlB1bGxSZXF1ZXN0NjY1MzMxMDMz
2,465
adding masahaner dataset
{ "login": "dadelani", "id": 23586676, "node_id": "MDQ6VXNlcjIzNTg2Njc2", "avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dadelani", "html_url": "https://github.com/dadelani", "followers_url": "https://api.github.com/users/dadelani/followers", "following_url": "https://api.github.com/users/dadelani/following{/other_user}", "gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}", "starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dadelani/subscriptions", "organizations_url": "https://api.github.com/users/dadelani/orgs", "repos_url": "https://api.github.com/users/dadelani/repos", "events_url": "https://api.github.com/users/dadelani/events{/privacy}", "received_events_url": "https://api.github.com/users/dadelani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you for the review. ", "Thanks a lot for the corrections and comments. \r\n\r\nI have resolved point 2. The make style still throws some errors, please see below\r\n\r\nblack --line-length 119 --target-version py36 tests src benchmarks datasets/**/*.py metrics\r\n/bin/sh: 1: black: not found\r\nMakefile:13: recipe for target 'style' failed\r\nmake: *** [style] Error 127\r\n\r\nCan you help to resolve this?", "Thank you very much @lhoestq for the help. " ]
1,623,187,225,000
1,623,682,745,000
1,623,682,745,000
CONTRIBUTOR
null
Adding Masakhane NER dataset https://github.com/masakhane-io/masakhane-ner. @lhoestq, can you please review?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2465/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2465/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2465", "html_url": "https://github.com/huggingface/datasets/pull/2465", "diff_url": "https://github.com/huggingface/datasets/pull/2465.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2465.patch", "merged_at": 1623682745000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2464/comments
https://api.github.com/repos/huggingface/datasets/issues/2464/events
https://github.com/huggingface/datasets/pull/2464
915,485,601
MDExOlB1bGxSZXF1ZXN0NjY1Mjk1MDE5
2,464
fix: adjusting indexing for the labels.
{ "login": "drugilsberg", "id": 5406908, "node_id": "MDQ6VXNlcjU0MDY5MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/5406908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/drugilsberg", "html_url": "https://github.com/drugilsberg", "followers_url": "https://api.github.com/users/drugilsberg/followers", "following_url": "https://api.github.com/users/drugilsberg/following{/other_user}", "gists_url": "https://api.github.com/users/drugilsberg/gists{/gist_id}", "starred_url": "https://api.github.com/users/drugilsberg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/drugilsberg/subscriptions", "organizations_url": "https://api.github.com/users/drugilsberg/orgs", "repos_url": "https://api.github.com/users/drugilsberg/repos", "events_url": "https://api.github.com/users/drugilsberg/events{/privacy}", "received_events_url": "https://api.github.com/users/drugilsberg/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Good catch ! Thanks for fixing it\r\n\r\nMy pleasure🙏" ]
1,623,185,245,000
1,623,233,746,000
1,623,229,828,000
CONTRIBUTOR
null
The label indices were mismatching the actual ones used in the dataset. Specifically, `0` is used for `SUPPORTS` and `1` is used for `REFUTES`. After this change, the `README.md` now reflects the content of `dataset_infos.json`. Signed-off-by: Matteo Manica <drugilsberg@gmail.com>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2464/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2464", "html_url": "https://github.com/huggingface/datasets/pull/2464", "diff_url": "https://github.com/huggingface/datasets/pull/2464.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2464.patch", "merged_at": 1623229828000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2463/comments
https://api.github.com/repos/huggingface/datasets/issues/2463/events
https://github.com/huggingface/datasets/pull/2463
915,454,788
MDExOlB1bGxSZXF1ZXN0NjY1MjY3NTA2
2,463
Fix proto_qa download link
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,183,796,000
1,623,329,396,000
1,623,313,870,000
CONTRIBUTOR
null
Fixes #2459 Instead of updating the path, this PR fixes a commit hash as suggested by @lhoestq.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2463/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2463/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2463", "html_url": "https://github.com/huggingface/datasets/pull/2463", "diff_url": "https://github.com/huggingface/datasets/pull/2463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2463.patch", "merged_at": 1623313869000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2461/comments
https://api.github.com/repos/huggingface/datasets/issues/2461/events
https://github.com/huggingface/datasets/pull/2461
915,286,150
MDExOlB1bGxSZXF1ZXN0NjY1MTE3MTY4
2,461
Support sliced list arrays in cast
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,173,927,000
1,623,174,984,000
1,623,174,983,000
MEMBER
null
There is this issue in pyarrow: ```python import pyarrow as pa arr = pa.array([[i * 10] for i in range(4)]) arr.cast(pa.list_(pa.int32())) # works arr = arr.slice(1) arr.cast(pa.list_(pa.int32())) # fails # ArrowNotImplementedError("Casting sliced lists (non-zero offset) not yet implemented") ``` However in `Dataset.cast` we slice tables to cast their types (it's memory intensive), so we have the same issue. Because of this it is currently not possible to cast a Dataset with a Sequence feature type (unless the table is small enough to not be sliced). In this PR I fixed this by resetting the offset of `pyarrow.ListArray` arrays to zero in the table before casting. I used `pyarrow.compute.subtract` function to update the offsets of the ListArray. cc @abhi1thakur @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2461/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2461/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2461", "html_url": "https://github.com/huggingface/datasets/pull/2461", "diff_url": "https://github.com/huggingface/datasets/pull/2461.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2461.patch", "merged_at": 1623174983000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2460/comments
https://api.github.com/repos/huggingface/datasets/issues/2460/events
https://github.com/huggingface/datasets/pull/2460
915,268,536
MDExOlB1bGxSZXF1ZXN0NjY1MTAyMjA4
2,460
Revert default in-memory for small datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/4", "html_url": "https://github.com/huggingface/datasets/milestone/4", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels", "id": 6680642, "node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==", "number": 4, "title": "1.8", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 2, "state": "closed", "created_at": 1618937356000, "updated_at": 1623178297000, "due_on": 1623135600000, "closed_at": 1623178264000 }
[ "Thank you for this welcome change guys!" ]
1,623,172,463,000
1,623,175,454,000
1,623,174,943,000
MEMBER
null
Close #2458
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2460/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2460", "html_url": "https://github.com/huggingface/datasets/pull/2460", "diff_url": "https://github.com/huggingface/datasets/pull/2460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2460.patch", "merged_at": 1623174943000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2459/comments
https://api.github.com/repos/huggingface/datasets/issues/2459/events
https://github.com/huggingface/datasets/issues/2459
915,222,015
MDU6SXNzdWU5MTUyMjIwMTU=
2,459
`Proto_qa` hosting seems to be broken
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "@VictorSanh , I think @mariosasko is already working on it. " ]
1,623,168,992,000
1,623,313,869,000
1,623,313,869,000
MEMBER
null
## Describe the bug The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now. @zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("proto_qa") ``` ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset use_auth_token=use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators train_fpath = dl_manager.download(_URLs[self.config.name]["train"]) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download num_proc=download_config.num_proc, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2459/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2458/comments
https://api.github.com/repos/huggingface/datasets/issues/2458/events
https://github.com/huggingface/datasets/issues/2458
915,199,693
MDU6SXNzdWU5MTUxOTk2OTM=
2,458
Revert default in-memory for small datasets
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/4", "html_url": "https://github.com/huggingface/datasets/milestone/4", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels", "id": 6680642, "node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==", "number": 4, "title": "1.8", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 2, "state": "closed", "created_at": 1618937356000, "updated_at": 1623178297000, "due_on": 1623135600000, "closed_at": 1623178264000 }
[ "cc: @krandiash (pinged in reverted PR)." ]
1,623,167,501,000
1,623,178,631,000
1,623,174,943,000
MEMBER
null
Users are reporting issues and confusion about setting default in-memory to True for small datasets. We see 2 clear use cases of Datasets: - the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation) - some edge cases (speed benchmarks, interactive/exploratory analysis,...), where default in-memory can explicitly be enabled, and no caching will be done After discussing with @lhoestq we have agreed to: - revert this feature (implemented in #2182) - explain in the docs how to optimize speed/performance by setting default in-memory cc: @stas00 https://github.com/huggingface/datasets/pull/2409#issuecomment-856210552
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2458/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2458/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2457/comments
https://api.github.com/repos/huggingface/datasets/issues/2457/events
https://github.com/huggingface/datasets/pull/2457
915,079,441
MDExOlB1bGxSZXF1ZXN0NjY0OTQwMzQ0
2,457
Add align_labels_with_mapping function
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq i think this is ready for another review 🙂 ", "@lhoestq thanks for the feedback - it's now integrated :) \r\n\r\ni also added a comment about sorting the input label IDs", "Created the PR here: https://github.com/huggingface/datasets/pull/2510", "> Thanks ! Looks all good now :)\r\n> \r\n> We will also need to have the `DatasetDict.align_labels_with_mapping` method. Let me quickly add it\r\n\r\nthanks a lot! i always forget about `DatasetDict` - will be happy when it's just one \"dataset\" object :)" ]
1,623,160,440,000
1,623,925,026,000
1,623,923,812,000
MEMBER
null
This PR adds a helper function to align the `label2id` mapping between a `datasets.Dataset` and a classifier (e.g. a transformer with a `PretrainedConfig.label2id` dict), with the alignment performed on the dataset itself. This will help us with the Hub evaluation, where we won't know in advance whether a model that is fine-tuned on say MNLI has the same mappings as the MNLI dataset we load from `datasets`. An example where this is needed is if we naively try to evaluate `microsoft/deberta-base-mnli` on `mnli` because the model config has the following mappings: ```python "id2label": { "0": "CONTRADICTION", "1": "NEUTRAL", "2": "ENTAILMENT" }, "label2id": { "CONTRADICTION": 0, "ENTAILMENT": 2, "NEUTRAL": 1 } ``` while the `mnli` dataset has the `contradiction` and `entailment` labels swapped: ```python id2label = {0: 'entailment', 1: 'neutral', 2: 'contradiction'} label2id = {'contradiction': 2, 'entailment': 0, 'neutral': 1} ``` As a result, we get a much lower accuracy during evaluation: ```python from datasets import load_dataset from transformers.trainer_utils import EvalPrediction from transformers import AutoModelForSequenceClassification, Trainer # load dataset for evaluation mnli = load_dataset("glue", "mnli", split="test") # load model model_ckpt = "microsoft/deberta-base-mnli" model = AutoModelForSequenceClassification.from_pretrained(model_ckpt) # preprocess, create trainer ... mnli_enc = ... trainer = Trainer(model, args=args, tokenizer=tokenizer) # generate preds preds = trainer.predict(mnli_enc) # preds.label_ids misaligned with model.config => returns wrong accuracy (too low)! compute_metrics(EvalPrediction(preds.predictions, preds.label_ids)) ``` The fix is to use the helper function before running the evaluation to make sure the label IDs are aligned: ```python mnli_enc_aligned = mnli_enc.align_labels_with_mapping(label2id=config.label2id, label_column="label") # preds now aligned and everyone is happy :) preds = trainer.predict(mnli_enc_aligned) ``` cc @thomwolf @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2457/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2457/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2457", "html_url": "https://github.com/huggingface/datasets/pull/2457", "diff_url": "https://github.com/huggingface/datasets/pull/2457.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2457.patch", "merged_at": 1623923812000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2456/comments
https://api.github.com/repos/huggingface/datasets/issues/2456/events
https://github.com/huggingface/datasets/pull/2456
914,709,293
MDExOlB1bGxSZXF1ZXN0NjY0NjAwOTk1
2,456
Fix cross-reference typos in documentation
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,145,514,000
1,623,174,097,000
1,623,174,096,000
MEMBER
null
Fix some minor typos in docs that avoid the creation of cross-reference links.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2456/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2456", "html_url": "https://github.com/huggingface/datasets/pull/2456", "diff_url": "https://github.com/huggingface/datasets/pull/2456.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2456.patch", "merged_at": 1623174096000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2455/comments
https://api.github.com/repos/huggingface/datasets/issues/2455/events
https://github.com/huggingface/datasets/pull/2455
914,177,468
MDExOlB1bGxSZXF1ZXN0NjY0MTEzNjg2
2,455
Update version in xor_tydi_qa.py
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for updating the version\r\n\r\n> Should I revert to the old dummy/1.0.0 or delete it and keep only dummy/1.1.0?\r\n\r\nFeel free to delete the old dummy data files\r\n" ]
1,623,119,025,000
1,623,684,925,000
1,623,684,925,000
CONTRIBUTOR
null
Fix #2449 @lhoestq Should I revert to the old `dummy/1.0.0` or delete it and keep only `dummy/1.1.0`?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2455/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2455", "html_url": "https://github.com/huggingface/datasets/pull/2455", "diff_url": "https://github.com/huggingface/datasets/pull/2455.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2455.patch", "merged_at": 1623684925000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2454/comments
https://api.github.com/repos/huggingface/datasets/issues/2454/events
https://github.com/huggingface/datasets/pull/2454
913,883,631
MDExOlB1bGxSZXF1ZXN0NjYzODUyODU1
2,454
Rename config and environment variable for in memory max size
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thank you for the rename, @albertvillanova!" ]
1,623,093,668,000
1,623,098,626,000
1,623,098,626,000
MEMBER
null
As discussed in #2409, both config and environment variable have been renamed. cc: @stas00, huggingface/transformers#12056
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2454/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2454", "html_url": "https://github.com/huggingface/datasets/pull/2454", "diff_url": "https://github.com/huggingface/datasets/pull/2454.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2454.patch", "merged_at": 1623098626000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2453/comments
https://api.github.com/repos/huggingface/datasets/issues/2453/events
https://github.com/huggingface/datasets/pull/2453
913,729,258
MDExOlB1bGxSZXF1ZXN0NjYzNzE3NTk2
2,453
Keep original features order
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": 1622477586000, "updated_at": 1626099120000, "due_on": 1625727600000, "closed_at": 1625809807000 }
[ "The arrow writer was supposing that the columns were always in the sorted order. I just pushed a fix to reorder the arrays accordingly to the schema. It was failing for many datasets like squad", "and obviously it broke everything", "Feel free to revert my commit. I can investigate this in the coming days", "@lhoestq I do not understand when you say:\r\n> It was failing for many datasets like squad\r\n\r\nAll the tests were green after my last commit.", "> All the tests were green after my last commit.\r\n\r\nYes but loading the actual squad dataset was failing :/\r\n" ]
1,623,083,198,000
1,623,780,336,000
1,623,771,828,000
MEMBER
null
When loading a Dataset from a JSON file whose column names are not sorted alphabetically, we should get the same column name order, whether we pass features (in the same order as in the file) or not. I found this issue while working on #2366.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2453/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2453/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2453", "html_url": "https://github.com/huggingface/datasets/pull/2453", "diff_url": "https://github.com/huggingface/datasets/pull/2453.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2453.patch", "merged_at": 1623771828000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2452/comments
https://api.github.com/repos/huggingface/datasets/issues/2452/events
https://github.com/huggingface/datasets/issues/2452
913,603,877
MDU6SXNzdWU5MTM2MDM4Nzc=
2,452
MRPC test set differences between torch and tensorflow datasets
{ "login": "FredericOdermatt", "id": 50372080, "node_id": "MDQ6VXNlcjUwMzcyMDgw", "avatar_url": "https://avatars.githubusercontent.com/u/50372080?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FredericOdermatt", "html_url": "https://github.com/FredericOdermatt", "followers_url": "https://api.github.com/users/FredericOdermatt/followers", "following_url": "https://api.github.com/users/FredericOdermatt/following{/other_user}", "gists_url": "https://api.github.com/users/FredericOdermatt/gists{/gist_id}", "starred_url": "https://api.github.com/users/FredericOdermatt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FredericOdermatt/subscriptions", "organizations_url": "https://api.github.com/users/FredericOdermatt/orgs", "repos_url": "https://api.github.com/users/FredericOdermatt/repos", "events_url": "https://api.github.com/users/FredericOdermatt/events{/privacy}", "received_events_url": "https://api.github.com/users/FredericOdermatt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Realized that `tensorflow_datasets` is not provided by Huggingface and should therefore raise the issue there." ]
1,623,075,626,000
1,623,076,472,000
1,623,076,472,000
NONE
null
## Describe the bug When using `load_dataset("glue", "mrpc")` to load the MRPC dataset, the test set includes the labels. When using `tensorflow_datasets.load('glue/{}'.format('mrpc'))` to load the dataset the test set does not contain the labels. There should be consistency between torch and tensorflow ways of importing the GLUE datasets. ## Steps to reproduce the bug Minimal working code ```python from datasets import load_dataset import tensorflow as tf import tensorflow_datasets # torch dataset = load_dataset("glue", "mrpc") # tf data = tensorflow_datasets.load('glue/{}'.format('mrpc')) data = list(data['test'].as_numpy_iterator()) for i in range(40,50): tf_sentence1 = data[i]['sentence1'].decode("utf-8") tf_sentence2 = data[i]['sentence2'].decode("utf-8") tf_label = data[i]['label'] index = data[i]['idx'] print('Index {}'.format(index)) torch_sentence1 = dataset['test']['sentence1'][index] torch_sentence2 = dataset['test']['sentence2'][index] torch_label = dataset['test']['label'][index] print('Tensorflow: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(tf_sentence1, tf_sentence2, tf_label)) print('Torch: \n\tSentence1 {}\n\tSentence2 {}\n\tLabel {}'.format(torch_sentence1, torch_sentence2, torch_label)) ``` Sample output ``` Index 954 Tensorflow: Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws . Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws . Label -1 Torch: Sentence1 Sabri Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate yesterday on charges of violating U.S. arms-control laws . Sentence2 The elder Yakou , an Iraqi native who is a legal U.S. resident , appeared before a federal magistrate Wednesday on charges of violating U.S. arms control laws . Label 1 Index 711 Tensorflow: Sentence1 Others keep records sealed for as little as five years or as much as 30 . Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years . Label -1 Torch: Sentence1 Others keep records sealed for as little as five years or as much as 30 . Sentence2 Some states make them available immediately ; others keep them sealed for as much as 30 years . Label 0 ``` ## Expected results I would expect the datasets to be independent of whether I am working with torch or tensorflow. ## Actual results Test set labels are provided in the `datasets.load_datasets()` for MRPC. However MRPC is the only task where the test set labels are not -1. ## Environment info - `datasets` version: 1.7.0 - Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2452/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2451/comments
https://api.github.com/repos/huggingface/datasets/issues/2451/events
https://github.com/huggingface/datasets/pull/2451
913,263,340
MDExOlB1bGxSZXF1ZXN0NjYzMzIwNDY1
2,451
Mention that there are no answers in adversarial_qa test set
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,623,053,637,000
1,623,054,854,000
1,623,054,853,000
MEMBER
null
As mention in issue https://github.com/huggingface/datasets/issues/2447, there are no answers in the test set
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2451/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2451", "html_url": "https://github.com/huggingface/datasets/pull/2451", "diff_url": "https://github.com/huggingface/datasets/pull/2451.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2451.patch", "merged_at": 1623054853000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2450/comments
https://api.github.com/repos/huggingface/datasets/issues/2450/events
https://github.com/huggingface/datasets/issues/2450
912,890,291
MDU6SXNzdWU5MTI4OTAyOTE=
2,450
BLUE file not found
{ "login": "mirfan899", "id": 3822565, "node_id": "MDQ6VXNlcjM4MjI1NjU=", "avatar_url": "https://avatars.githubusercontent.com/u/3822565?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mirfan899", "html_url": "https://github.com/mirfan899", "followers_url": "https://api.github.com/users/mirfan899/followers", "following_url": "https://api.github.com/users/mirfan899/following{/other_user}", "gists_url": "https://api.github.com/users/mirfan899/gists{/gist_id}", "starred_url": "https://api.github.com/users/mirfan899/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mirfan899/subscriptions", "organizations_url": "https://api.github.com/users/mirfan899/orgs", "repos_url": "https://api.github.com/users/mirfan899/repos", "events_url": "https://api.github.com/users/mirfan899/events{/privacy}", "received_events_url": "https://api.github.com/users/mirfan899/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.\r\nYou can get the full list of metrics [here](https://github.com/huggingface/datasets/tree/master/metrics) or by running\r\n```python\r\nfrom datasets import list_metrics\r\n\r\nprint(list_metrics())\r\n```", "Ah, my mistake. Thanks for correcting" ]
1,622,998,914,000
1,623,062,775,000
1,623,062,775,000
NONE
null
Hi, I'm having the following issue when I try to load the `blue` metric. ```shell import datasets metric = datasets.load_metric('blue') Traceback (most recent call last): File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 332, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/metrics/blue/blue.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<input>", line 1, in <module> File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 605, in load_metric dataset=False, File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 343, in prepare_module combined_path, github_file_path FileNotFoundError: Couldn't find file locally at blue/blue.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py. The file is also not present on the master branch on github. ``` Here is dataset installed version info ```shell pip freeze | grep datasets datasets==1.7.0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2450/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2449/comments
https://api.github.com/repos/huggingface/datasets/issues/2449/events
https://github.com/huggingface/datasets/pull/2449
912,751,752
MDExOlB1bGxSZXF1ZXN0NjYyODg1ODUz
2,449
Update `xor_tydi_qa` url to v1.1
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Just noticed while \r\n```load_dataset('local_path/datastes/xor_tydi_qa')``` works,\r\n```load_dataset('xor_tydi_qa')``` \r\noutputs an error: \r\n`\r\nFileNotFoundError: Couldn't find file at https://nlp.cs.washington.edu/xorqa/XORQA_site/data/xor_dev_retrieve_eng_span.jsonl\r\n`\r\n(the old url)\r\n\r\nI tired clearing the cache `.cache/huggingface/modules` and `.cache/huggingface/datasets`, didn't work.\r\n\r\nAnyone know how to fix this? Thanks.", "It seems like the error is not on your end. By default, the lib tries to download the version of the dataset script that matches the version of the lib, and that version of the script is, in your case, broken because the old URL no longer works. Once this PR gets merged, you can wait for the new release or set `script_version` to `\"master\"` in `load_dataset` to get the fixed version of the script.", "@mariosasko Thanks! It works now.\r\n\r\nPasting the docstring here for reference.\r\n```\r\n script_version (:class:`~utils.Version` or :obj:`str`, optional): Version of the dataset script to load:\r\n\r\n - For canonical datasets in the `huggingface/datasets` library like \"squad\", the default version of the module is the local version fo the lib.\r\n You can specify a different version from your local version of the lib (e.g. \"master\" or \"1.2.0\") but it might cause compatibility issues.\r\n - For community provided datasets like \"lhoestq/squad\" that have their own git repository on the Datasets Hub, the default version \"main\" corresponds to the \"main\" branch.\r\n You can specify a different version that the default \"main\" by using a commit sha or a git tag of the dataset repository.\r\n```\r\nBranch name didn't work, but commit sha works.", "Regarding the issue you mentioned about the `--ignore_verifications` flag, I think we should actually change the current behavior of the `--save_infos` flag to make it ignore the verifications as well, so that you don't need to specific `--ignore_verifications` in this case.", "@lhoestq I realized I forgot to change this:\r\n\r\nhttps://github.com/huggingface/datasets/blob/fdbf5a97d3393f4a91e4cddcabe364029508f7ce/datasets/xor_tydi_qa/xor_tydi_qa.py#L72-L73\r\n\r\nWhat should I do?", "Oh indeed. Please open a PR to change this. This should be 1.1.0" ]
1,622,972,698,000
1,623,078,981,000
1,623,054,664,000
CONTRIBUTOR
null
The dataset is updated and the old url no longer works. So I updated it. I faced a bug while trying to fix this. Documenting the solution here. Maybe we can add it to the doc (`CONTRIBUTING.md` and `ADD_NEW_DATASET.md`). > And to make the command work without the ExpectedMoreDownloadedFiles error, you just need to use the --ignore_verifications flag. https://github.com/huggingface/datasets/issues/2076#issuecomment-803904366
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2449/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2449", "html_url": "https://github.com/huggingface/datasets/pull/2449", "diff_url": "https://github.com/huggingface/datasets/pull/2449.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2449.patch", "merged_at": 1623054663000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2448/comments
https://api.github.com/repos/huggingface/datasets/issues/2448/events
https://github.com/huggingface/datasets/pull/2448
912,360,109
MDExOlB1bGxSZXF1ZXN0NjYyNTI2NjA3
2,448
Fix flores download link
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,914,224,000
1,623,182,578,000
1,623,053,905,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2448/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2448", "html_url": "https://github.com/huggingface/datasets/pull/2448", "diff_url": "https://github.com/huggingface/datasets/pull/2448.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2448.patch", "merged_at": 1623053905000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2447/comments
https://api.github.com/repos/huggingface/datasets/issues/2447/events
https://github.com/huggingface/datasets/issues/2447
912,299,527
MDU6SXNzdWU5MTIyOTk1Mjc=
2,447
dataset adversarial_qa has no answers in the "test" set
{ "login": "bjascob", "id": 22728060, "node_id": "MDQ6VXNlcjIyNzI4MDYw", "avatar_url": "https://avatars.githubusercontent.com/u/22728060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bjascob", "html_url": "https://github.com/bjascob", "followers_url": "https://api.github.com/users/bjascob/followers", "following_url": "https://api.github.com/users/bjascob/following{/other_user}", "gists_url": "https://api.github.com/users/bjascob/gists{/gist_id}", "starred_url": "https://api.github.com/users/bjascob/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bjascob/subscriptions", "organizations_url": "https://api.github.com/users/bjascob/orgs", "repos_url": "https://api.github.com/users/bjascob/repos", "events_url": "https://api.github.com/users/bjascob/events{/privacy}", "received_events_url": "https://api.github.com/users/bjascob/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! I'm pretty sure that the answers are not made available for the test set on purpose because it is part of the DynaBench benchmark, for which you can submit your predictions on the website.\r\nIn any case we should mention this in the dataset card of this dataset.", "Makes sense, but not intuitive for someone searching through the datasets. Thanks for adding the note to clarify." ]
1,622,905,058,000
1,623,064,387,000
1,623,064,387,000
NONE
null
## Describe the bug When loading the adversarial_qa dataset the 'test' portion has no answers. Only the 'train' and 'validation' portions do. This occurs with all four of the configs ('adversarialQA', 'dbidaf', 'dbert', 'droberta') ## Steps to reproduce the bug ``` from datasets import load_dataset examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['test'] print('Loaded {:,} examples'.format(len(examples))) has_answers = 0 for e in examples: if e['answers']['text']: has_answers += 1 print('{:,} have answers'.format(has_answers)) >>> Loaded 3,000 examples >>> 0 have answers examples = load_dataset('adversarial_qa', 'adversarialQA', script_version="master")['validation'] <...code above...> >>> Loaded 3,000 examples >>> 3,000 have answers ``` ## Expected results If 'test' is a valid dataset, it should have answers. Also note that all of the 'train' and 'validation' sets have answers, there are no "no answer" questions with this set (not sure if this is correct or not). ## Environment info - `datasets` version: 1.7.0 - Platform: Linux-5.8.0-53-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyArrow version: 1.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2447/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2446/comments
https://api.github.com/repos/huggingface/datasets/issues/2446/events
https://github.com/huggingface/datasets/issues/2446
911,635,399
MDU6SXNzdWU5MTE2MzUzOTk=
2,446
`yelp_polarity` is broken
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "repos_url": "https://api.github.com/users/JetRunner/repos", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "```\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py\", line 332, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"/home/sasha/nlp-viewer/run.py\", line 233, in <module>\r\n configs = get_confs(option)\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py\", line 604, in wrapped_func\r\n return get_or_create_cached_value()\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py\", line 588, in get_or_create_cached_value\r\n return_value = func(*args, **kwargs)\r\nFile \"/home/sasha/nlp-viewer/run.py\", line 148, in get_confs\r\n builder_cls = nlp.load.import_main_class(module_path[0], dataset=True)\r\nFile \"/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py\", line 85, in import_main_class\r\n module = importlib.import_module(module_path)\r\nFile \"/usr/lib/python3.7/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\nFile \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\nFile \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\nFile \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\nFile \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\nFile \"/home/sasha/.cache/huggingface/modules/datasets_modules/datasets/yelp_polarity/a770787b2526bdcbfc29ac2d9beb8e820fbc15a03afd3ebc4fb9d8529de57544/yelp_polarity.py\", line 36, in <module>\r\n from datasets.tasks import TextClassification\r\n```", "Solved by updating the `nlpviewer`" ]
1,622,821,469,000
1,622,833,007,000
1,622,833,007,000
MEMBER
null
![image](https://user-images.githubusercontent.com/22514219/120828150-c4a35b00-c58e-11eb-8083-a537cee4dbb3.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2446/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2446/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2445/comments
https://api.github.com/repos/huggingface/datasets/issues/2445/events
https://github.com/huggingface/datasets/pull/2445
911,577,578
MDExOlB1bGxSZXF1ZXN0NjYxODMzMTky
2,445
Fix broken URLs for bn_hate_speech and covid_tweets_japanese
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks ! To fix the CI you just have to rename the dummy data file in the dummy_data.zip files", "thanks for the tip with the dummy data - all fixed now!" ]
1,622,818,415,000
1,622,828,386,000
1,622,828,385,000
MEMBER
null
Closes #2388
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2445/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2445", "html_url": "https://github.com/huggingface/datasets/pull/2445", "diff_url": "https://github.com/huggingface/datasets/pull/2445.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2445.patch", "merged_at": 1622828385000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2444/comments
https://api.github.com/repos/huggingface/datasets/issues/2444/events
https://github.com/huggingface/datasets/issues/2444
911,297,139
MDU6SXNzdWU5MTEyOTcxMzk=
2,444
Sentence Boundaries missing in Dataset: xtreme / udpos
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nThis is a known issue. More info on this issue can be found in #2061. If you are looking for an open-source contribution, there are step-by-step instructions in the linked issue that you can follow to fix it.", "Closed by #2466." ]
1,622,797,826,000
1,624,017,223,000
1,624,017,223,000
CONTRIBUTOR
null
I was browsing through the annotation guidelines, as suggested by the datasets introduction. The guidelines say "There must be exactly one blank line after every sentence, including the last sentence in the file. Empty sentences are not allowed." in the [Sentence Boundaries and Comments section](https://universaldependencies.org/format.html#sentence-boundaries-and-comments). But the sentence boundaries do not seem to be represented well by the huggingface datasets features. I found that multiple sentences are concatenated together as a 1D array, without any delimiter. PAN-x, which is another token classification subset from xtreme, does represent the sentence boundary using a 2D array. You may compare PAN-x.en and udpos.English in the explorer: https://huggingface.co/datasets/viewer/?dataset=xtreme
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2444/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2443/comments
https://api.github.com/repos/huggingface/datasets/issues/2443/events
https://github.com/huggingface/datasets/issues/2443
909,983,574
MDU6SXNzdWU5MDk5ODM1NzQ=
2,443
Some tests hang on Windows
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! That would be nice indeed to at least have a warning, since we don't handle the max path length limit.\r\nAlso if we could have an error instead of an infinite loop I'm sure windows users will appreciate that", "Unfortunately, I know this problem very well... 😅 \r\n\r\nI remember having proposed to throw an error instead of hanging in an infinite loop #2220: 60c7d1b6b71469599a27147a08100f594e7a3f84, 8c8ab60018b00463edf1eca500e434ff061546fc \r\nbut @lhoestq told me:\r\n> Note that the filelock module comes from this project that hasn't changed in years - while still being used by ten of thousands of projects:\r\nhttps://github.com/benediktschmitt/py-filelock\r\n> \r\n> Unless we have proper tests for this, I wouldn't recommend to change it\r\n\r\nI opened an Issue requesting a warning/error at startup for that case: #2224", "@albertvillanova Thanks for additional info on this issue.\r\n\r\nYes, I think the best option is to throw an error instead of suppressing it in a loop. I've considered 2 more options, but I don't really like them:\r\n1. create a temporary file with a filename longer than 255 characters on import; if this fails, long paths are not enabled and raise a warning. I'm not sure about this approach because I don't like the idea of creating a temporary file on import for this purpose.\r\n2. check if long paths are enabled with [this code](https://stackoverflow.com/a/46546731/14095927). As mentioned in the comment, this code relies on an undocumented function and Win10-specific." ]
1,622,680,050,000
1,624,870,059,000
1,624,870,059,000
CONTRIBUTOR
null
Currently, several tests hang on Windows if the max path limit of 260 characters is not disabled. This happens due to the changes introduced by #2223 that cause an infinite loop in `WindowsFileLock` described in #2220. This can be very tricky to debug, so I think now is a good time to address these issues/PRs. IMO throwing an error is too harsh, but maybe we can emit a warning in the top-level `__init__.py ` on startup if long paths are not enabled.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2443/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2442/comments
https://api.github.com/repos/huggingface/datasets/issues/2442/events
https://github.com/huggingface/datasets/pull/2442
909,677,029
MDExOlB1bGxSZXF1ZXN0NjYwMjE1ODY1
2,442
add english language tags for ~100 datasets
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Fixing the tags of all the datasets is out of scope for this PR so I'm merging even though the CI fails because of the missing tags" ]
1,622,651,096,000
1,622,800,300,000
1,622,800,299,000
MEMBER
null
As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing so adding into the READMEs. Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2442/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2442", "html_url": "https://github.com/huggingface/datasets/pull/2442", "diff_url": "https://github.com/huggingface/datasets/pull/2442.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2442.patch", "merged_at": 1622800299000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2441/comments
https://api.github.com/repos/huggingface/datasets/issues/2441/events
https://github.com/huggingface/datasets/issues/2441
908,554,713
MDU6SXNzdWU5MDg1NTQ3MTM=
2,441
DuplicatedKeysError on personal dataset
{ "login": "lucaguarro", "id": 22605313, "node_id": "MDQ6VXNlcjIyNjA1MzEz", "avatar_url": "https://avatars.githubusercontent.com/u/22605313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucaguarro", "html_url": "https://github.com/lucaguarro", "followers_url": "https://api.github.com/users/lucaguarro/followers", "following_url": "https://api.github.com/users/lucaguarro/following{/other_user}", "gists_url": "https://api.github.com/users/lucaguarro/gists{/gist_id}", "starred_url": "https://api.github.com/users/lucaguarro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucaguarro/subscriptions", "organizations_url": "https://api.github.com/users/lucaguarro/orgs", "repos_url": "https://api.github.com/users/lucaguarro/repos", "events_url": "https://api.github.com/users/lucaguarro/events{/privacy}", "received_events_url": "https://api.github.com/users/lucaguarro/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! In your dataset script you must be yielding examples like\r\n```python\r\nfor line in file:\r\n ...\r\n yield key, {...}\r\n```\r\n\r\nSince `datasets` 1.7.0 we enforce the keys to be unique.\r\nHowever it looks like your examples generator creates duplicate keys: at least two examples have key 0.\r\n\r\nYou can fix that by making sure that your keys are unique.\r\n\r\nFor example if you use a counter to define the key of each example, make sure that your counter is not reset to 0 in during examples generation (between two open files for examples).\r\n\r\nLet me know if you have other questions :)", "Yup, I indeed was generating duplicate keys. Fixed it and now it's working." ]
1,622,570,381,000
1,622,850,603,000
1,622,850,603,000
NONE
null
## Describe the bug Ever since today, I have been getting a DuplicatedKeysError while trying to load my dataset from my own script. Error returned when running this line: `dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py')` Note that my script was working fine with earlier versions of the Datasets library. Cannot say with 100% certainty if I have been doing something wrong with my dataset script this whole time or if this is simply a bug with the new version of datasets. ## Steps to reproduce the bug I cannot provide code to reproduce the error as I am working with my own dataset. I can however provide my script if requested. ## Expected results For my data to be loaded. ## Actual results **DuplicatedKeysError** exception is raised ``` Downloading and preparing dataset good_reads_practice_dataset/main_domain (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/good_reads_practice_dataset/main_domain/1.1.0/64ff7c3fee2693afdddea75002eb6887d4fedc3d812ae3622128c8504ab21655... --------------------------------------------------------------------------- DuplicatedKeysError Traceback (most recent call last) <ipython-input-6-c342ea0dae9d> in <module>() ----> 1 dataset = load_dataset('/content/drive/MyDrive/Thesis/Datasets/book_preprocessing/goodreads_maharjan_trimmed_and_nered/goodreadsnered.py') 5 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs) 749 try_from_hf_gcs=try_from_hf_gcs, 750 base_path=base_path, --> 751 use_auth_token=use_auth_token, 752 ) 753 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 573 if not downloaded_from_gcs: 574 self._download_and_prepare( --> 575 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 576 ) 577 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 650 try: 651 # Prepare split will record examples associated to the split --> 652 self._prepare_split(split_generator, **prepare_split_kwargs) 653 except OSError as e: 654 raise OSError( /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator) 990 writer.write(example, key) 991 finally: --> 992 num_examples, num_bytes = writer.finalize() 993 994 split_generator.split_info.num_examples = num_examples /usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in finalize(self, close_stream) 407 # In case current_examples < writer_batch_size, but user uses finalize() 408 if self._check_duplicates: --> 409 self.check_duplicate_keys() 410 # Re-intializing to empty list for next batch 411 self.hkey_record = [] /usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self) 347 for hash, key in self.hkey_record: 348 if hash in tmp_record: --> 349 raise DuplicatedKeysError(key) 350 else: 351 tmp_record.add(hash) DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 0 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.7.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.7.9 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2441/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2440
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2440/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2440/comments
https://api.github.com/repos/huggingface/datasets/issues/2440/events
https://github.com/huggingface/datasets/issues/2440
908,521,954
MDU6SXNzdWU5MDg1MjE5NTQ=
2,440
Remove `extended` field from dataset tagger
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too", "Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.\r\nThe repo of the tagger is here if someone wants to give this a try: https://github.com/huggingface/datasets-tagging\r\nOtherwise I can probably fix it next week", "I've opened a PR on `datasets-tagging` to fix the issue 🚀 ", "thanks ! this is fixed now" ]
1,622,567,922,000
1,623,229,591,000
1,623,229,590,000
MEMBER
null
## Describe the bug While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included: ``` dataset_name = 'arcd' @pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path)) def test_changed_dataset_card(dataset_name): card_path = repo_path / "datasets" / dataset_name / "README.md" assert card_path.exists() error_messages = [] try: ReadMe.from_readme(card_path) except Exception as readme_error: error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}") try: DatasetMetadata.from_readme(card_path) except Exception as metadata_error: error_messages.append( f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}" ) if error_messages: > raise ValueError("\n".join(error_messages)) E ValueError: The following issues have been found in the dataset cards: E YAML tags: E __init__() got an unexpected keyword argument 'extended' tests/test_dataset_cards.py:70: ValueError ``` Consider either removing this tag from the tagger or including it as part of the validation step in the CI. cc @yjernite
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2440/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/2440/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2439/comments
https://api.github.com/repos/huggingface/datasets/issues/2439/events
https://github.com/huggingface/datasets/pull/2439
908,511,983
MDExOlB1bGxSZXF1ZXN0NjU5MTkzMDE3
2,439
Better error message when trying to access elements of a DatasetDict without specifying the split
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,567,072,000
1,623,773,003,000
1,623,056,075,000
MEMBER
null
As mentioned in #2437 it'd be nice to to have an indication to the users when they try to access an element of a DatasetDict without specifying the split name. cc @thomwolf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2439/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2439", "html_url": "https://github.com/huggingface/datasets/pull/2439", "diff_url": "https://github.com/huggingface/datasets/pull/2439.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2439.patch", "merged_at": 1623056075000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2438
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2438/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2438/comments
https://api.github.com/repos/huggingface/datasets/issues/2438/events
https://github.com/huggingface/datasets/pull/2438
908,461,914
MDExOlB1bGxSZXF1ZXN0NjU5MTQ5Njg0
2,438
Fix NQ features loading: reorder fields of features to match nested fields order in arrow data
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,563,770,000
1,622,797,351,000
1,622,797,351,000
MEMBER
null
As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features doesn't match. The order is important since it matters for the underlying arrow schema. To fix that, I re-order the features based on the arrow schema: ```python inferred_features = Features.from_arrow_schema(arrow_table.schema) self.info.features = self.info.features.reorder_fields_as(inferred_features) assert self.info.features.type == inferred_features.type ``` The re-ordering is a recursive function. It takes into account that the `Sequence` feature type is a struct of list and not a list of struct. Now it's possible to load `natural_questions` again :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2438/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2438/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2438", "html_url": "https://github.com/huggingface/datasets/pull/2438", "diff_url": "https://github.com/huggingface/datasets/pull/2438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2438.patch", "merged_at": 1622797350000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2437/comments
https://api.github.com/repos/huggingface/datasets/issues/2437/events
https://github.com/huggingface/datasets/pull/2437
908,108,882
MDExOlB1bGxSZXF1ZXN0NjU4ODUwNTkw
2,437
Better error message when using the wrong load_from_disk
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We also have other cases where people are lost between Dataset and DatasetDict, maybe let's gather and solve them all here?\r\n\r\nFor instance, I remember that some people thought they would request a single element of a split but are calling this on a DatasetDict. Maybe here also a better error message when the split requested in not in the dict? pointing to the list of split and the fact that this is a datasetdict containing several datasets?", "Good idea, let me add a better error message for this case too", "As a digression from the topic of this PR, IMHO I think that the difference between Dataset and DatasetDict is an additional abstraction complexity that confuses \"typical\" end users. I think a user expects a \"Dataset\" (whatever it contains multiple or a single split) and maybe it could be interesting to try to simplify the user-facing API as much as possible to hide this complexity from the end user.\r\n\r\nI don't know your opinion about this, but it might be worth discussing...\r\n\r\nFor example, I really like the line of the solution of using the function `load_from_disk`, which hides the previous mentioned complexity and handles under the hood whether Dataset/DatasetDict instances should be created...", "I totally agree, I just haven't found a solution that doesn't imply major breaking changes x)", "Yes I would also like to find a better solution. Do we have any solution actually? (even implying breaking changes)\r\n\r\nHere is a proposal for discussion and refined (and potential abandon if it's not good enough):\r\n- let's consider that a DatasetDict is also a Dataset with the various split concatenated one after the other\r\n- let's disallow the use of integers in split names (probably not a very big breaking change)\r\n- when you index with integers you access the examples progressively in split after the other is finished (in a deterministic order)\r\n- when you index with strings/split name you have the same behavior as now (full backward compat)\r\n- let's then also have all the methods of a Dataset on the DatasetDict", "The end goal would be to merge both `Dataset` and `DatasetDict` object in a single object that would be (pretty much totally) backward compatible with both.", "I like the direction :) I think it can make sense to concatenate them.\r\n\r\nThere are a few things that I we could discuss if we want to merge Dataset and DatasetDict:\r\n1. what happens if you index by a string ? Does it return the column or the split ? We could disallow conflicts between column names and split names to avoid ambiguities. It can be surprising to be able to get a column or a split using the same indexing feature\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(...)\r\ndataset[\"train\"]\r\ndataset[\"input_ids\"]\r\n```\r\n2. what happens when you iterate over the object ? I guess it should iterate over the examples as a Dataset object, but a DatasetDict used to iterate over the splits as they are the dictionary keys. This is a breaking change that we can discuss.\r\n\r\nMoreover regarding your points:\r\n- integers are not allowed as split names already\r\n- it's definitely doable to have all the methods. 
Maybe some of them like `train_test_split` that is currently only available for Dataset can be tweaked to work for a split dataset", "Instead of suggesting the use of `Dataset.load_from_disk` and `DatasetDict.load_from_disk`, the error message now suggests to use `datasets.load_from_disk` directly", "Merging the error message improvement, feel free to continue the discussion here or in a github issue" ]
1,622,540,602,000
1,623,175,430,000
1,623,175,430,000
MEMBER
null
As mentioned in #2424, the error message when one tries to use `Dataset.load_from_disk` to load a DatasetDict object (or _vice versa_) can be improved. I added a suggestion in the error message to let users know that they should use the other one.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2437/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2437", "html_url": "https://github.com/huggingface/datasets/pull/2437", "diff_url": "https://github.com/huggingface/datasets/pull/2437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2437.patch", "merged_at": 1623175429000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2436
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2436/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2436/comments
https://api.github.com/repos/huggingface/datasets/issues/2436/events
https://github.com/huggingface/datasets/pull/2436
908,100,211
MDExOlB1bGxSZXF1ZXN0NjU4ODQzMzQy
2,436
Update DatasetMetadata and ReadMe
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,539,957,000
1,623,677,007,000
1,623,677,006,000
CONTRIBUTOR
null
This PR contains the changes discussed in #2395. **Edit**: In addition to those changes, I'll be updating the `ReadMe` as follows: Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors. One way to make `ReadMe` consistent with `DatasetMetadata` and add a separate `.validate()` method is to throw separate parsing and validation errors. This way, we don't have to throw validation errors, but only parsing errors in `__init__ ()`. We can have an option in `__init__()` to suppress parsing errors so that an object is created for validation. Doing this will allow the user to get all the errors in one go. In `test_dataset_cards` , we are already catching error messages and appending to a list. This can be done for `ReadMe()` for parsing errors, and `ReadMe(...,suppress_errors=True); readme.validate()` for validation, separately. **Edit 2**: The only parsing issue we have as of now is multiple headings at the same level with the same name. I assume this will happen very rarely, but it is still better to throw an error than silently pick one of them. It should be okay to separate it this way. Wdyt @lhoestq ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2436/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2436", "html_url": "https://github.com/huggingface/datasets/pull/2436", "diff_url": "https://github.com/huggingface/datasets/pull/2436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2436.patch", "merged_at": 1623677006000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2435/comments
https://api.github.com/repos/huggingface/datasets/issues/2435/events
https://github.com/huggingface/datasets/pull/2435
907,505,531
MDExOlB1bGxSZXF1ZXN0NjU4MzQzNDE2
2,435
Insert Extractive QA templates for SQuAD-like datasets
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "hi @lhoestq @SBrandeis i've now added the missing YAML tags, so this PR should be good to go :)", "urgh, the windows tests are failing because of encoding issues 😢 \r\n\r\n```\r\ndataset_name = 'squad_kor_v1'\r\n\r\n @pytest.mark.parametrize(\"dataset_name\", get_changed_datasets(repo_path))\r\n def test_changed_dataset_card(dataset_name):\r\n card_path = repo_path / \"datasets\" / dataset_name / \"README.md\"\r\n assert card_path.exists()\r\n error_messages = []\r\n try:\r\n ReadMe.from_readme(card_path)\r\n except Exception as readme_error:\r\n error_messages.append(f\"The following issues have been found in the dataset cards:\\nREADME:\\n{readme_error}\")\r\n try:\r\n DatasetMetadata.from_readme(card_path)\r\n except Exception as metadata_error:\r\n error_messages.append(\r\n f\"The following issues have been found in the dataset cards:\\nYAML tags:\\n{metadata_error}\"\r\n )\r\n \r\n if error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE README:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\nE The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE 'charmap' codec can't decode byte 0x90 in position 2283: character maps to <undefined>\r\n```", "Seems like the encoding issues on windows is also being tackled in #2418 - will see if this solves the problem in the current PR" ]
1,622,470,151,000
1,622,730,870,000
1,622,730,747,000
MEMBER
null
This PR adds task templates for 9 SQuAD-like datasets with the following properties: * 1 config * A schema that matches the `squad` one (i.e. same column names, especially for the nested `answers` column because the current implementation does not support casting with mismatched columns. see #2434) * Less than 20GB (my laptop can't handle more right now) The aim of this PR is to provide a few datasets to experiment with the task template integration in other libraries / services. PR #2429 should be merged before this one. cc @abhi1thakur
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2435/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2435/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2435", "html_url": "https://github.com/huggingface/datasets/pull/2435", "diff_url": "https://github.com/huggingface/datasets/pull/2435.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2435.patch", "merged_at": 1622730747000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2433/comments
https://api.github.com/repos/huggingface/datasets/issues/2433/events
https://github.com/huggingface/datasets/pull/2433
907,488,711
MDExOlB1bGxSZXF1ZXN0NjU4MzI5MDQ4
2,433
Fix DuplicatedKeysError in adversarial_qa
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,468,927,000
1,622,537,531,000
1,622,537,531,000
CONTRIBUTOR
null
Fixes #2431
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2433/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2433/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2433", "html_url": "https://github.com/huggingface/datasets/pull/2433", "diff_url": "https://github.com/huggingface/datasets/pull/2433.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2433.patch", "merged_at": 1622537530000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2432
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2432/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2432/comments
https://api.github.com/repos/huggingface/datasets/issues/2432/events
https://github.com/huggingface/datasets/pull/2432
907,462,881
MDExOlB1bGxSZXF1ZXN0NjU4MzA3MTE1
2,432
Fix CI six installation on linux
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,466,936,000
1,622,467,027,000
1,622,467,026,000
MEMBER
null
For some reason we end up with this error in the linux CI when running pip install .[tests] ``` pip._vendor.resolvelib.resolvers.InconsistentCandidate: Provided candidate AlreadyInstalledCandidate(six 1.16.0 (/usr/local/lib/python3.6/site-packages)) does not satisfy SpecifierRequirement('six>1.9'), SpecifierRequirement('six>1.9'), SpecifierRequirement('six>=1.11'), SpecifierRequirement('six~=1.15'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0'), SpecifierRequirement('six>=1.11.0'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.6.1'), SpecifierRequirement('six>=1.9'), SpecifierRequirement('six>=1.5'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six<2.0'), SpecifierRequirement('six'), SpecifierRequirement('six'), SpecifierRequirement('six~=1.15.0'), SpecifierRequirement('six'), SpecifierRequirement('six<2.0,>=1.6.1'), SpecifierRequirement('six'), SpecifierRequirement('six>=1.5.2'), SpecifierRequirement('six>=1.9.0') ``` example CI failure here: https://app.circleci.com/pipelines/github/huggingface/datasets/6200/workflows/b64fdec9-f9e6-431c-acd7-e9f2c440c568/jobs/38247 The main version requirement comes from tensorflow: `six~=1.15.0` So I pinned the six version to this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2432/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2432/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2432", "html_url": "https://github.com/huggingface/datasets/pull/2432", "diff_url": "https://github.com/huggingface/datasets/pull/2432.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2432.patch", "merged_at": 1622467026000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2431
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2431/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2431/comments
https://api.github.com/repos/huggingface/datasets/issues/2431/events
https://github.com/huggingface/datasets/issues/2431
907,413,691
MDU6SXNzdWU5MDc0MTM2OTE=
2,431
DuplicatedKeysError when trying to load adversarial_qa
{ "login": "hanss0n", "id": 21348833, "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hanss0n", "html_url": "https://github.com/hanss0n", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "repos_url": "https://api.github.com/users/hanss0n/repos", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting !\r\n#2433 fixed the issue, thanks @mariosasko :)\r\n\r\nWe'll do a patch release soon of the library.\r\nIn the meantime, you can use the fixed version of adversarial_qa by adding `script_version=\"master\"` in `load_dataset`" ]
1,622,463,079,000
1,622,537,643,000
1,622,537,531,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python dataset = load_dataset('adversarial_qa', 'adversarialQA') ``` ## Expected results The dataset should be loaded into memory ## Actual results >DuplicatedKeysError: FAILURE TO GENERATE DATASET ! >Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4 >Keys should be unique and deterministic in nature > > >During handling of the above exception, another exception occurred: > >DuplicatedKeysError Traceback (most recent call last) > >/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self) > 347 for hash, key in self.hkey_record: > 348 if hash in tmp_record: >--> 349 raise DuplicatedKeysError(key) > 350 else: > 351 tmp_record.add(hash) > >DuplicatedKeysError: FAILURE TO GENERATE DATASET ! >Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4 >Keys should be unique and deterministic in nature ## Environment info - `datasets` version: 1.7.0 - Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2431/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2431/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2430/comments
https://api.github.com/repos/huggingface/datasets/issues/2430/events
https://github.com/huggingface/datasets/pull/2430
907,322,595
MDExOlB1bGxSZXF1ZXN0NjU4MTg3Njkw
2,430
Add version-specific BibTeX
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Maybe we should only keep one citation ?\r\ncc @thomwolf @yjernite ", "For info:\r\n- The one automatically generated by Zenodo is version-specific, and a new one will be generated after each release.\r\n- Zenodo has also generated a project-specific DOI (they call it *Concept DOI* as opposed to *Version DOI*), but currently this only redirects to the DOI page of the latest version.\r\n- All the information automatically generated by Zenodo can be corrected/customized if necessary.\r\n - If we decide to correct/update metadata, take into account that there are the following fields (among others): Authors, Contributors, Title, Description, Keywords, Additional Notes, License,...\r\n\r\nAccording to Zenodo: https://help.zenodo.org/#versioning\r\n> **Which DOI should I use in citations?**\r\n> \r\n> You should normally always use the DOI for the specific version of your record in citations. This is to ensure that other researchers can access the exact research artefact you used for reproducibility. By default, Zenodo uses the specific version to generate citations.\r\n> \r\n> You can use the Concept DOI representing all versions in citations when it is desirable to cite an evolving research artifact, without being specific about the version.", "Thanks for the details ! As zenodo says we should probably just show the versioned DOI. And we can remove the old citation.", "I have removed the old citation.\r\n\r\nWhat about the new one? Should we customize it? I have fixed some author names (replaced nickname with first and family names). Note that the list of authors is created automatically by Zenodo from this list: https://github.com/huggingface/datasets/graphs/contributors\r\nI do not know if this default automatic list of authors is what we want to show in the citation..." ]
1,622,455,542,000
1,623,138,802,000
1,623,138,802,000
MEMBER
null
As pointed out by @lhoestq in #2411, after the creation of the Zenodo DOI for Datasets, a new BibTeX entry is created with each release. This PR adds a version-specific BibTeX entry, besides the existing one which is generic for the project. See version-specific BibTeX entry here: https://zenodo.org/record/4817769/export/hx#.YLSyd6j7RPY
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2430/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2430", "html_url": "https://github.com/huggingface/datasets/pull/2430", "diff_url": "https://github.com/huggingface/datasets/pull/2430.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2430.patch", "merged_at": 1623138802000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2429
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2429/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2429/comments
https://api.github.com/repos/huggingface/datasets/issues/2429/events
https://github.com/huggingface/datasets/pull/2429
907,321,665
MDExOlB1bGxSZXF1ZXN0NjU4MTg2ODc0
2,429
Rename QuestionAnswering template to QuestionAnsweringExtractive
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> I like having \"extractive\" in the name to make things explicit. However this creates an inconsistency with transformers.\r\n> \r\n> See\r\n> https://huggingface.co/transformers/task_summary.html#extractive-question-answering\r\n> \r\n> But this is minor IMO and I'm ok with this renaming\r\n\r\nyes i chose this convention because it allows us to match the `QuestionAnsweringXxx` naming and i think it's better to have `task_name-subtask_name` should auto-complete ever become part of the Hub :)" ]
1,622,455,482,000
1,622,476,646,000
1,622,476,644,000
MEMBER
null
Following the discussion with @thomwolf in #2255, this PR renames the QA template to distinguish extractive vs abstractive QA. The abstractive template will be added in a future PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2429/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2429/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2429", "html_url": "https://github.com/huggingface/datasets/pull/2429", "diff_url": "https://github.com/huggingface/datasets/pull/2429.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2429.patch", "merged_at": 1622476644000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2428/comments
https://api.github.com/repos/huggingface/datasets/issues/2428/events
https://github.com/huggingface/datasets/pull/2428
907,169,746
MDExOlB1bGxSZXF1ZXN0NjU4MDU2MjI3
2,428
Add copyright info for wiki_lingua dataset
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Build fails but this change should not be the reason...", "rebased on master", "rebased on master" ]
1,622,445,772,000
1,622,802,153,000
1,622,802,153,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2428/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2428", "html_url": "https://github.com/huggingface/datasets/pull/2428", "diff_url": "https://github.com/huggingface/datasets/pull/2428.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2428.patch", "merged_at": 1622802153000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2427
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2427/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2427/comments
https://api.github.com/repos/huggingface/datasets/issues/2427/events
https://github.com/huggingface/datasets/pull/2427
907,162,923
MDExOlB1bGxSZXF1ZXN0NjU4MDUwMjAx
2,427
Add copyright info to MLSUM dataset
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Build fails but this change should not be the reason...", "rebased on master" ]
1,622,445,357,000
1,622,800,430,000
1,622,800,430,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2427/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2427", "html_url": "https://github.com/huggingface/datasets/pull/2427", "diff_url": "https://github.com/huggingface/datasets/pull/2427.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2427.patch", "merged_at": 1622800429000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2426/comments
https://api.github.com/repos/huggingface/datasets/issues/2426/events
https://github.com/huggingface/datasets/issues/2426
906,473,546
MDU6SXNzdWU5MDY0NzM1NDY=
2,426
Saving Graph/Structured Data in Datasets
{ "login": "gsh199449", "id": 3295342, "node_id": "MDQ6VXNlcjMyOTUzNDI=", "avatar_url": "https://avatars.githubusercontent.com/u/3295342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gsh199449", "html_url": "https://github.com/gsh199449", "followers_url": "https://api.github.com/users/gsh199449/followers", "following_url": "https://api.github.com/users/gsh199449/following{/other_user}", "gists_url": "https://api.github.com/users/gsh199449/gists{/gist_id}", "starred_url": "https://api.github.com/users/gsh199449/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gsh199449/subscriptions", "organizations_url": "https://api.github.com/users/gsh199449/orgs", "repos_url": "https://api.github.com/users/gsh199449/repos", "events_url": "https://api.github.com/users/gsh199449/events{/privacy}", "received_events_url": "https://api.github.com/users/gsh199449/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "It should probably work out of the box to save structured data. If you want to show an example we can help you.", "An example of a toy dataset is like:\r\n```json\r\n[\r\n {\r\n \"name\": \"mike\",\r\n \"friends\": [\r\n \"tom\",\r\n \"lily\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"aaaaa\",\r\n \"reader\": [\r\n \"tom\",\r\n \"lucy\"\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"tom\",\r\n \"friends\": [\r\n \"mike\",\r\n \"bbb\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"xxxxx\",\r\n \"reader\": [\r\n \"tom\",\r\n \"qqqq\"\r\n ]\r\n }\r\n ]\r\n }\r\n]\r\n```\r\nWe can use the friendship relation to build a directional graph, and a user node can be represented using the articles written by himself. And the relationship between articles can be built when the article has read by the same user.\r\nThis dataset can be used to model the heterogeneous relationship between users and articles, and this graph can be used to build recommendation systems to recommend articles to the user, or potential friends to the user.", "Hi,\r\n\r\nyou can do the following to load this data into a `Dataset`:\r\n```python\r\nfrom datasets import Dataset\r\nexamples = [\r\n {\r\n \"name\": \"mike\",\r\n \"friends\": [\r\n \"tom\",\r\n \"lily\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"aaaaa\",\r\n \"reader\": [\r\n \"tom\",\r\n \"lucy\"\r\n ]\r\n }\r\n ]\r\n },\r\n {\r\n \"name\": \"tom\",\r\n \"friends\": [\r\n \"mike\",\r\n \"bbb\"\r\n ],\r\n \"articles\": [\r\n {\r\n \"title\": \"xxxxx\",\r\n \"reader\": [\r\n \"tom\",\r\n \"qqqq\"\r\n ]\r\n }\r\n ]\r\n }\r\n]\r\n\r\nkeys = examples[0].keys()\r\nvalues = [ex.values() for ex in examples]\r\ndataset = Dataset.from_dict({k: list(v) for k, v in zip(keys, zip(*values))})\r\n```\r\n\r\nLet us know if this works for you.", "Thank you so much, and that works! I also have a question that if the dataset is very large, that cannot be loaded into the memory. How to create the Dataset?", "If your dataset doesn't fit in memory, store it in a local file and load it from there. Check out [this chapter](https://huggingface.co/docs/datasets/master/loading_datasets.html#from-local-files) in the docs for more info.", "Nice! Thanks for your help." ]
1,622,295,321,000
1,622,596,863,000
1,622,596,863,000
NONE
null
Thanks for this amazing library! My question is: I have structured data that is organized as a graph, for example a dataset with users' friendship relations and users' articles. When I try to save a Python dict in the dataset, I get the error "did not recognize Python value type when inferring an Arrow data type". I know that storing a Python dict in pyarrow datasets is not the best practice, but I have no idea how to save structured data in Datasets. Thank you very much for your help.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2426/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2425/comments
https://api.github.com/repos/huggingface/datasets/issues/2425/events
https://github.com/huggingface/datasets/pull/2425
906,385,457
MDExOlB1bGxSZXF1ZXN0NjU3NDAwMjM3
2,425
Fix Docstring Mistake: dataset vs. metric
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "IMO this PR is ready for review. I do not know why tests fail...", "The CI fail is unrelated to this PR, and it has been fixed on master, merging :)", "> I just have one comment: we use rouge, not rogue :p\r\n\r\nOops!", "rebased on master" ]
1,622,268,593,000
1,622,535,484,000
1,622,535,484,000
CONTRIBUTOR
null
PR to fix #2412
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2425/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2425", "html_url": "https://github.com/huggingface/datasets/pull/2425", "diff_url": "https://github.com/huggingface/datasets/pull/2425.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2425.patch", "merged_at": 1622535484000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2424
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2424/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2424/comments
https://api.github.com/repos/huggingface/datasets/issues/2424/events
https://github.com/huggingface/datasets/issues/2424
906,193,679
MDU6SXNzdWU5MDYxOTM2Nzk=
2,424
load_from_disk and save_to_disk are not compatible with each other
{ "login": "roholazandie", "id": 7584674, "node_id": "MDQ6VXNlcjc1ODQ2NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roholazandie", "html_url": "https://github.com/roholazandie", "followers_url": "https://api.github.com/users/roholazandie/followers", "following_url": "https://api.github.com/users/roholazandie/following{/other_user}", "gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}", "starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions", "organizations_url": "https://api.github.com/users/roholazandie/orgs", "repos_url": "https://api.github.com/users/roholazandie/repos", "events_url": "https://api.github.com/users/roholazandie/events{/privacy}", "received_events_url": "https://api.github.com/users/roholazandie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\n`load_dataset` returns an instance of `DatasetDict` if `split` is not specified, so instead of `Dataset.load_from_disk`, use `DatasetDict.load_from_disk` to load the dataset from disk.", "Thanks it worked!", "Though I see a stream of issues open by people lost between datasets and datasets dicts so maybe there is here something that could be better in terms of UX. Could be better error handling or something else smarter to even avoid said errors but maybe we should think about this. Reopening to use this issue as a discussion place but feel free to open a new open if you prefer @lhoestq @albertvillanova ", "We should probably improve the error message indeed.\r\n\r\nAlso note that there exists a function `load_from_disk` that can load a Dataset or a DatasetDict. Under the hood it calls either `Dataset.load_from_disk` or `DatasetDict.load_from_disk`:\r\n\r\n\r\n```python\r\nfrom datasets import load_from_disk\r\n\r\ndataset_dict = load_from_disk(\"path/to/dataset/dict\")\r\nsingle_dataset = load_from_disk(\"path/to/single/dataset\")\r\n```", "I just opened #2437 to improve the error message", "Superseded by #2462 " ]
1,622,243,230,000
1,623,180,152,000
1,623,180,152,000
NONE
null
## Describe the bug load_from_disk and save_to_disk are not compatible. When I use save_to_disk to save a dataset to disk it works perfectly, but given the same directory, load_from_disk throws an error that it can't find state.json. It looks like load_from_disk only works on one split. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("art") dataset.save_to_disk("mydir") d = Dataset.load_from_disk("mydir") ``` ## Expected results It is expected that these two functions be the reverse of each other without more manipulation. ## Actual results FileNotFoundError: [Errno 2] No such file or directory: 'mydir/art/state.json' ## Environment info - `datasets` version: 1.6.2 - Platform: Linux-5.4.0-73-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2424/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2424/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2423
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2423/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2423/comments
https://api.github.com/repos/huggingface/datasets/issues/2423/events
https://github.com/huggingface/datasets/pull/2423
905,935,753
MDExOlB1bGxSZXF1ZXN0NjU2OTc5MjA5
2,423
add `desc` in `map` for `DatasetDict` object
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The CI error is unrelated to the PR, merging", "@lhoestq, can we release this feature if you guys are planning for any patch release for Datasets. It'll slow down [#11927](https://github.com/huggingface/transformers/pull/11927) otherwise :/ ", "Sure definitely, having a discrepancy between Dataset.map and DatasetDict.map is an issue that we should fix and include in a patch release. Will do it in the coming days" ]
1,622,230,124,000
1,622,472,683,000
1,622,466,484,000
CONTRIBUTOR
null
`desc` in `map` currently only works with `Dataset` objects. This PR adds support for `DatasetDict` objects as well
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2423/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2423/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2423", "html_url": "https://github.com/huggingface/datasets/pull/2423", "diff_url": "https://github.com/huggingface/datasets/pull/2423.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2423.patch", "merged_at": 1622466484000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2422/comments
https://api.github.com/repos/huggingface/datasets/issues/2422/events
https://github.com/huggingface/datasets/pull/2422
905,568,548
MDExOlB1bGxSZXF1ZXN0NjU2NjM3MzY1
2,422
Fix save_to_disk nested features order in dataset_info.json
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,214,208,000
1,622,215,617,000
1,622,215,616,000
MEMBER
null
Fix issue https://github.com/huggingface/datasets/issues/2267. The order of the nested features matters (pyarrow limitation), but the save_to_disk method was saving the feature types as JSON with `sort_keys=True`, which was breaking the order of the nested features.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2422/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2422", "html_url": "https://github.com/huggingface/datasets/pull/2422", "diff_url": "https://github.com/huggingface/datasets/pull/2422.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2422.patch", "merged_at": 1622215616000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2421/comments
https://api.github.com/repos/huggingface/datasets/issues/2421/events
https://github.com/huggingface/datasets/pull/2421
905,549,756
MDExOlB1bGxSZXF1ZXN0NjU2NjIwMTM3
2,421
doc: fix typo HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,213,530,000
1,622,800,365,000
1,622,800,365,000
CONTRIBUTOR
null
MAX_MEMORY_DATASET_SIZE_IN_BYTES should be HF_MAX_MEMORY_DATASET_SIZE_IN_BYTES
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2421/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2421/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2421", "html_url": "https://github.com/huggingface/datasets/pull/2421", "diff_url": "https://github.com/huggingface/datasets/pull/2421.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2421.patch", "merged_at": 1622800365000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2420
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2420/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2420/comments
https://api.github.com/repos/huggingface/datasets/issues/2420/events
https://github.com/huggingface/datasets/pull/2420
904,821,772
MDExOlB1bGxSZXF1ZXN0NjU1OTQ1ODgw
2,420
Updated Dataset Description
{ "login": "binny-mathew", "id": 10741860, "node_id": "MDQ6VXNlcjEwNzQxODYw", "avatar_url": "https://avatars.githubusercontent.com/u/10741860?v=4", "gravatar_id": "", "url": "https://api.github.com/users/binny-mathew", "html_url": "https://github.com/binny-mathew", "followers_url": "https://api.github.com/users/binny-mathew/followers", "following_url": "https://api.github.com/users/binny-mathew/following{/other_user}", "gists_url": "https://api.github.com/users/binny-mathew/gists{/gist_id}", "starred_url": "https://api.github.com/users/binny-mathew/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/binny-mathew/subscriptions", "organizations_url": "https://api.github.com/users/binny-mathew/orgs", "repos_url": "https://api.github.com/users/binny-mathew/repos", "events_url": "https://api.github.com/users/binny-mathew/events{/privacy}", "received_events_url": "https://api.github.com/users/binny-mathew/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,622,185,851,000
1,623,327,095,000
1,623,327,095,000
CONTRIBUTOR
null
Added Point of contact information and several other details about the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2420/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2420/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2420", "html_url": "https://github.com/huggingface/datasets/pull/2420", "diff_url": "https://github.com/huggingface/datasets/pull/2420.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2420.patch", "merged_at": 1623327095000 }
true
https://api.github.com/repos/huggingface/datasets/issues/2419
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2419/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2419/comments
https://api.github.com/repos/huggingface/datasets/issues/2419/events
https://github.com/huggingface/datasets/pull/2419
904,347,339
MDExOlB1bGxSZXF1ZXN0NjU1NTA1OTM1
2,419
adds license information for DailyDialog.
{ "login": "aditya2211", "id": 11574558, "node_id": "MDQ6VXNlcjExNTc0NTU4", "avatar_url": "https://avatars.githubusercontent.com/u/11574558?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aditya2211", "html_url": "https://github.com/aditya2211", "followers_url": "https://api.github.com/users/aditya2211/followers", "following_url": "https://api.github.com/users/aditya2211/following{/other_user}", "gists_url": "https://api.github.com/users/aditya2211/gists{/gist_id}", "starred_url": "https://api.github.com/users/aditya2211/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aditya2211/subscriptions", "organizations_url": "https://api.github.com/users/aditya2211/orgs", "repos_url": "https://api.github.com/users/aditya2211/repos", "events_url": "https://api.github.com/users/aditya2211/events{/privacy}", "received_events_url": "https://api.github.com/users/aditya2211/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Thanks! Can you also add it as metadata in the YAML block at the top of the file?\r\n\r\nShould be in the form:\r\n\r\n```\r\nlicenses:\r\n- cc-by-sa-4.0\r\n```", "seems like we need to add all the other tags ? \r\n``` \r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset cards:\r\nE YAML tags:\r\nE __init__() missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'languages', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n```", "I'll let @lhoestq or @yjernite chime in (and maybe complete/merge). Thanks!", "Looks like CircleCI has an incident. Let's wait for it to be working again and make sure the CI is green", "The remaining error is unrelated to this PR, merging" ]
1,622,156,622,000
1,622,467,012,000
1,622,467,012,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2419/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2419/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2419", "html_url": "https://github.com/huggingface/datasets/pull/2419", "diff_url": "https://github.com/huggingface/datasets/pull/2419.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2419.patch", "merged_at": 1622467012000 }
true