Schema (one row per GitHub issue or pull request in `huggingface/datasets`):

| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 58–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72–75 |
| comments_url | stringlengths | 67–70 |
| events_url | stringlengths | 65–68 |
| html_url | stringlengths | 46–51 |
| id | int64 | 599M–1.09B |
| node_id | stringlengths | 18–32 |
| number | int64 | 1–3.49k |
| title | stringlengths | 1–276 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | null | |
| assignees | sequence | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,587B–1,641B (Unix epoch, ms) |
| updated_at | int64 | 1,587B–1,641B (Unix epoch, ms) |
| closed_at | int64 | 1,587B–1,641B (Unix epoch, ms) |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0–228k |
| reactions | dict | |
| timeline_url | stringlengths | 67–70 |
| performed_via_github_app | null | |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
url: https://api.github.com/repos/huggingface/datasets/issues/3485
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3485/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3485/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3485/events
html_url: https://github.com/huggingface/datasets/issues/3485
id: 1,089,027,581
node_id: I_kwDODunzps5A6T39
number: 3,485
title: Skip columns which cannot be set to a specific format in set_format
{ "login": "tshu-w", "id": 13161779, "node_id": "MDQ6VXNlcjEzMTYxNzc5", "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tshu-w", "html_url": "https://github.com/tshu-w", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "repos_url": "https://api.github.com/users/tshu-w/repos", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- You can add the columns that you wish to set into `torch` format using `dataset.set_format("torch", ['id', 'abc'])`, so that the input batch of the transform only contains those columns.
- Sorry, I missed the `output_all_columns` arg and thought that after `dataset.set_format("torch", columns=columns)` I could only get the specific columns I assigned.
created_at: 1,640,589,595,000
updated_at: 1,640,596,027,000
closed_at: 1,640,596,027,000
author_association: NONE
active_lock_reason: null
body:
**Is your feature request related to a problem? Please describe.**
When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns.

**Describe the solution you'd like**
Skip columns which cannot be set to the specific format when calling `set_format`, instead of raising an error.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3485/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
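A minimal sketch of the workaround described in the comments above, using the public `set_format` API (the column names are toy examples, not from the issue):

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [1, 2], "text": ["a", "b"]})
# Format only the torch-convertible column as tensors...
ds.set_format("torch", columns=["id"], output_all_columns=True)
# ...while the string column is still returned alongside it, unformatted.
print(ds[0])  # {'id': tensor(1), 'text': 'a'}
```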
url: https://api.github.com/repos/huggingface/datasets/issues/3484
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3484/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3484/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3484/events
html_url: https://github.com/huggingface/datasets/issues/3484
id: 1,088,910,402
node_id: I_kwDODunzps5A53RC
number: 3,484
title: Make shape verification use ArrayXD instead of nested lists for map
{ "login": "tshu-w", "id": 13161779, "node_id": "MDQ6VXNlcjEzMTYxNzc5", "avatar_url": "https://avatars.githubusercontent.com/u/13161779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tshu-w", "html_url": "https://github.com/tshu-w", "followers_url": "https://api.github.com/users/tshu-w/followers", "following_url": "https://api.github.com/users/tshu-w/following{/other_user}", "gists_url": "https://api.github.com/users/tshu-w/gists{/gist_id}", "starred_url": "https://api.github.com/users/tshu-w/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshu-w/subscriptions", "organizations_url": "https://api.github.com/users/tshu-w/orgs", "repos_url": "https://api.github.com/users/tshu-w/repos", "events_url": "https://api.github.com/users/tshu-w/events{/privacy}", "received_events_url": "https://api.github.com/users/tshu-w/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,571,362,000
updated_at: 1,640,571,362,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO making shape verification use ArrayXD instead of nested lists for `map` can help users avoid unnecessary casts. I notice `datasets` has done something special for `input_ids` and `attention_mask`, which would also be unnecessary once this feature is added.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3484/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3484/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
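For context, a hedged sketch of the `ArrayXD` feature family the request refers to: declaring a fixed-shape `Array2D` column lets `datasets` treat the values as arrays instead of inferring nested lists (the column names and shapes here are illustrative):

```python
import numpy as np
from datasets import Array2D, Dataset, Features, Value

features = Features({"matrix": Array2D(shape=(2, 2), dtype="float32"), "label": Value("int64")})
ds = Dataset.from_dict(
    {"matrix": [np.eye(2, dtype="float32"), np.ones((2, 2), dtype="float32")], "label": [0, 1]},
    features=features,
)
print(ds.features["matrix"])  # Array2D(shape=(2, 2), dtype='float32')
```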
url: https://api.github.com/repos/huggingface/datasets/issues/3483
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3483/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3483/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3483/events
html_url: https://github.com/huggingface/datasets/pull/3483
id: 1,088,784,157
node_id: PR_kwDODunzps4wSAW4
number: 3,483
title: Remove unused phony rule from Makefile
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,529,433,000
updated_at: 1,640,529,433,000
closed_at: null
author_association: NONE
active_lock_reason: null
body: null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3483/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3483", "html_url": "https://github.com/huggingface/datasets/pull/3483", "diff_url": "https://github.com/huggingface/datasets/pull/3483.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3483.patch", "merged_at": null }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/3482
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3482/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3482/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3482/events
html_url: https://github.com/huggingface/datasets/pull/3482
id: 1,088,317,921
node_id: PR_kwDODunzps4wQqE1
number: 3,482
title: Fix duplicate keys in NewsQA
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
- Flaky tests?
created_at: 1,640,343,719,000
updated_at: 1,640,536,193,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
* Fix duplicate keys in NewsQA when loading from CSV files.
* Fix s/narqa/newsqa/ in the download-manually error message.
* Make the download-manually error message display nicely when printed; otherwise it is hard to read due to spacing issues.
* Fix the format of the license text.
* Reformat the code to make it simpler.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3482/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3482", "html_url": "https://github.com/huggingface/datasets/pull/3482", "diff_url": "https://github.com/huggingface/datasets/pull/3482.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3482.patch", "merged_at": null }
is_pull_request: true
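The duplicate-key class of bug this PR fixes arises when `_generate_examples` yields the same key twice. A common remedy, sketched here with hypothetical CSV column names (this is not the actual NewsQA script), is to derive the key from a running index:

```python
import csv

def generate_examples(csv_path):
    # Keys derived from a running index stay unique even when a story id
    # repeats across rows (illustrative sketch only).
    with open(csv_path, newline="", encoding="utf-8") as f:
        for idx, row in enumerate(csv.DictReader(f)):
            yield idx, {"question": row["question"], "answer": row["answer"]}
```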
url: https://api.github.com/repos/huggingface/datasets/issues/3481
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3481/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3481/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3481/events
html_url: https://github.com/huggingface/datasets/pull/3481
id: 1,088,308,343
node_id: PR_kwDODunzps4wQoJu
number: 3,481
title: Fix overriding of filesystem info
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,342,551,000
updated_at: 1,640,344,139,000
closed_at: 1,640,344,139,000
author_association: MEMBER
active_lock_reason: null
body:
Previously, `BaseCompressedFileFileSystem.info` was overridden and transformed from a function into a dict. This caused a bug for filesystem methods that use `self.info()`, e.g. `fs.isfile()`.

This PR:
- Adds tests for `fs.isfile` (which use `fs.info`).
- Fixes the custom `BaseCompressedFileFileSystem.info` by removing its overriding.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3481/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3481/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3481", "html_url": "https://github.com/huggingface/datasets/pull/3481", "diff_url": "https://github.com/huggingface/datasets/pull/3481.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3481.patch", "merged_at": 1640344139000 }
is_pull_request: true
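An illustrative reproduction of the failure mode (not the actual `datasets` code): shadowing a method with a dict attribute breaks every caller that expects the method.

```python
class FS:
    def info(self, path):
        return {"name": path, "type": "file"}

    def isfile(self, path):
        return self.info(path)["type"] == "file"

fs = FS()
print(fs.isfile("a.txt"))    # True
fs.info = {"name": "a.txt"}  # overriding the method with a dict...
try:
    fs.isfile("a.txt")
except TypeError as e:       # ...breaks self.info(path): 'dict' object is not callable
    print(e)
```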
url: https://api.github.com/repos/huggingface/datasets/issues/3480
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3480/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3480/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3480/events
html_url: https://github.com/huggingface/datasets/issues/3480
id: 1,088,267,110
node_id: I_kwDODunzps5A3aNm
number: 3,480
title: the compression format requested when saving a dataset in json format is not respected
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
- Thanks for reporting @SaulLu.

  At first sight I think the problem is caused because `pandas` only takes the `compression` parameter into account if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with a `None` `path_or_buf`.

  We should fix this:
  - either by handling the `compression` parameter ourselves,
  - or by refactoring to pass a non-null path or buffer to pandas.

  CC: @lhoestq
- I was wondering if we could handle the `compression` parameter ourselves? The compression types would be similar to what `pandas` offers. Initially, we could try this with 2-3 compression types and see how good/bad it is. Let me know if that sounds good; I can raise a PR for this next week.
created_at: 1,640,337,831,000
updated_at: 1,640,421,200,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
## Describe the bug
In the documentation of the `to_json` method, it is stated in the parameters that

> **to_json_kwargs** – Parameters to pass to pandas's `pandas.DataFrame.to_json`.

However, when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression to be applied? :relaxed:

## Steps to reproduce the bug
```python
my_dict = {"a": [1, 2, 3], "b": [1, 2, 3]}
```

### Result with datasets
```python
from datasets import Dataset

dataset = Dataset.from_dict(my_dict)
dataset.to_json("dic_with_datasets.jsonl.gz", compression="gzip")
!cat dic_with_datasets.jsonl.gz
```
output
```
{"a":1,"b":1}
{"a":2,"b":2}
{"a":3,"b":3}
```
Note: I would have expected to see binary data here.

### Result with pandas
```python
import pandas as pd

df = pd.DataFrame(my_dict)
df.to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip")
!cat dic_with_pandas.jsonl.gz
```
output
```
4��a�dic_with_pandas.jsonl��VJT�2�QJ��\� ��g��yƵ���������)���
```
Note: it looks like binary data.

## Expected results
I would have expected that the saved result with datasets would also be a binary file.

## Environment info
- `datasets` version: 1.16.1
- Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3480/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3480/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
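A hedged workaround consistent with the diagnosis in the comments: route the export through pandas with an explicit file path, so that the `compression` parameter is honored (a workaround sketch, not the eventual fix in `datasets`):

```python
from datasets import Dataset

dataset = Dataset.from_dict({"a": [1, 2, 3], "b": [1, 2, 3]})
# With a real path, pandas handles the gzip compression itself.
dataset.to_pandas().to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip")
```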
url: https://api.github.com/repos/huggingface/datasets/issues/3479
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3479/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3479/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3479/events
html_url: https://github.com/huggingface/datasets/issues/3479
id: 1,088,232,880
node_id: I_kwDODunzps5A3R2w
number: 3,479
title: Dataset preview is not available (I think for all Hugging Face datasets)
{ "login": "Abirate", "id": 66887439, "node_id": "MDQ6VXNlcjY2ODg3NDM5", "avatar_url": "https://avatars.githubusercontent.com/u/66887439?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Abirate", "html_url": "https://github.com/Abirate", "followers_url": "https://api.github.com/users/Abirate/followers", "following_url": "https://api.github.com/users/Abirate/following{/other_user}", "gists_url": "https://api.github.com/users/Abirate/gists{/gist_id}", "starred_url": "https://api.github.com/users/Abirate/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Abirate/subscriptions", "organizations_url": "https://api.github.com/users/Abirate/orgs", "repos_url": "https://api.github.com/users/Abirate/repos", "events_url": "https://api.github.com/users/Abirate/events{/privacy}", "received_events_url": "https://api.github.com/users/Abirate/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
state: closed
locked: false
assignee: null
assignees: [ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
milestone: null
comments:
- You're right, we have an issue today with the datasets preview. We're investigating.
- It should be fixed now. Thanks for reporting.
- Down again.
- Fixed for good.
created_at: 1,640,333,928,000
updated_at: 1,640,356,066,000
closed_at: 1,640,356,066,000
author_association: NONE
active_lock_reason: null
body:
## Dataset viewer issue for '*french_book_reviews*'

**Link:** https://huggingface.co/datasets/Abirate/french_book_reviews

**Short description of the issue:** For my dataset, the dataset preview is no longer functional (it used to work: the dataset had been added the day before and it was fine...). And, after looking over the datasets, I discovered that this issue affects all Hugging Face datasets (as of yesterday, December 23, 2021, around 10 p.m. CET).

**Am I the one who added this dataset:** Yes

**Note:** here is a screenshot showing the issue
![Dataset preview is not available for my dataset](https://user-images.githubusercontent.com/66887439/147333078-60734578-420d-4e91-8691-a90afeaa8948.jpg)

**And here for the glue dataset:**
![Dataset preview is not available for other Hugging Face datasets (glue)](https://user-images.githubusercontent.com/66887439/147333492-26fa530c-befd-4992-8361-70c51397a25a.jpg)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3479/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3479/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/3478
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3478/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3478/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3478/events
html_url: https://github.com/huggingface/datasets/pull/3478
id: 1,087,860,180
node_id: PR_kwDODunzps4wPMWq
number: 3,478
title: Extend support for streaming datasets that use os.walk
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- Nice. I'll update the dataset viewer once merged, and test on these four datasets.
created_at: 1,640,277,775,000
updated_at: 1,640,343,020,000
closed_at: 1,640,343,019,000
author_association: MEMBER
active_lock_reason: null
body:
This PR extends the support in streaming mode for datasets that use `os.walk`, by patching that function.

This PR adds support for streaming mode to these datasets:
1. autshumato
1. code_x_glue_cd_code_to_text
1. code_x_glue_tc_nl_code_search_adv
1. nchlt

CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3478/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3478", "html_url": "https://github.com/huggingface/datasets/pull/3478", "diff_url": "https://github.com/huggingface/datasets/pull/3478.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3478.patch", "merged_at": 1640343019000 }
is_pull_request: true
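To illustrate the patching idea in general terms (this is not the `datasets` implementation): a loader that calls `os.walk` can be made to traverse a synthetic remote listing by swapping the function out for the duration of the call.

```python
import os
from unittest import mock

def list_txt_files(root):
    return [os.path.join(d, name) for d, _, files in os.walk(root) for name in files if name.endswith(".txt")]

def fake_walk(root):
    # Pretend the remote archive contains these entries (illustrative only).
    yield root, [], ["train.txt", "README.bin"]

with mock.patch("os.walk", new=fake_walk):
    print(list_txt_files("https://example.com/corpus"))  # ['https://example.com/corpus/train.txt']
```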
url: https://api.github.com/repos/huggingface/datasets/issues/3477
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3477/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3477/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3477/events
html_url: https://github.com/huggingface/datasets/pull/3477
id: 1,087,850,253
node_id: PR_kwDODunzps4wPKPX
number: 3,477
title: Use `iter_files` instead of `str(Path(...))` in image dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,276,815,000
updated_at: 1,640,347,820,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body:
Use `iter_files` in the `beans` and the `cats_vs_dogs` dataset scripts, as suggested by @albertvillanova.

Additional changes:
* Fix `iter_files` in `MockDownloadManager` (see this [CI error](https://app.circleci.com/pipelines/github/huggingface/datasets/9247/workflows/2657ff8a-b531-4fd9-a9fc-6541a72e8d83/jobs/57028))
* Add support for `os.path.isdir` and `os.path.isfile` in streaming (`os.path.isfile` is needed in `StreamingDownloadManager`'s `iter_files` to make `cats_vs_dogs` streamable)

TODO:
- [ ] add tests for `xisdir` and `xisfile`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3477/timeline
performed_via_github_app: null
draft: true
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3477", "html_url": "https://github.com/huggingface/datasets/pull/3477", "diff_url": "https://github.com/huggingface/datasets/pull/3477.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3477.patch", "merged_at": null }
is_pull_request: true
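A simplified, standalone model of what an `iter_files` helper does: recurse into directories and yield file paths (this is a sketch, not the actual `datasets` implementation):

```python
import os

def iter_files(paths):
    """Yield file paths; directories are walked recursively (simplified sketch)."""
    for path in paths:
        if os.path.isdir(path):
            for root, _, files in os.walk(path):
                for name in sorted(files):
                    yield os.path.join(root, name)
        else:
            yield path

print(list(iter_files(["."]))[:3])  # first few files under the current directory
```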
url: https://api.github.com/repos/huggingface/datasets/issues/3476
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3476/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3476/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3476/events
html_url: https://github.com/huggingface/datasets/pull/3476
id: 1,087,622,872
node_id: PR_kwDODunzps4wOZ8a
number: 3,476
title: Extend support for streaming datasets that use ET.parse
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,258,326,000
updated_at: 1,640,273,670,000
closed_at: 1,640,273,670,000
author_association: MEMBER
active_lock_reason: null
body:
This PR extends the support in streaming mode for datasets that use `ET.parse`, by patching the function.

This PR adds support for streaming mode to these datasets:
1. ami
1. assin
1. assin2
1. counter
1. enriched_web_nlg
1. europarl_bilingual
1. hyperpartisan_news_detection
1. polsum
1. qa4mre
1. quail
1. ted_talks_iwslt
1. udhr
1. web_nlg
1. winograd_wsc

CC: @severo
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3476/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3476/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3476", "html_url": "https://github.com/huggingface/datasets/pull/3476", "diff_url": "https://github.com/huggingface/datasets/pull/3476.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3476.patch", "merged_at": 1640273670000 }
is_pull_request: true
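One reason such a patch is possible: `ET.parse` accepts any file-like object, not just a local path, so a patched version can hand it a stream opened from a URL. A minimal demonstration with an in-memory stream:

```python
import io
import xml.etree.ElementTree as ET

stream = io.BytesIO(b"<talks><talk id='1'>hello</talk></talks>")
root = ET.parse(stream).getroot()      # parses a file-like object; no local path needed
print(root.find("talk").attrib["id"])  # 1
```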
url: https://api.github.com/repos/huggingface/datasets/issues/3475
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3475/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3475/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3475/events
html_url: https://github.com/huggingface/datasets/issues/3475
id: 1,087,352,041
node_id: I_kwDODunzps5Az6zp
number: 3,475
title: The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
{ "login": "puzzler10", "id": 17426779, "node_id": "MDQ6VXNlcjE3NDI2Nzc5", "avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4", "gravatar_id": "", "url": "https://api.github.com/users/puzzler10", "html_url": "https://github.com/puzzler10", "followers_url": "https://api.github.com/users/puzzler10/followers", "following_url": "https://api.github.com/users/puzzler10/following{/other_user}", "gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}", "starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions", "organizations_url": "https://api.github.com/users/puzzler10/orgs", "repos_url": "https://api.github.com/users/puzzler10/repos", "events_url": "https://api.github.com/users/puzzler10/events{/privacy}", "received_events_url": "https://api.github.com/users/puzzler10/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
- Hi @puzzler10, thanks for reporting.

  Please note this dataset is not hosted on the Hugging Face Hub. See:
  https://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42

  If there are issues with the source data of a dataset, you should contact the data owners/creators instead. On the homepage associated with this dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/), you can find the authors of the dataset and how to contact them:

  > If you have any questions or comments regarding this site, please send email to Bo Pang or Lillian Lee.

  P.S.: Please also note that the example you gave of a non-English review is in Portuguese (not Spanish). ;)
- Maybe best to just put a quick sentence in the dataset description that highlights this?
created_at: 1,640,231,803,000
updated_at: 1,640,305,383,000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
## Describe the bug
See title. I don't think this is intentional and they probably should be removed. If they stay, the dataset description should at least be updated to make it clear to the user.

## Steps to reproduce the bug
Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There are others too (e.g. index 2888), but those two are easy to find like that.

## Expected results
English movie reviews only.

## Actual results
Example of a Spanish movie review (4173):

> "É uma pena que , mais tarde , o próprio filme abandone o tom de paródia e passe a utilizar os mesmos clichês que havia satirizado "
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3475/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
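To inspect the reported rows directly (the indices are taken from the issue body):

```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
for idx in (2888, 4166, 4173):  # indices reported in the issue
    print(idx, ds[idx]["text"][:80])
```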
url: https://api.github.com/repos/huggingface/datasets/issues/3474
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3474/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3474/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3474/events
html_url: https://github.com/huggingface/datasets/pull/3474
id: 1,086,945,384
node_id: PR_kwDODunzps4wMMt0
number: 3,474
title: Decode images when iterating
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,187,289,000
updated_at: 1,640,188,452,000
closed_at: null
author_association: MEMBER
active_lock_reason: null
body:
If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned. This PR enables image decoding in `Dataset.__iter__`.

Close https://github.com/huggingface/datasets/issues/3473
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3474/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3474", "html_url": "https://github.com/huggingface/datasets/pull/3474", "diff_url": "https://github.com/huggingface/datasets/pull/3474.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3474.patch", "merged_at": null }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/3473
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3473/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3473/events
html_url: https://github.com/huggingface/datasets/issues/3473
id: 1,086,937,610
node_id: I_kwDODunzps5AyVoK
number: 3,473
title: Iterating over a vision dataset doesn't decode the images
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of the dataset. We wanted to decode only if the "audio" field (for the Audio feature) was accessed.
- > I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the "audio" field (for Audio feature) was accessed

  https://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. After we merge the linked PR, the user can easily disable it again.
- @mariosasko I wonder why there is no issue in the `Audio` feature with decoding disabled in `__iter__`, whereas there is in the `Image` feature.

  Enabling decoding in `__iter__` will make the Audio regression tests fail: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true
  ```
  =========================== short test summary info ============================
  FAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_not_decoded
  FAILED tests/features/test_audio.py::test_dataset_with_audio_feature_map_is_decoded
  ========================= 2 failed, 15 passed in 8.37s =========================
  ```
- Please also note that the regression tests were implemented in accordance with the specifications:
  - when doing a `map` (which calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).
- > I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature.

  @albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.
- Therefore, this is not an issue, neither for the Audio nor the Image feature.

  Could you please elaborate more on the expected use case? @lhoestq @NielsRogge

  The expected use cases (in accordance with the specs: see #2324):
  - decoding should be enabled when accessing a specific item (`__getitem__`)
  - decoding should be disabled while iterating (`__iter__`) to allow preprocessing of non-audio/image features (like label or text, for example) using `.map`
  - decoding should be enabled in a `.map` only if the `.map` function accesses the audio/image feature (implemented using `LazyDict`)
- For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, i.e. I did this:

  `batch = next(iter(train_ds))`

  whereas I actually wanted to do

  `batch = next(iter(train_dataloader))`

  and then it turned out that in the first case, the image was a string of bytes rather than a Pillow image, hence Quentin opened an issue.
- Thanks @NielsRogge for the context. So IMO everything is working as expected. I'm closing this issue. Feel free to reopen it again if further changes of the specs should be addressed.
created_at: 1,640,186,792,000
updated_at: 1,640,272,971,000
closed_at: 1,640,272,917,000
author_association: MEMBER
active_lock_reason: null
body:
## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.

## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL

mnist = load_dataset("mnist", split="train")

first_image = mnist[0]["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile)  # passes

first_image = next(iter(mnist))["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile)  # fails
```

## Expected results
The image should be decoded, as a PIL Image.

## Actual results
We get a dictionary:
```
{'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None}
```

## Environment info
- `datasets` version: 1.17.1.dev0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyArrow version: 6.0.0

The bug also exists in 1.17.0.

## Investigation
I think the issue is that decoding is disabled in `__iter__`:
https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661

Do you remember why it was disabled in the first place @albertvillanova?

Also cc @mariosasko @NielsRogge
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3473/timeline
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: false
url: https://api.github.com/repos/huggingface/datasets/issues/3472
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3472/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3472/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3472/events
html_url: https://github.com/huggingface/datasets/pull/3472
id: 1,086,908,508
node_id: PR_kwDODunzps4wMEwA
number: 3,472
title: Fix `str(Path(...))` conversion in streaming on Linux
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,185,563,000
updated_at: 1,640,191,973,000
closed_at: 1,640,191,972,000
author_association: CONTRIBUTOR
active_lock_reason: null
body:
Fix `str(Path(...))` conversion in streaming on Linux.

This should fix the streaming of the `beans` and `cats_vs_dogs` datasets.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3472/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3472/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3472", "html_url": "https://github.com/huggingface/datasets/pull/3472", "diff_url": "https://github.com/huggingface/datasets/pull/3472.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3472.patch", "merged_at": 1640191972000 }
is_pull_request: true
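A hedged illustration of why `str(Path(...))` is problematic for URLs on Linux: `pathlib` collapses the double slash after the scheme, so round-tripping a URL through `Path` corrupts it (the URL below is illustrative).

```python
from pathlib import Path

url = "https://example.com/data/train.zip"
print(str(Path(url)))  # https:/example.com/data/train.zip  (the '//' is collapsed)
```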
url: https://api.github.com/repos/huggingface/datasets/issues/3471
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3471/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3471/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3471/events
html_url: https://github.com/huggingface/datasets/pull/3471
id: 1,086,588,074
node_id: PR_kwDODunzps4wLAk6
number: 3,471
title: Fix Tashkeela dataset to yield stripped text
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,162,490,000
updated_at: 1,640,167,928,000
closed_at: 1,640,167,927,000
author_association: MEMBER
active_lock_reason: null
body:
This PR:
- Yields stripped text
- Fixes the path for Windows
- Adds license
- Adds more info in the dataset card

Close bigscience-workshop/data_tooling#279
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3471/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3471/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3471", "html_url": "https://github.com/huggingface/datasets/pull/3471", "diff_url": "https://github.com/huggingface/datasets/pull/3471.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3471.patch", "merged_at": 1640167927000 }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/3470
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3470/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3470/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3470/events
html_url: https://github.com/huggingface/datasets/pull/3470
id: 1,086,049,888
node_id: PR_kwDODunzps4wJO8t
number: 3,470
title: Fix rendering of docs
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,640,107,021,000
updated_at: 1,640,165,027,000
closed_at: 1,640,165,027,000
author_association: MEMBER
active_lock_reason: null
body: Minor fix in docs. Currently, the `ClassLabel` docstring rendering is not right.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3470/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3470/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3470", "html_url": "https://github.com/huggingface/datasets/pull/3470", "diff_url": "https://github.com/huggingface/datasets/pull/3470.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3470.patch", "merged_at": 1640165027000 }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/3469
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3469/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3469/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3469/events
html_url: https://github.com/huggingface/datasets/pull/3469
id: 1,085,882,664
node_id: PR_kwDODunzps4wIrOV
number: 3,469
title: Fix METEOR missing NLTK's omw-1.4
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- I also modified the doctest call to raise the exception that doctest may catch, instead of `doctest.UnexpectedException`. This will make debugging easier if it happens again.
created_at: 1,640,096,351,000
updated_at: 1,640,098,348,000
closed_at: 1,640,098,168,000
author_association: MEMBER
active_lock_reason: null
body: NLTK 3.6.6 now requires `omw-1.4` to be downloaded for METEOR to work. This should fix the CI on master.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3469/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3469", "html_url": "https://github.com/huggingface/datasets/pull/3469", "diff_url": "https://github.com/huggingface/datasets/pull/3469.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3469.patch", "merged_at": 1640098168000 }
is_pull_request: true
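The fix amounts to fetching the extra NLTK resource before the metric runs; something along these lines (the exact call site inside the metric script is not shown here):

```python
import nltk

# omw-1.4 (Open Multilingual Wordnet) became a separate download in newer
# NLTK releases, and METEOR's synonym matching needs it alongside wordnet.
nltk.download("wordnet")
nltk.download("omw-1.4")
```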
url: https://api.github.com/repos/huggingface/datasets/issues/3468
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3468/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3468/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3468/events
html_url: https://github.com/huggingface/datasets/pull/3468
id: 1,085,871,301
node_id: PR_kwDODunzps4wIozO
number: 3,468
title: Add COCO dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments:
- The CI failures other than a missing dummy data file and missing fields in the card are unrelated to this PR.
- Thanks a lot for this great work and for fixing the TFDS-based script @mariosasko 🤗 Will generate the dummy dataset and write the model card tomorrow!
- @mariosasko I added the dataset card, I'm on the dummy data rn.
created_at: 1,640,095,670,000
updated_at: 1,640,183,236,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body:
This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection.

Some notes:
* the data exposed by TFDS is contained in the `2014`, `2015`, `2017` and `2017_panoptic_segmentation` configs here
* I've updated `encode_nested_example` for easier handling of missing values (cc @lhoestq @albertvillanova; will add tests if you are OK with the changes in `features.py`)
* this implementation should fix https://github.com/huggingface/datasets/pull/3377#issuecomment-985559427

TODOs:
- [x] dataset card
- [ ] dummy data

cc @merveenoyan

Closes #2526
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3468/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 2, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3468/timeline
performed_via_github_app: null
draft: true
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3468", "html_url": "https://github.com/huggingface/datasets/pull/3468", "diff_url": "https://github.com/huggingface/datasets/pull/3468.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3468.patch", "merged_at": null }
is_pull_request: true
url: https://api.github.com/repos/huggingface/datasets/issues/3467
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3467/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3467/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3467/events
html_url: https://github.com/huggingface/datasets/pull/3467
id: 1,085,870,665
node_id: PR_kwDODunzps4wIoqd
number: 3,467
title: Push dataset infos.json to Hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments:
- The change from `___` to `--` was allowed by https://github.com/huggingface/moon-landing/pull/1657
created_at: 1,640,095,633,000
updated_at: 1,640,106,010,000
closed_at: 1,640,106,009,000
author_association: MEMBER
active_lock_reason: null
body:
When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394). This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, which stores the feature types.

Other minor changes:
- renamed the `___` separator to `--`, since `--` is now disallowed in a name in the back-end.

I tested this feature with datasets like conll2003 that have feature types like `ClassLabel` that were previously lost.

Close https://github.com/huggingface/datasets/issues/3394

I would like to include this in today's release (though it's not mandatory), so feel free to comment/suggest changes.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3467/reactions", "total_count": 2, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/3467/timeline
performed_via_github_app: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/3467", "html_url": "https://github.com/huggingface/datasets/pull/3467", "diff_url": "https://github.com/huggingface/datasets/pull/3467.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3467.patch", "merged_at": 1640106009000 }
is_pull_request: true
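A sketch of the user-facing behavior this PR targets (the repo name is hypothetical): feature types such as `ClassLabel` should survive a round trip through the Hub.

```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
print(ds.features["ner_tags"])  # Sequence of ClassLabel: the type info at stake
# Hypothetical repo name; with this PR, dataset_infos.json is pushed alongside
# the data, so reloading from the Hub preserves the ClassLabel feature types.
ds.push_to_hub("my-username/conll2003-copy")
```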
url: https://api.github.com/repos/huggingface/datasets/issues/3466
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/3466/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/3466/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/3466/events
html_url: https://github.com/huggingface/datasets/pull/3466
id: 1,085,722,837
node_id: PR_kwDODunzps4wII3w
number: 3,466
title: Add CRASS dataset
{ "login": "apergo-ai", "id": 68908804, "node_id": "MDQ6VXNlcjY4OTA4ODA0", "avatar_url": "https://avatars.githubusercontent.com/u/68908804?v=4", "gravatar_id": "", "url": "https://api.github.com/users/apergo-ai", "html_url": "https://github.com/apergo-ai", "followers_url": "https://api.github.com/users/apergo-ai/followers", "following_url": "https://api.github.com/users/apergo-ai/following{/other_user}", "gists_url": "https://api.github.com/users/apergo-ai/gists{/gist_id}", "starred_url": "https://api.github.com/users/apergo-ai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/apergo-ai/subscriptions", "organizations_url": "https://api.github.com/users/apergo-ai/orgs", "repos_url": "https://api.github.com/users/apergo-ai/repos", "events_url": "https://api.github.com/users/apergo-ai/events{/privacy}", "received_events_url": "https://api.github.com/users/apergo-ai/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi Albert,\r\nThank you for your comments.\r\nI hope I have uploaded my local git repo to include the dummy files and style reworkings.\r\nAdded YAML in Readme as well.\r\n\r\nPlease check again.\r\n\r\nHope it works now :)" ]
1,640,085,442,000
1,640,271,540,000
null
NONE
null
Added the CRASS dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3466/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3466/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3466", "html_url": "https://github.com/huggingface/datasets/pull/3466", "diff_url": "https://github.com/huggingface/datasets/pull/3466.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3466.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3465/comments
https://api.github.com/repos/huggingface/datasets/issues/3465/events
https://github.com/huggingface/datasets/issues/3465
1,085,400,432
I_kwDODunzps5AseVw
3,465
Unable to load 'cnn_dailymail' dataset
{ "login": "talha1503", "id": 42352729, "node_id": "MDQ6VXNlcjQyMzUyNzI5", "avatar_url": "https://avatars.githubusercontent.com/u/42352729?v=4", "gravatar_id": "", "url": "https://api.github.com/users/talha1503", "html_url": "https://github.com/talha1503", "followers_url": "https://api.github.com/users/talha1503/followers", "following_url": "https://api.github.com/users/talha1503/following{/other_user}", "gists_url": "https://api.github.com/users/talha1503/gists{/gist_id}", "starred_url": "https://api.github.com/users/talha1503/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/talha1503/subscriptions", "organizations_url": "https://api.github.com/users/talha1503/orgs", "repos_url": "https://api.github.com/users/talha1503/repos", "events_url": "https://api.github.com/users/talha1503/events{/privacy}", "received_events_url": "https://api.github.com/users/talha1503/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" } ]
open
false
null
[]
null
[ "Hi @talha1503, thanks for reporting.\r\n\r\nIt seems there is an issue with one of the data files hosted at Google Drive:\r\n```\r\nGoogle Drive - Quota exceeded\r\n\r\nSorry, you can't view or download this file at this time.\r\n\r\nToo many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. If you still can't access a file after 24 hours, contact your domain administrator.\r\n```\r\n\r\nAs you probably know, Hugging Face does not host the data, and in this case the data owner decided to host their data at Google Drive, which has quota limits.\r\n\r\nIs there anything we could do, @lhoestq @mariosasko?", "This looks related to https://github.com/huggingface/datasets/issues/996" ]
1,640,057,541,000
1,640,097,343,000
null
NONE
null
## Describe the bug I wanted to load the cnn_dailymail dataset from huggingface datasets on Google Colab, but I get an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True) ``` ## Expected results Expected the 'cnn_dailymail' dataset to load. ## Actual results `NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3465/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3464/comments
https://api.github.com/repos/huggingface/datasets/issues/3464/events
https://github.com/huggingface/datasets/issues/3464
1,085,399,097
I_kwDODunzps5AseA5
3,464
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
{ "login": "koukoulala", "id": 30341159, "node_id": "MDQ6VXNlcjMwMzQxMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/30341159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koukoulala", "html_url": "https://github.com/koukoulala", "followers_url": "https://api.github.com/users/koukoulala/followers", "following_url": "https://api.github.com/users/koukoulala/following{/other_user}", "gists_url": "https://api.github.com/users/koukoulala/gists{/gist_id}", "starred_url": "https://api.github.com/users/koukoulala/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/koukoulala/subscriptions", "organizations_url": "https://api.github.com/users/koukoulala/orgs", "repos_url": "https://api.github.com/users/koukoulala/repos", "events_url": "https://api.github.com/users/koukoulala/events{/privacy}", "received_events_url": "https://api.github.com/users/koukoulala/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,640,057,341,000
1,640,057,341,000
null
NONE
null
## Describe the bug Using the latest datasets (datasets-1.16.1-py3-none-any.whl), I process my own multilingual dataset with the following code; the dataset has 306,000 rows in total and the max_length of each sentence is 256: ![image](https://user-images.githubusercontent.com/30341159/146865779-3d25d011-1f42-4026-9e1b-76f6e1d172e9.png) Then I get this error: ![image](https://user-images.githubusercontent.com/30341159/146865844-e60a404c-5f3a-403c-b2f1-acd943b5cdb8.png) I have seen the issues in #2134 and #2150, so I don't understand why the latest version still can't handle a big dataset. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux docker - Python version: 3.6
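For what it's worth, this `struct.error` usually means a serialized batch overflowed a 32-bit integer limit during writing; a common mitigation (a sketch under assumptions: `imdb` stands in for the multilingual dataset and `process` for the user's mapping function) is to shrink the batch sizes:

```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")  # placeholder dataset

def process(batch):
    # Placeholder for the user's tokenization/mapping function.
    return {"n_chars": [len(text) for text in batch["text"]]}

dataset = dataset.map(
    process,
    batched=True,
    batch_size=100,         # fewer examples per processed batch
    writer_batch_size=100,  # fewer rows per Arrow write, keeping each buffer small
)
```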
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3464/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3463/comments
https://api.github.com/repos/huggingface/datasets/issues/3463/events
https://github.com/huggingface/datasets/pull/3463
1,085,078,795
PR_kwDODunzps4wGB4P
3,463
Update swahili_news dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,024,420,000
1,640,067,843,000
1,640,067,842,000
MEMBER
null
Update dataset with latest version data files. Fix #3462. Close bigscience-workshop/data_tooling#107
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3463/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3463/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3463", "html_url": "https://github.com/huggingface/datasets/pull/3463", "diff_url": "https://github.com/huggingface/datasets/pull/3463.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3463.patch", "merged_at": 1640067841000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3462/comments
https://api.github.com/repos/huggingface/datasets/issues/3462/events
https://github.com/huggingface/datasets/issues/3462
1,085,049,661
I_kwDODunzps5ArIs9
3,462
Update swahili_news dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,640,022,241,000
1,640,067,842,000
1,640,067,841,000
MEMBER
null
Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203. ## Adding a Dataset - **Name:** swahili_news Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Related to: - bigscience-workshop/data_tooling#107
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3462/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3462/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3461/comments
https://api.github.com/repos/huggingface/datasets/issues/3461/events
https://github.com/huggingface/datasets/pull/3461
1,085,007,346
PR_kwDODunzps4wFzDP
3,461
Fix links in metrics description
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,640,019,379,000
1,640,020,492,000
1,640,020,491,000
MEMBER
null
Remove Markdown syntax for links in metrics description, as it is not properly rendered. Related to #3437.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3461/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3461", "html_url": "https://github.com/huggingface/datasets/pull/3461", "diff_url": "https://github.com/huggingface/datasets/pull/3461.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3461.patch", "merged_at": 1640020491000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3460/comments
https://api.github.com/repos/huggingface/datasets/issues/3460/events
https://github.com/huggingface/datasets/pull/3460
1,085,002,469
PR_kwDODunzps4wFyCf
3,460
Don't encode lists as strings when using `Value("string")`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,640,019,049,000
1,640,019,891,000
null
MEMBER
null
Following https://github.com/huggingface/datasets/pull/3456#event-5792250497, it looks like `datasets` can silently convert lists to strings using `str()` instead of raising an error. This PR fixes this and should fix the issue of WER showing incorrect values when the input format is not right.
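A sketch of the stricter check being proposed (a hypothetical helper for illustration, not the exact code in this PR): reject non-string values instead of coercing them with `str()`:

```python
def encode_string_example(obj):
    # Hypothetical strict encoder for Value("string"): only accept real strings
    # (or None for missing values) instead of silently calling str() on lists.
    if obj is not None and not isinstance(obj, str):
        raise TypeError(
            f"Expected a string for Value('string'), got {type(obj).__name__}: {obj!r}"
        )
    return obj

encode_string_example("hello it's nice")      # OK
# encode_string_example(["hello it's nice"])  # would raise TypeError
```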
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3460/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3460/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3460", "html_url": "https://github.com/huggingface/datasets/pull/3460", "diff_url": "https://github.com/huggingface/datasets/pull/3460.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3460.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3459/comments
https://api.github.com/repos/huggingface/datasets/issues/3459/events
https://github.com/huggingface/datasets/issues/3459
1,084,969,672
I_kwDODunzps5Aq1LI
3,459
dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
{ "login": "mmajurski", "id": 9354454, "node_id": "MDQ6VXNlcjkzNTQ0NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9354454?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mmajurski", "html_url": "https://github.com/mmajurski", "followers_url": "https://api.github.com/users/mmajurski/followers", "following_url": "https://api.github.com/users/mmajurski/following{/other_user}", "gists_url": "https://api.github.com/users/mmajurski/gists{/gist_id}", "starred_url": "https://api.github.com/users/mmajurski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mmajurski/subscriptions", "organizations_url": "https://api.github.com/users/mmajurski/orgs", "repos_url": "https://api.github.com/users/mmajurski/repos", "events_url": "https://api.github.com/users/mmajurski/events{/privacy}", "received_events_url": "https://api.github.com/users/mmajurski/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I think this is a duplicate of [#3190](https://github.com/huggingface/datasets/issues/3190)?", "Upgrading the datasets version as per #3190 fixes this bug. \r\nI'm Marking as closed." ]
1,640,017,009,000
1,640,018,097,000
1,640,018,097,000
NONE
null
## Describe the bug When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset. The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is. However, if you then use dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner. https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter Effectively, it looks like the original set of _indices is discarded and overwritten by the set created during the filter operation. I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices which reflects the map transformation applied to the starting _indices. ## Steps to reproduce the bug ```python dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print("initial 10 elements") print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) print("filtered 10 elements looking for label 0") print(dataset['label']) # -> [1, 1, 1, 1, 1, 1] ``` ## Actual results ``` $ python indices_bug.py initial 10 elements [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] filtered 10 elements looking for label 0 [1, 1, 1, 1, 1, 1] ``` This code block first shuffles the dataset (to get a mix of label 0 and label 1). Then it selects just the first 10 elements (the number of elements does not matter; 10 is just easy to visualize). The important part is that you select some subset of the dataset. Finally, a filter is applied to pull out just the elements with `label == 0`. The bug is that you cannot combine any dataset operation which sets dataset._indices with filter. In this case I have two: shuffle and select. If you just use a single dataset._indices operation (in this case shuffle), the bug still shows up. The shuffle sets dataset._indices, then filter uses those indices in the map, then overwrites dataset._indices with the filter results. ```python dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Expected results In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set. If you use dataset.filter with the base dataset (where dataset._indices has not been set), then the filter command works as expected. A possible workaround is sketched below, after the environment info. ## Environment info Here are the commands required to rebuild the conda environment from scratch. ``` # create a virtual environment conda create -n dataset_indices python=3.8 -y # activate the virtual environment conda activate dataset_indices # install huggingface datasets conda install datasets ``` <!-- You can run the command `datasets-cli env` and copy-and-paste its output below.
--> - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 3.0.0 ### Full Conda Environment ``` $ conda env export name: dasaset_indices channels: - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - abseil-cpp=20210324.2=h2531618_0 - aiohttp=3.8.1=py38h7f8727e_0 - aiosignal=1.2.0=pyhd3eb1b0_0 - arrow-cpp=3.0.0=py38h6b21186_4 - attrs=21.2.0=pyhd3eb1b0_0 - aws-c-common=0.4.57=he6710b0_1 - aws-c-event-stream=0.1.6=h2531618_5 - aws-checksums=0.1.9=he6710b0_0 - aws-sdk-cpp=1.8.185=hce553d0_0 - bcj-cffi=0.5.1=py38h295c915_0 - blas=1.0=mkl - boost-cpp=1.73.0=h27cfd23_11 - bottleneck=1.3.2=py38heb32a55_1 - brotli=1.0.9=he6710b0_2 - brotli-python=1.0.9=py38heb0550a_2 - brotlicffi=1.0.9.2=py38h295c915_0 - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.17.1=h27cfd23_0 - ca-certificates=2021.10.26=h06a4308_2 - certifi=2021.10.8=py38h06a4308_0 - cffi=1.14.6=py38h400218f_0 - conllu=4.4.1=pyhd3eb1b0_0 - cryptography=36.0.0=py38h9ce1e76_0 - dataclasses=0.8=pyh6d0b6a4_7 - dill=0.3.4=pyhd3eb1b0_0 - double-conversion=3.1.5=he6710b0_1 - et_xmlfile=1.1.0=py38h06a4308_0 - filelock=3.4.0=pyhd3eb1b0_0 - frozenlist=1.2.0=py38h7f8727e_0 - gflags=2.2.2=he6710b0_0 - glog=0.5.0=h2531618_0 - gmp=6.2.1=h2531618_2 - grpc-cpp=1.39.0=hae934f6_5 - huggingface_hub=0.0.17=pyhd3eb1b0_0 - icu=58.2=he6710b0_3 - idna=3.3=pyhd3eb1b0_0 - importlib-metadata=4.8.2=py38h06a4308_0 - importlib_metadata=4.8.2=hd3eb1b0_0 - intel-openmp=2021.4.0=h06a4308_3561 - krb5=1.19.2=hac12032_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libboost=1.73.0=h3ff78a5_11 - libcurl=7.80.0=h0b77cf5_0 - libedit=3.1.20210910=h7f8727e_0 - libev=4.33=h7f8727e_1 - libevent=2.1.8=h1ba5d50_1 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libnghttp2=1.46.0=hce63b2e_0 - libprotobuf=3.17.2=h4ff587b_1 - libssh2=1.9.0=h1ba5d50_1 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libthrift=0.14.2=hcc01f38_0 - libxml2=2.9.12=h03d6c58_0 - libxslt=1.1.34=hc22bd24_0 - lxml=4.6.3=py38h9120a33_0 - lz4-c=1.9.3=h295c915_1 - mkl=2021.4.0=h06a4308_640 - mkl-service=2.4.0=py38h7f8727e_0 - mkl_fft=1.3.1=py38hd3c417c_0 - mkl_random=1.2.2=py38h51133e4_0 - multiprocess=0.70.12.2=py38h7f8727e_0 - multivolumefile=0.2.3=pyhd3eb1b0_0 - ncurses=6.3=h7f8727e_2 - numexpr=2.7.3=py38h22e1b3c_1 - numpy=1.21.2=py38h20f2e39_0 - numpy-base=1.21.2=py38h79a1101_0 - openpyxl=3.0.9=pyhd3eb1b0_0 - openssl=1.1.1l=h7f8727e_0 - orc=1.6.9=ha97a36c_3 - packaging=21.3=pyhd3eb1b0_0 - pip=21.2.4=py38h06a4308_0 - py7zr=0.16.1=pyhd3eb1b0_1 - pycparser=2.21=pyhd3eb1b0_0 - pycryptodomex=3.10.1=py38h27cfd23_1 - pyopenssl=21.0.0=pyhd3eb1b0_1 - pyparsing=3.0.4=pyhd3eb1b0_0 - pyppmd=0.16.1=py38h295c915_0 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.12=h12debd9_0 - python-dateutil=2.8.2=pyhd3eb1b0_0 - python-xxhash=2.0.2=py38h7f8727e_0 - pyzstd=0.14.4=py38h7f8727e_3 - re2=2020.11.01=h2531618_1 - readline=8.1=h27cfd23_0 - requests=2.26.0=pyhd3eb1b0_0 - setuptools=58.0.4=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - snappy=1.1.8=he6710b0_0 - sqlite=3.36.0=hc218d9a_0 - texttable=1.6.4=pyhd3eb1b0_0 - tk=8.6.11=h1ccaba5_0 - typing_extensions=3.10.0.2=pyh06a4308_0 - uriparser=0.9.3=he6710b0_1 - utf8proc=2.6.1=h27cfd23_0 - wheel=0.37.0=pyhd3eb1b0_1 - xxhash=0.8.0=h7f8727e_3 - xz=5.2.5=h7b6447c_0 - zipp=3.6.0=pyhd3eb1b0_0 - zlib=1.2.11=h7f8727e_4 - zstd=1.4.9=haebb681_0 - pip: - async-timeout==4.0.2 - charset-normalizer==2.0.9 - datasets==1.16.1 - fsspec==2021.11.1 - huggingface-hub==0.2.1 - 
multidict==5.2.0 - pandas==1.3.5 - pyarrow==6.0.1 - pytz==2021.3 - pyyaml==6.0 - tqdm==4.62.3 - typing-extensions==4.0.1 - urllib3==1.26.7 - yarl==1.7.2 ```
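Until the fix is picked up, one possible workaround is sketched below: `flatten_indices()` materializes the current shuffled/selected view into a new table, so the subsequent `filter` starts from a clean slate (assuming the extra copy is acceptable):

```python
from datasets import load_dataset

dataset = load_dataset("imdb", split="train", keep_in_memory=True)
dataset = dataset.shuffle(keep_in_memory=True)
dataset = dataset.select(range(0, 10), keep_in_memory=True)

# Materialize the view before filtering, so filter() cannot clash
# with the previously set dataset._indices.
dataset = dataset.flatten_indices(keep_in_memory=True)
dataset = dataset.filter(lambda x: x["label"] == 0, keep_in_memory=True)
print(dataset["label"])  # only label-0 rows from the selected subset
```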
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3459/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3459/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3458/comments
https://api.github.com/repos/huggingface/datasets/issues/3458/events
https://github.com/huggingface/datasets/pull/3458
1,084,926,025
PR_kwDODunzps4wFiRb
3,458
Fix duplicated tag in wikicorpus dataset card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "CI is failing just because of empty sections - merging" ]
1,640,014,456,000
1,640,016,205,000
1,640,016,204,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3458/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3458", "html_url": "https://github.com/huggingface/datasets/pull/3458", "diff_url": "https://github.com/huggingface/datasets/pull/3458.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3458.patch", "merged_at": 1640016204000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3457/comments
https://api.github.com/repos/huggingface/datasets/issues/3457/events
https://github.com/huggingface/datasets/issues/3457
1,084,862,121
I_kwDODunzps5Aqa6p
3,457
Add CMU Graphics Lab Motion Capture dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
null
[]
null
[]
1,640,010,879,000
1,640,013,736,000
null
NONE
null
## Adding a Dataset - **Name:** CMU Graphics Lab Motion Capture database - **Description:** The database contains free motions which you can download and use. - **Data:** http://mocap.cs.cmu.edu/ - **Motivation:** Nice motion capture dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3457/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3456/comments
https://api.github.com/repos/huggingface/datasets/issues/3456/events
https://github.com/huggingface/datasets/pull/3456
1,084,687,973
PR_kwDODunzps4wEwXz
3,456
[WER] Better error message for wer
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi ! I don't think this would solve this issue.\r\nCurrently it looks like there's a bug that converts the list `[\"hello it's nice\"]` to a string `'[\"hello it's nice\"]'` since this is what the metric expects as input. The conversion is done before the data are passed to `_compute()`.\r\n\r\nThis is `Value(\"string\").encode_example` that is called to do the conversion. Since `str()` encoding is too permissive we should consider raising an error if the example is not a string (even though it can be converted to string). ", "> called\r\n\r\nAh yeah you're right", "I just opened https://github.com/huggingface/datasets/pull/3460 to fix that. It now raises an error instead of computing the wrong WER", "Thank you - that should be good enough!" ]
1,640,000,320,000
1,640,019,217,000
1,640,019,216,000
MEMBER
null
Currently we have the following problem when using the WER. When the input format to the WER metric is wrong, instead of throwing an error message, an incorrect word-error-rate is computed. E.g. when doing the following: ```python from datasets import load_metric wer = load_metric("wer") target_str = ["hello this is nice", "hello the weather is bloomy"] pred_str = [["hello it's nice"], ["hello it's the weather"]] print("Wrong:", wer.compute(predictions=pred_str, references=target_str)) print("Correct", wer.compute(predictions=[x[0] for x in pred_str], references=target_str)) ``` We get: ``` Wrong: 1.0 Correct 0.5555555555555556 ``` meaning that we silently get a word-error rate for an incorrectly passed input format. We should raise an error here instead, so that people don't spend hours debugging a model when the real problem is that the evaluation metric was fed the wrong input format.
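A minimal sketch of the kind of guard this PR is after (a hypothetical helper for illustration, not the metric's actual implementation):

```python
def check_wer_inputs(predictions, references):
    # Hypothetical guard: both arguments must be flat lists of strings.
    for name, seq in (("predictions", predictions), ("references", references)):
        for item in seq:
            if not isinstance(item, str):
                raise ValueError(
                    f"`{name}` must be a list of strings, "
                    f"but found an element of type {type(item).__name__}: {item!r}"
                )

check_wer_inputs(["hello it's nice"], ["hello this is nice"])      # OK
# check_wer_inputs([["hello it's nice"]], ["hello this is nice"])  # raises ValueError
```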
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3456/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3456/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3456", "html_url": "https://github.com/huggingface/datasets/pull/3456", "diff_url": "https://github.com/huggingface/datasets/pull/3456.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3456.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/3455
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3455/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3455/comments
https://api.github.com/repos/huggingface/datasets/issues/3455/events
https://github.com/huggingface/datasets/issues/3455
1,084,599,650
I_kwDODunzps5Apa1i
3,455
Easier information editing
{ "login": "borgr", "id": 6416600, "node_id": "MDQ6VXNlcjY0MTY2MDA=", "avatar_url": "https://avatars.githubusercontent.com/u/6416600?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borgr", "html_url": "https://github.com/borgr", "followers_url": "https://api.github.com/users/borgr/followers", "following_url": "https://api.github.com/users/borgr/following{/other_user}", "gists_url": "https://api.github.com/users/borgr/gists{/gist_id}", "starred_url": "https://api.github.com/users/borgr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borgr/subscriptions", "organizations_url": "https://api.github.com/users/borgr/orgs", "repos_url": "https://api.github.com/users/borgr/repos", "events_url": "https://api.github.com/users/borgr/events{/privacy}", "received_events_url": "https://api.github.com/users/borgr/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Hi ! I guess you are talking about the dataset cards that are in this repository on github ?\r\n\r\nI think github allows to submit a PR even for 1 line though the `Edit file` button on the page of the dataset card.\r\n\r\nMaybe let's mention this in `CONTRIBUTING.md` ?" ]
1,639,995,043,000
1,640,011,739,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** It requires a lot of effort to improve a datasheet. **Describe the solution you'd like** A UI, or at least a link to the place where the code that needs to be edited lives, plus an easy way to edit that code directly from the site (without cloning, branching, makefiles, etc.). **Describe alternatives you've considered** The current UX requires the full 8-step contribution process even when one just wishes to change a line, fix a typo, etc.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3455/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3455/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3454/comments
https://api.github.com/repos/huggingface/datasets/issues/3454/events
https://github.com/huggingface/datasets/pull/3454
1,084,519,107
PR_kwDODunzps4wENam
3,454
Fix iter_archive generator
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,639,990,215,000
1,639,994,700,000
1,639,994,699,000
MEMBER
null
This PR: - Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs - Fixes bugs in `iter_archive` introduced in: - #3443 Fix #3453.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3454/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3454/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3454", "html_url": "https://github.com/huggingface/datasets/pull/3454", "diff_url": "https://github.com/huggingface/datasets/pull/3454.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3454.patch", "merged_at": 1639994699000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3453/comments
https://api.github.com/repos/huggingface/datasets/issues/3453/events
https://github.com/huggingface/datasets/issues/3453
1,084,515,911
I_kwDODunzps5ApGZH
3,453
ValueError while iter_archive
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,639,989,978,000
1,639,994,699,000
1,639,994,699,000
MEMBER
null
## Describe the bug After the merge of: - #3443 the method `iter_archive` throws a ValueError: ``` ValueError: read of closed file ``` ## Steps to reproduce the bug ```python for path, file in dl_manager.iter_archive(archive_path): pass ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3453/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3453/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3452/comments
https://api.github.com/repos/huggingface/datasets/issues/3452/events
https://github.com/huggingface/datasets/issues/3452
1,083,803,178
I_kwDODunzps5AmYYq
3,452
why the stratify option is omitted from test_train_split function?
{ "login": "j-sieger", "id": 9985334, "node_id": "MDQ6VXNlcjk5ODUzMzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9985334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/j-sieger", "html_url": "https://github.com/j-sieger", "followers_url": "https://api.github.com/users/j-sieger/followers", "following_url": "https://api.github.com/users/j-sieger/following{/other_user}", "gists_url": "https://api.github.com/users/j-sieger/gists{/gist_id}", "starred_url": "https://api.github.com/users/j-sieger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/j-sieger/subscriptions", "organizations_url": "https://api.github.com/users/j-sieger/orgs", "repos_url": "https://api.github.com/users/j-sieger/repos", "events_url": "https://api.github.com/users/j-sieger/events{/privacy}", "received_events_url": "https://api.github.com/users/j-sieger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,639,823,867,000
1,639,823,867,000
null
NONE
null
Why is the stratify option omitted from the train_test_split function? Is there any other way to implement stratification while splitting the dataset? It is an important point to consider when splitting a dataset.
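In the meantime, one way to approximate stratification is sketched below (an assumption-laden workaround, not a library feature): compute stratified index splits with scikit-learn, then `select` them. The `imdb` dataset and its `label` column stand in for whatever column holds the class to stratify on.

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

dataset = load_dataset("imdb", split="train")

# Stratify on the (assumed) "label" column by splitting indices, not rows.
train_idx, test_idx = train_test_split(
    list(range(len(dataset))),
    test_size=0.2,
    stratify=dataset["label"],
    random_state=42,
)
train_ds = dataset.select(train_idx)
test_ds = dataset.select(test_idx)
```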
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3452/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3452/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3451/comments
https://api.github.com/repos/huggingface/datasets/issues/3451/events
https://github.com/huggingface/datasets/pull/3451
1,083,459,137
PR_kwDODunzps4wA5LP
3,451
[Staging] Update dataset repos automatically on the Hub
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "do keep us updated on how it's going in staging! cc @SBrandeis ", "Sure ! For now it works smoothly. We'll also do a new release today.\r\n\r\nI can send you some repos to explore on staging, in case you want to see how they look like after being updated.\r\nFor example [swahili_news](https://moon-staging.huggingface.co/datasets/swahili_news/tree/main)" ]
1,639,761,131,000
1,640,082,346,000
1,640,009,391,000
MEMBER
null
Let's have a script that updates the dataset repositories on staging for now. This way we can make sure it works fine before going to prod. Related to https://github.com/huggingface/datasets/issues/3341 The script runs on each commit on `master`. It checks the datasets that were changed, and it pushes the changes to the corresponding repositories on the Hub. If there's a new dataset, then a new repository is created. If the commit is a new release of `datasets`, it also pushes the tag to all the repositories.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3451/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3451/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3451", "html_url": "https://github.com/huggingface/datasets/pull/3451", "diff_url": "https://github.com/huggingface/datasets/pull/3451.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3451.patch", "merged_at": 1640009391000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3450/comments
https://api.github.com/repos/huggingface/datasets/issues/3450/events
https://github.com/huggingface/datasets/issues/3450
1,083,450,158
I_kwDODunzps5AlCMu
3,450
Unexpected behavior doing Split + Filter
{ "login": "jbrachat", "id": 26432605, "node_id": "MDQ6VXNlcjI2NDMyNjA1", "avatar_url": "https://avatars.githubusercontent.com/u/26432605?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbrachat", "html_url": "https://github.com/jbrachat", "followers_url": "https://api.github.com/users/jbrachat/followers", "following_url": "https://api.github.com/users/jbrachat/following{/other_user}", "gists_url": "https://api.github.com/users/jbrachat/gists{/gist_id}", "starred_url": "https://api.github.com/users/jbrachat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbrachat/subscriptions", "organizations_url": "https://api.github.com/users/jbrachat/orgs", "repos_url": "https://api.github.com/users/jbrachat/repos", "events_url": "https://api.github.com/users/jbrachat/events{/privacy}", "received_events_url": "https://api.github.com/users/jbrachat/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! This is an issue with `datasets` 1.12. Sorry for the inconvenience. Can you update to `>=1.13` ?\r\nsee https://github.com/huggingface/datasets/issues/3190\r\n\r\nMaybe we should also backport the bug fix to `1.12` (in a new version `1.12.2`)" ]
1,639,760,439,000
1,640,011,897,000
null
NONE
null
## Describe the bug I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on a dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter'). ## Steps to reproduce the bug ``` from datasets import Dataset import pandas as pd dic = {'x': [1,2,3,4,5,6,7,8,9], 'y':['q','w','e','r','t','y','u','i','o']} df = pd.DataFrame.from_dict(dic) dataset = Dataset.from_pandas(df) split_dataset = dataset.train_test_split(test_size=0.5, shuffle=False, seed=42) train_dataset = split_dataset["train"] eval_dataset = split_dataset["test"] eval_dataset_2 = eval_dataset.filter(lambda example: example['x'] % 2 == 0) print(eval_dataset['x']) print(eval_dataset_2['x']) ``` One observes that elements in eval_dataset_2 are actually coming from the training dataset... ## Expected results The expected result is that the filtered eval dataset only contains elements from the original eval dataset. ## Actual results Elements from the training dataset appear in the filtered eval dataset (see the output above). ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows 10 - Python version: 3.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3450/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3450/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3449/comments
https://api.github.com/repos/huggingface/datasets/issues/3449/events
https://github.com/huggingface/datasets/issues/3449
1,083,373,018
I_kwDODunzps5AkvXa
3,449
Add `__add__()`, `__iadd__()` and similar to `Dataset` class
{ "login": "sgraaf", "id": 8904453, "node_id": "MDQ6VXNlcjg5MDQ0NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgraaf", "html_url": "https://github.com/sgraaf", "followers_url": "https://api.github.com/users/sgraaf/followers", "following_url": "https://api.github.com/users/sgraaf/following{/other_user}", "gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions", "organizations_url": "https://api.github.com/users/sgraaf/orgs", "repos_url": "https://api.github.com/users/sgraaf/repos", "events_url": "https://api.github.com/users/sgraaf/events{/privacy}", "received_events_url": "https://api.github.com/users/sgraaf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "I was going through the codebase, and I believe the implementation of __add__() and __iadd__() will be similar to concatenate_datasets() after the elimination of code for arguments other than the list of datasets (info, split, axis). \r\n(Assuming elimination of axis means concatenating over axis 1.)" ]
1,639,754,951,000
1,640,343,970,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** No. **Describe the solution you'd like** I would like to be able to concatenate datasets as follows: ```python >>> dataset["train"] += dataset["validation"] ``` ... instead of using `concatenate_datasets()`: ```python >>> dataset["train"] = concatenate_datasets([dataset["train"], dataset["validation"]]) >>> del dataset["validation"] ``` **Describe alternatives you've considered** Well, I have considered `concatenate_datasets()` 😀 **Additional context** N.a.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3449/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3449/timeline
null
null
null
false
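For illustration, a minimal sketch of the sugar requested in issue 3449 above, expressed on top of the existing `concatenate_datasets()` helper; the operator assignment at the end is hypothetical and not part of the library API.

```python
from datasets import Dataset, concatenate_datasets

train = Dataset.from_dict({"x": [1, 2, 3]})
validation = Dataset.from_dict({"x": [4, 5]})

# Today's spelling of the requested `train += validation`:
train = concatenate_datasets([train, validation])
print(train["x"])  # [1, 2, 3, 4, 5]

# One possible (hypothetical) implementation of the requested operator;
# `a += b` would then rebind `a` to the concatenated dataset.
Dataset.__add__ = lambda self, other: concatenate_datasets([self, other])
merged = train + validation
```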
https://api.github.com/repos/huggingface/datasets/issues/3448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3448/comments
https://api.github.com/repos/huggingface/datasets/issues/3448/events
https://github.com/huggingface/datasets/issues/3448
1,083,231,080
I_kwDODunzps5AkMto
3,448
JSONDecodeError with HuggingFace dataset viewer
{ "login": "kathrynchapman", "id": 57716109, "node_id": "MDQ6VXNlcjU3NzE2MTA5", "avatar_url": "https://avatars.githubusercontent.com/u/57716109?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kathrynchapman", "html_url": "https://github.com/kathrynchapman", "followers_url": "https://api.github.com/users/kathrynchapman/followers", "following_url": "https://api.github.com/users/kathrynchapman/following{/other_user}", "gists_url": "https://api.github.com/users/kathrynchapman/gists{/gist_id}", "starred_url": "https://api.github.com/users/kathrynchapman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kathrynchapman/subscriptions", "organizations_url": "https://api.github.com/users/kathrynchapman/orgs", "repos_url": "https://api.github.com/users/kathrynchapman/repos", "events_url": "https://api.github.com/users/kathrynchapman/events{/privacy}", "received_events_url": "https://api.github.com/users/kathrynchapman/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[ "Hi ! I think the issue comes from the dataset_infos.json file: it has the \"flat\" field twice.\r\n\r\nCan you try deleting this file and regenerating it please ?", "Thanks! That fixed that, but now I am getting:\r\nServer Error\r\nStatus code: 400\r\nException: KeyError\r\nMessage: 'feature'\r\n\r\nI checked the dataset_infos.json and pubmed_neg.py script, I don't use 'feature' anywhere as a key. Is the dataset viewer expecting that I do?", "It seems that the `feature` key is missing from some feature type definition in your dataset_infos.json:\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n\t\t\t}\r\n```\r\nThey should be\r\n```json\r\n\t\t\t\"tokens\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\"\r\n \"feature\": {\"dtype\": \"string\", \"id\": null, \"_type\": \"Value\"}\r\n\t\t\t},\r\n\t\t\t\"tags\": {\r\n\t\t\t\t\"dtype\": \"list\",\r\n\t\t\t\t\"id\": null,\r\n\t\t\t\t\"_type\": \"Sequence\",\r\n \"feature\": {\"num_classes\": 5, \"names\": [\"-\", \"S\", \"H\", \"N\", \"C\"], \"names_file\": null, \"id\": null, \"_type\": \"ClassLabel\"}\r\n\t\t\t}\r\n```\r\n\r\nNote that you can generate the dataset_infos.json automatically to avoid mistakes:\r\n```bash\r\ndatasets-cli test ./path/to/dataset --save_infos\r\n```" ]
1,639,745,561,000
1,640,008,852,000
null
NONE
null
## Dataset viewer issue for 'pubmed_neg' **Link:** https://huggingface.co/datasets/IGESML/pubmed_neg I am getting the error: Status code: 400 Exception: JSONDecodeError Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202) I have checked all files - I am not using single quotes anywhere. Not sure what is causing this issue. Am I the one who added this dataset ? Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3448/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3447/comments
https://api.github.com/repos/huggingface/datasets/issues/3447/events
https://github.com/huggingface/datasets/issues/3447
1,082,539,790
I_kwDODunzps5Ahj8O
3,447
HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
{ "login": "dunalduck0", "id": 51274745, "node_id": "MDQ6VXNlcjUxMjc0NzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/51274745?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dunalduck0", "html_url": "https://github.com/dunalduck0", "followers_url": "https://api.github.com/users/dunalduck0/followers", "following_url": "https://api.github.com/users/dunalduck0/following{/other_user}", "gists_url": "https://api.github.com/users/dunalduck0/gists{/gist_id}", "starred_url": "https://api.github.com/users/dunalduck0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dunalduck0/subscriptions", "organizations_url": "https://api.github.com/users/dunalduck0/orgs", "repos_url": "https://api.github.com/users/dunalduck0/repos", "events_url": "https://api.github.com/users/dunalduck0/events{/privacy}", "received_events_url": "https://api.github.com/users/dunalduck0/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! Indeed it says \"downloading and preparing\" but in your case it didn't need to download anything since you used local files (it would have thrown an error otherwise). I think we can improve the logging to make it clearer in this case", "@lhoestq Thank you for explaining. I am sorry but I was not clear about my intention. I didn't want to kill internet traffic; I wanted to kill all write activity. In other words, you can imagine that my storage has only read access but crashes on write.\r\n\r\nWhen run_clm.py is invoked with the same parameters, the hash in the cache directory \"datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/...\" doesn't change, and my job can load cached data properly. This is great.\r\n\r\nUnfortunately, when params change (which happens sometimes), the hash changes and the old cache is invalid. datasets builder would create a new cache directory with the new hash and create JSON builder there, even though every JSON builder is the same. I didn't find a way to avoid such behavior.\r\n\r\nThis problem can be resolved when using datasets.map() for tokenizing and grouping text. This function allows me to specify output filenames with --cache_file_names, so that the cached files are always valid.\r\n\r\nThis is the code that I used to freeze cache filenames for tokenization. I wish I could do the same to datasets.load_dataset()\r\n```\r\n tokenized_datasets = raw_datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=column_names,\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n desc=\"Running tokenizer on dataset\",\r\n cache_file_names={k: os.path.join(model_args.cache_dir, f'{k}-tokenized') for k in raw_datasets},\r\n )\r\n```", "Hi ! `load_dataset` may re-generate your dataset if some parameters changed indeed. If you want to freeze a dataset loaded with `load_dataset`, I think the best solution is just to save it somewhere on your disk with `.save_to_disk(my_dataset_dir)` and reload it with `load_from_disk(my_dataset_dir)`. This way you will be able to reload the dataset without having to run `load_dataset`" ]
1,639,680,673,000
1,640,000,609,000
null
NONE
null
## Describe the bug According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download the "custom data configuration" for JSON, even though I had already run the program once and cached all data into the same --cache_dir. "Downloading" is not an issue when running with a local disk, but it often crashes with cloud storage because (1) multiple GPU processes try to access the same file, AND (2) FileLocker fails to synchronize all processes, due to storage throttling. 99% of the time, when the main process releases FileLocker, the file is not actually ready for access in cloud storage and thus triggers "FileNotFound" errors for all other processes. Well, another way to resolve the problem is to find super reliable cloud storage, but that's out of scope here. ## Steps to reproduce the bug ``` export HF_DATASETS_OFFLINE=1 python run_clm.py --model_name_or_path=models/gpt-j-6B --train_file=trainpy.v2.train.json --validation_file=trainpy.v2.eval.json --cache_dir=datacache/trainpy.v2 ``` ## Expected results datasets should stop all "downloading" behavior and reuse the cached JSON configuration. I think the problem here is that part of the cache directory path, "default-471372bed4b51b53", is randomly generated, and it could change if some parameters change. I didn't find a way to use a fixed path to ensure datasets reuses cached data every time. ## Actual results The logging shows datasets are still downloading into "datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426". ``` 12/16/2021 10:25:59 - WARNING - datasets.builder - Using custom data configuration default-471372bed4b51b53 12/16/2021 10:25:59 - INFO - datasets.builder - Generating dataset json (datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426) Downloading and preparing dataset json/default to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426... 100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 17623.13it/s] 12/16/2021 10:25:59 - INFO - datasets.utils.download_manager - Downloading took 0.0 min 12/16/2021 10:26:00 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min 100%|███████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1206.99it/s] 12/16/2021 10:26:00 - INFO - datasets.utils.info_utils - Unable to verify checksums. 12/16/2021 10:26:00 - INFO - datasets.builder - Generating split train 12/16/2021 10:26:01 - INFO - datasets.builder - Generating split validation 12/16/2021 10:26:02 - INFO - datasets.utils.info_utils - Unable to verify splits sizes. Dataset json downloaded and prepared to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426. Subsequent calls will reuse this data. 100%|█████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 53.54it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux - Python version: 3.8.10 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3447/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3447/timeline
null
null
null
false
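A small sketch of the workaround suggested at the end of the issue 3447 thread above: materialize the dataset once with `save_to_disk` and reload it with `load_from_disk`, so later runs never depend on the hash-named `load_dataset` cache. The frozen directory name is a placeholder; the data files mirror the report.

```python
from datasets import load_dataset, load_from_disk

frozen_dir = "datacache/trainpy.v2.frozen"  # placeholder location

try:
    raw_datasets = load_from_disk(frozen_dir)  # read-only reload, no hashing involved
except FileNotFoundError:
    raw_datasets = load_dataset(
        "json",
        data_files={"train": "trainpy.v2.train.json",
                    "validation": "trainpy.v2.eval.json"},
    )
    raw_datasets.save_to_disk(frozen_dir)  # one-time write
```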
https://api.github.com/repos/huggingface/datasets/issues/3446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3446/comments
https://api.github.com/repos/huggingface/datasets/issues/3446/events
https://github.com/huggingface/datasets/pull/3446
1,082,414,229
PR_kwDODunzps4v9dFM
3,446
Remove redundant local path information in audio/image datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Cool, I'm in favor of this PR. Our official examples in speech already make use of `\"audio\"` so no need to change anything there. It would be great if we could prominently feature how one can get the audio path without decoding in the docs.", "@patrickvonplaten Yes, I agree.\r\n\r\ncc @stevhliu we should add an example where decoding is disabled (to read paths) to [this section](https://github.com/huggingface/datasets/blob/master/docs/source/audio_process.rst#audio-datasets) in the docs and remove the mentions of `path`/`file` (if we merge this PR)." ]
1,639,672,515,000
1,639,675,804,000
null
CONTRIBUTOR
null
Remove the redundant path information in the audio/image dataset as discussed in https://github.com/huggingface/datasets/pull/3430#issuecomment-994734828 TODOs: * [ ] merge https://github.com/huggingface/datasets/pull/3430 * [ ] merge https://github.com/huggingface/datasets/pull/3364 * [ ] re-generate the info files of the updated audio datasets cc: @patrickvonplaten @anton-l @nateraw (I expect this to break the audio/vision examples in Transformers; after this change you'll be able to access underlying paths as follows `dset = dset.cast_column("audio", Audio(..., decode=False)); path = dset[0]["audio"]`)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3446/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3446/timeline
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3446", "html_url": "https://github.com/huggingface/datasets/pull/3446", "diff_url": "https://github.com/huggingface/datasets/pull/3446.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3446.patch", "merged_at": null }
true
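A usage sketch of the path-access pattern PR 3446 above describes; the dataset name is a placeholder, and depending on the library version the undecoded value may be a plain path or a dict with "path" and "bytes" keys.

```python
from datasets import load_dataset, Audio

ds = load_dataset("some_audio_dataset", split="train")  # hypothetical dataset
ds = ds.cast_column("audio", Audio(decode=False))       # disable decoding to expose paths

example = ds[0]["audio"]
path = example["path"] if isinstance(example, dict) else example
```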
https://api.github.com/repos/huggingface/datasets/issues/3445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3445/comments
https://api.github.com/repos/huggingface/datasets/issues/3445/events
https://github.com/huggingface/datasets/issues/3445
1,082,370,968
I_kwDODunzps5Ag6uY
3,445
question
{ "login": "BAKAYOKO0232", "id": 38075175, "node_id": "MDQ6VXNlcjM4MDc1MTc1", "avatar_url": "https://avatars.githubusercontent.com/u/38075175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BAKAYOKO0232", "html_url": "https://github.com/BAKAYOKO0232", "followers_url": "https://api.github.com/users/BAKAYOKO0232/followers", "following_url": "https://api.github.com/users/BAKAYOKO0232/following{/other_user}", "gists_url": "https://api.github.com/users/BAKAYOKO0232/gists{/gist_id}", "starred_url": "https://api.github.com/users/BAKAYOKO0232/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BAKAYOKO0232/subscriptions", "organizations_url": "https://api.github.com/users/BAKAYOKO0232/orgs", "repos_url": "https://api.github.com/users/BAKAYOKO0232/repos", "events_url": "https://api.github.com/users/BAKAYOKO0232/events{/privacy}", "received_events_url": "https://api.github.com/users/BAKAYOKO0232/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[ "Hi ! What's your question ?" ]
1,639,670,220,000
1,639,749,168,000
null
NONE
null
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3445/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3444/comments
https://api.github.com/repos/huggingface/datasets/issues/3444/events
https://github.com/huggingface/datasets/issues/3444
1,082,078,961
I_kwDODunzps5Afzbx
3,444
Align the Dataset and IterableDataset processing API
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "Yes I agree, these should be as aligned as possible. Maybe we can also check the feedback in the survey at http://hf.co/oss-survey and see if people mentioned related things on the API (in particular if we go the breaking change way, it would be good to be sure we are taking the right direction for the community).", "I like this proposal.\r\n\r\n> There is also an important difference in terms of behavior:\r\nDataset.map adds new columns (with dict.update)\r\nBUT\r\nIterableDataset discards previous columns (it overwrites the dict)\r\nIMO the two methods should have the same behavior. This would be an important breaking change though.\r\n\r\n> The main breaking change would be the change of behavior of IterableDataset.map, because currently it discards all the previous columns instead of keeping them.\r\n\r\nYes, this behavior of `IterableDataset.map` was surprising to me the first time I used it because I was expecting the same behavior as `Dataset.map`, so I'm OK with the breaking change here.\r\n\r\n> IterableDataset only supports \"torch\" (it misses tf, jax, pandas, arrow) and is missing the parameters: columns, output_all_columns and format_kwargs\r\n\r\n\\+ it's also missing the actual formatting code (we return unformatted tensors)\r\n> We could have a completely aligned map method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that.\r\n\r\n> For information, TFDS does lazy map by default, and has an additional .cache() method.\r\n\r\nIf I understand this part correctly, the idea would be for `Dataset.map` to behave similarly to `Dataset.with_transform` (lazy processing) and to have an option to cache processed data (with `.cache()`). This idea is really nice because it can also be applied to `IterableDataset` to fix https://github.com/huggingface/datasets/issues/3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?) \r\n> If the two APIs are more aligned it would be awesome for the examples in transformers, and it would create a satisfactory experience for users that want to switch from one mode to the other.\r\n\r\nYes, it would be amazing to have an option to easily switch between these two modes.\r\n\r\nI agree with the rest.\r\n", "> If I understand this part correctly, the idea would be for Dataset.map to behave similarly to Dataset.with_transform (lazy processing) and to have an option to cache processed data (with .cache()). This idea is really nice because it can also be applied to IterableDataset to fix #3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?)\r\n\r\nYea this is too big of a change in my opinion. Anyway it's fine as it is right now with streaming=lazy and regular=eager." ]
1,639,653,971,000
1,640,099,740,000
null
MEMBER
null
## Intro Currently the two classes have two distinct APIs for processing: ### The `.map()` method Both have those parameters in common: function, batched, batch_size - IterableDataset is missing those parameters: with_indices, with_rank, input_columns, drop_last_batch, remove_columns, features, disable_nullable, fn_kwargs, num_proc - Dataset also has additional parameters that are exclusive, due to caching: keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, suffix_template, new_fingerprint - There is also an important difference in terms of behavior: **Dataset.map adds new columns** (with dict.update) BUT **IterableDataset discards previous columns** (it overwrites the dict) IMO the two methods should have the same behavior. This would be an important breaking change though. - Dataset.map is eager while IterableDataset.map is lazy ### The `.shuffle()` method - Both have an optional seed parameter, but IterableDataset requires a mandatory parameter buffer_size to control the size of the local buffer used for approximate shuffling. - IterableDataset is missing the parameter generator - Also Dataset has exclusive parameters due to caching: keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint ### The `.with_format()` method - IterableDataset only supports "torch" (it misses tf, jax, pandas, arrow) and is missing the parameters: columns, output_all_columns and format_kwargs ### Other methods - Both have the same `remove_columns` method - IterableDataset is missing: cast, cast_column, filter, rename_column, rename_columns, class_encode_column, flatten, prepare_for_task, train_test_split, shard - Some other methods are missing but we can discuss them: set_transform, formatted_as, with_transform - And others don't really make sense for an iterable dataset: select, sort, add_column, add_item - Dataset is missing skip and take, which IterableDataset implements. ## Questions I think it would be nice to be able to switch between streaming and regular datasets easily, without changing the processing code significantly. 1. What should be aligned and what shouldn't between those two APIs ? IMO the minimum is to align the main processing methods. It would mean breaking the current `IterableDataset.map` to have the same behavior as `Dataset.map` (add columns with dict.update), and adding multiprocessing as well as the missing parameters. It would also mean implementing the missing methods: cast, cast_column, filter, rename_column, rename_columns, class_encode_column, flatten, prepare_for_task, train_test_split, shard 2. What are the breaking changes for IterableDataset ? The main breaking change would be the change of behavior of `IterableDataset.map`, because currently it discards all the previous columns instead of keeping them. 3. Shall we also do some changes for regular datasets ? I agree the simplest would be to have the exact same methods for both Dataset and IterableDataset. However this is probably not a good idea because it would prevent users from using the best benefits of them. That's why we can keep some aspects of regular datasets as they are: - keep the eager Dataset.map with caching - keep the with_transform method for lazy processing - keep Dataset.select (it could also be added to IterableDataset even though it's not recommended) We could have a completely aligned `map` method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that. For information, TFDS does lazy map by default, and has an additional `.cache()` method. ## Opinions ? I'd love to gather some opinions about this here. If the two APIs are more aligned it would be awesome for the examples in `transformers`, and it would create a satisfactory experience for users that want to switch from one mode to the other. cc @mariosasko @albertvillanova @thomwolf @patrickvonplaten @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3444/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3444/timeline
null
null
null
false
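A minimal repro of the `map` mismatch called out in issue 3444 above, reflecting behavior at the time the issue was filed; the local JSON file is a placeholder, and this assumes the installed version supports streaming local JSON.

```python
from datasets import Dataset, load_dataset

ds = Dataset.from_dict({"a": [1, 2]})
print(ds.map(lambda ex: {"b": ex["a"] * 10}).column_names)
# ['a', 'b'] -> Dataset.map merges new columns into each example (dict.update)

ids = load_dataset("json", data_files="data.json", split="train", streaming=True)
first = next(iter(ids.map(lambda ex: {"b": 0})))
print(first.keys())
# dict_keys(['b']) -> IterableDataset.map discarded the previous columns
```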
https://api.github.com/repos/huggingface/datasets/issues/3443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3443/comments
https://api.github.com/repos/huggingface/datasets/issues/3443/events
https://github.com/huggingface/datasets/pull/3443
1,082,052,833
PR_kwDODunzps4v8QDX
3,443
Extend iter_archive to support file object input
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,639,652,354,000
1,639,763,583,000
1,639,763,582,000
MEMBER
null
This PR adds support for passing a file object to `[Streaming]DownloadManager.iter_archive`. With this feature, we can iterate over a tar file inside another tar file.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3443/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3443", "html_url": "https://github.com/huggingface/datasets/pull/3443", "diff_url": "https://github.com/huggingface/datasets/pull/3443.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3443.patch", "merged_at": 1639763582000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3442/comments
https://api.github.com/repos/huggingface/datasets/issues/3442/events
https://github.com/huggingface/datasets/pull/3442
1,081,862,747
PR_kwDODunzps4v7oBZ
3,442
Extend text to support yielding lines, paragraphs or documents
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)", "> The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)\r\n\r\n@lhoestq @mariosasko I would avoid the term `split` in this context and keep it only for \"train\", \"validation\" and \"test\" splits.\r\n- https://huggingface.co/docs/datasets/process.html#split\r\n > datasets.Dataset.train_test_split() creates train and test splits, if your dataset doesn’t already have them.\r\n- https://huggingface.co/docs/datasets/process.html#process-multiple-splits\r\n > Many datasets have splits that you can process simultaneously with datasets.DatasetDict.map().\r\n\r\nPlease note that in the documentation, one of the terms more frequently used in this context is **\"row\"**:\r\n- https://huggingface.co/docs/datasets/access.html#features-and-columns\r\n > A dataset is a table of rows and typed columns.\r\n\r\n > Return the number of rows and columns with the following standard attributes:\r\n > dataset.num_columns\r\n > 4\r\n > dataset.num_rows\r\n > 3668\r\n\r\n- https://huggingface.co/docs/datasets/access.html#rows-slices-batches-and-columns\r\n > Get several rows of your dataset at a time with slice notation or a list of indices:\r\n- https://huggingface.co/docs/datasets/process.html#map\r\n > This function can even create new rows and columns.\r\n\r\nOther of the terms more frequently used in the docs (in the code as well) is **\"example\"**:\r\n- https://huggingface.co/docs/datasets/process.html#map\r\n > It allows you to apply a processing function to each example in a dataset, independently or in batches.\r\n- https://huggingface.co/docs/datasets/process.html#batch-processing\r\n > datasets.Dataset.map() also supports working with batches of examples.\r\n- https://huggingface.co/docs/datasets/process.html#split-long-examples\r\n > When your examples are too long, you may want to split them\r\n- https://huggingface.co/docs/datasets/process.html#data-augmentation\r\n > With batch processing, you can even augment your dataset with additional examples.\r\n\r\nLess frequently used: **\"item\"**:\r\n- https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.add_item\r\n > Add item to Dataset.\r\n\r\nOther term used in the docs (although less frequently) is **\"sample\"**. The advantage of this word is that it is also a verb, so we can use the parameter: \"sample_by\" (if you insist on using a verb instead of a noun).\r\n\r\nIn summary, these proposals:\r\n- config.row\r\n- config.example\r\n- config.item\r\n- config.sample\r\n- config.sample_by", "I like `sample_by`. Another idea I had was `separate_by`.\r\n\r\nIt could also be `sampling`, `sampling_method`, `separation_method`.\r\n\r\nNot a big fan of the proposed nouns alone since they are very generic, that's why I tried to have something more specific.\r\n\r\nI also agree that we actually should avoid `split` to avoid any confusion", "Thanks for the analysis of the used terms. I also like `sample_by` (`separate_by` is good too).", "Thank you !! :D " ]
1,639,639,997,000
1,640,019,550,000
1,640,018,358,000
MEMBER
null
Add `config.row` option to `text` module to allow yielding lines (default, current case), paragraphs or documents. Feel free to comment on the name of the config parameter `row`: - Currently, the docs state datasets are made of rows and columns - Other names I considered: `example`, `item`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3442/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3442", "html_url": "https://github.com/huggingface/datasets/pull/3442", "diff_url": "https://github.com/huggingface/datasets/pull/3442.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3442.patch", "merged_at": 1640018358000 }
true
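A sketch of the feature from PR 3442 above, under the parameter name the thread converged on (`sample_by`); the corpus file name is a placeholder.

```python
from datasets import load_dataset

lines = load_dataset("text", data_files="corpus.txt")  # default: one example per line
paragraphs = load_dataset("text", data_files="corpus.txt", sample_by="paragraph")  # split on blank lines
documents = load_dataset("text", data_files="corpus.txt", sample_by="document")  # one example per file
```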
https://api.github.com/repos/huggingface/datasets/issues/3441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3441/comments
https://api.github.com/repos/huggingface/datasets/issues/3441/events
https://github.com/huggingface/datasets/issues/3441
1,081,571,784
I_kwDODunzps5Ad3nI
3,441
Add QuALITY dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[]
1,639,607,179,000
1,639,607,179,000
null
MEMBER
null
## Adding a Dataset - **Name:** QuALITY - **Description:** A challenging question answering with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20)) - **Paper:** No ArXiv link yet, but draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.pdf) - **Data:** GitHub repo [here](https://github.com/nyu-mll/quality) - **Motivation:** This dataset would serve as a nice way to benchmark long-range Transformer models like BigBird, Longformer and their descendants. In particular, it would be very interesting to see how the S4 model fares on this given it's impressive performance on the Long Range Arena Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3441/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3441/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3440
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3440/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3440/comments
https://api.github.com/repos/huggingface/datasets/issues/3440/events
https://github.com/huggingface/datasets/issues/3440
1,081,528,426
I_kwDODunzps5AdtBq
3,440
datasets keeps reading from cached files, although I disabled it
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! What version of `datasets` are you using ? Can you also provide the logs you get before it raises the error ?" ]
1,639,603,582,000
1,639,668,747,000
null
NONE
null
## Describe the bug Hi, I am trying to stop the datasets library from using cached files, and I get the following bug when it tries to read the cached files. I tried the following: ``` from datasets import set_caching_enabled set_caching_enabled(False) ``` and also forcing a redownload: ``` download_mode='force_redownload' ``` but neither worked so far. This is on a cluster, and on some of the machines this reads from the cached files. I would really appreciate any idea on how to fully disable caching @lhoestq many thanks ``` File "run_clm.py", line 496, in <module> main() File "run_clm.py", line 419, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 943, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 172, in evaluate output = self.eval_loop( File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 241, in eval_loop metrics = self.compute_pet_metrics(eval_datasets, model, self.extra_info[metric_key_prefix], task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 268, in compute_pet_metrics centroids = self._compute_per_token_train_centroids(model, task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 353, in _compute_per_token_train_centroids data = get_label_samples(self.get_per_task_train_dataset(task), label) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 350, in get_label_samples return dataset.filter(lambda example: int(example['labels']) == label) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2519, in filter indices = self.map( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2036, in map return self._map_single( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 503, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2248, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 654, in from_file return cls( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 593, in __init__ self.info.features = self.info.features.reorder_fields_as(inferred_features) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1092, in reorder_fields_as return Features(recursive_reorder(self, other)) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1081, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'candidates_ids': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'labels': Value(dtype='int64', id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'extra_fields': {}, 'task': Value(dtype='string', id=None)} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux - Python version: 3.8.12 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3440/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3440/timeline
null
null
null
false
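For reference, a sketch combining the knobs mentioned in issue 3440 above with per-call cache control on `filter`; the data file and label value are placeholders.

```python
from datasets import load_dataset, set_caching_enabled

set_caching_enabled(False)  # stop reusing cached transformed datasets globally

ds = load_dataset(
    "json",
    data_files="train.json",           # placeholder path
    split="train",
    download_mode="force_redownload",  # ignore previously prepared data
)
# map/filter also accept per-call cache control:
ds = ds.filter(lambda ex: int(ex["labels"]) == 0, load_from_cache_file=False)
```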
https://api.github.com/repos/huggingface/datasets/issues/3439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3439/comments
https://api.github.com/repos/huggingface/datasets/issues/3439/events
https://github.com/huggingface/datasets/pull/3439
1,081,389,723
PR_kwDODunzps4v6Hxs
3,439
Add `cast_column` to `IterableDataset`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Awesome thanks a lot @mariosasko " ]
1,639,594,845,000
1,639,670,120,000
1,639,670,119,000
CONTRIBUTOR
null
Closes #3369. cc: @patrickvonplaten
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3439/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3439", "html_url": "https://github.com/huggingface/datasets/pull/3439", "diff_url": "https://github.com/huggingface/datasets/pull/3439.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3439.patch", "merged_at": 1639670119000 }
true
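A usage sketch of what PR 3439 above enables, following the audio-resampling pattern from the linked issue; the dataset and config names are illustrative.

```python
from datasets import load_dataset, Audio

ids = load_dataset("common_voice", "tr", split="train", streaming=True)  # illustrative dataset
ids = ids.cast_column("audio", Audio(sampling_rate=16_000))  # resample lazily while streaming
sample = next(iter(ids))
```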
https://api.github.com/repos/huggingface/datasets/issues/3438
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3438/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3438/comments
https://api.github.com/repos/huggingface/datasets/issues/3438/events
https://github.com/huggingface/datasets/pull/3438
1,081,302,203
PR_kwDODunzps4v52Va
3,438
Update supported versions of Python in setup.py
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,639,589,412,000
1,640,010,133,000
1,640,010,132,000
CONTRIBUTOR
null
Update the list of supported versions of Python in `setup.py` to keep the PyPI project description updated.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3438/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3438", "html_url": "https://github.com/huggingface/datasets/pull/3438", "diff_url": "https://github.com/huggingface/datasets/pull/3438.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3438.patch", "merged_at": 1640010132000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3437/comments
https://api.github.com/repos/huggingface/datasets/issues/3437/events
https://github.com/huggingface/datasets/pull/3437
1,081,247,889
PR_kwDODunzps4v5qzI
3,437
Update BLEURT hyperlink
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "seems like a very very low-prio improvement :)", "@albertvillanova thanks for the feedback! I removed the formatting altogether since I think this is a bit simpler tor read than non-rendered Markdown" ]
1,639,586,087,000
1,639,747,706,000
1,639,747,705,000
MEMBER
null
The description of BLEURT on the hf.co website has a strange use of URL hyperlinking. This PR attempts to fix this, although I am not 100% sure Markdown syntax is allowed on the frontend or not. ![Screen Shot 2021-12-15 at 17 31 27](https://user-images.githubusercontent.com/26859204/146226432-c83cbdaf-f57d-4999-b53c-85da718ff7fb.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3437/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3437", "html_url": "https://github.com/huggingface/datasets/pull/3437", "diff_url": "https://github.com/huggingface/datasets/pull/3437.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3437.patch", "merged_at": 1639747705000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3436
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3436/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3436/comments
https://api.github.com/repos/huggingface/datasets/issues/3436/events
https://github.com/huggingface/datasets/pull/3436
1,081,068,139
PR_kwDODunzps4v5FE3
3,436
Add the OneStopQa dataset
{ "login": "scaperex", "id": 28459495, "node_id": "MDQ6VXNlcjI4NDU5NDk1", "avatar_url": "https://avatars.githubusercontent.com/u/28459495?v=4", "gravatar_id": "", "url": "https://api.github.com/users/scaperex", "html_url": "https://github.com/scaperex", "followers_url": "https://api.github.com/users/scaperex/followers", "following_url": "https://api.github.com/users/scaperex/following{/other_user}", "gists_url": "https://api.github.com/users/scaperex/gists{/gist_id}", "starred_url": "https://api.github.com/users/scaperex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/scaperex/subscriptions", "organizations_url": "https://api.github.com/users/scaperex/orgs", "repos_url": "https://api.github.com/users/scaperex/repos", "events_url": "https://api.github.com/users/scaperex/events{/privacy}", "received_events_url": "https://api.github.com/users/scaperex/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,639,576,411,000
1,639,751,520,000
1,639,747,529,000
CONTRIBUTOR
null
Adding OneStopQA, a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3436/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3436/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3436", "html_url": "https://github.com/huggingface/datasets/pull/3436", "diff_url": "https://github.com/huggingface/datasets/pull/3436.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3436.patch", "merged_at": 1639747529000 }
true
https://api.github.com/repos/huggingface/datasets/issues/3435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3435/comments
https://api.github.com/repos/huggingface/datasets/issues/3435/events
https://github.com/huggingface/datasets/pull/3435
1,081,043,756
PR_kwDODunzps4v4_-0
3,435
Improve Wikipedia Loading Script
{ "login": "geohci", "id": 45494522, "node_id": "MDQ6VXNlcjQ1NDk0NTIy", "avatar_url": "https://avatars.githubusercontent.com/u/45494522?v=4", "gravatar_id": "", "url": "https://api.github.com/users/geohci", "html_url": "https://github.com/geohci", "followers_url": "https://api.github.com/users/geohci/followers", "following_url": "https://api.github.com/users/geohci/following{/other_user}", "gists_url": "https://api.github.com/users/geohci/gists{/gist_id}", "starred_url": "https://api.github.com/users/geohci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/geohci/subscriptions", "organizations_url": "https://api.github.com/users/geohci/orgs", "repos_url": "https://api.github.com/users/geohci/repos", "events_url": "https://api.github.com/users/geohci/events{/privacy}", "received_events_url": "https://api.github.com/users/geohci/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "I wanted to flag a change from since we discussed this: I initially wrote a function for using the Wikimedia APIs to collect namespace aliases, but decided that adding in more http requests to the script wasn't a great idea so instead used that code to build a static list that I just added directly to the code.\r\n\r\nAlso, an FYI that python library dependencies weren't working on my local end so I wasn't able to directly test the code. I tested a copy with the problematic elements stripped (beam etc.) that worked fine, but someone with a working local copy may want to test just to make sure I didn't accidentally break anything.", "Also, while I would argue more strongly for some of the changes in this code, they are five distinct changes so not so hard to remove one or two if other folks think they aren't worth the overhead etc." ]
1,639,575,006,000
1,639,575,420,000
null
NONE
null
* More structured approach to detecting redirects * Remove redundant template filter code (covered by strip_code) * Add language-specific lists of additional media namespace aliases for filtering * Add language-specific lists of category namespace aliases for new link text cleaning step * Remove magic words (parser directions like __TOC__ that occasionally occur in text) Fix #3400 With support from @albertvillanova CC @yjernite
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3435/timeline
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/3435", "html_url": "https://github.com/huggingface/datasets/pull/3435", "diff_url": "https://github.com/huggingface/datasets/pull/3435.diff", "patch_url": "https://github.com/huggingface/datasets/pull/3435.patch", "merged_at": null }
true
