| Column | Type | Stats |
| --- | --- | --- |
| comments_url | string | lengths 70 – 70 |
| timeline_url | string | lengths 70 – 70 |
| closed_at | string | lengths 20 – 20 |
| performed_via_github_app | null | – |
| state_reason | string | 3 values |
| node_id | string | lengths 18 – 32 |
| state | string | 2 values |
| assignees | list | lengths 0 – 4 |
| draft | bool | 2 classes |
| number | int64 | 1.61k – 6.73k |
| user | dict | – |
| title | string | lengths 1 – 290 |
| events_url | string | lengths 68 – 68 |
| milestone | dict | – |
| labels_url | string | lengths 75 – 75 |
| created_at | string | lengths 20 – 20 |
| active_lock_reason | null | – |
| locked | bool | 1 class |
| assignee | dict | – |
| pull_request | dict | – |
| id | int64 | 771M – 2.18B |
| labels | list | lengths 0 – 4 |
| url | string | lengths 61 – 61 |
| comments | sequence | lengths 0 – 30 |
| repository_url | string | 1 value |
| author_association | string | 3 values |
| body | string | lengths 0 – 228k |
| updated_at | string | lengths 20 – 20 |
| html_url | string | lengths 49 – 51 |
| reactions | dict | – |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/1912/comments
https://api.github.com/repos/huggingface/datasets/issues/1912/timeline
2021-02-24T13:44:53Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx
closed
[]
false
1,912
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Update: WMT - use mirror links
https://api.github.com/repos/huggingface/datasets/issues/1912/events
null
https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name}
2021-02-19T13:42:34Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1912.diff", "html_url": "https://github.com/huggingface/datasets/pull/1912", "merged_at": "2021-02-24T13:44:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1912.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1912" }
812,034,140
[]
https://api.github.com/repos/huggingface/datasets/issues/1912
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
As asked in #1892, I created mirrors of the data hosted on statmt.org and updated the wmt scripts. Now downloading the wmt datasets is blazing fast :) cc @stas00 @patrickvonplaten
2021-02-24T13:44:53Z
https://github.com/huggingface/datasets/pull/1912
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 4, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1911/comments
https://api.github.com/repos/huggingface/datasets/issues/1911/timeline
null
null
null
MDU6SXNzdWU4MTIwMDk5NTY=
open
[]
null
1,911
{ "avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4", "events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}", "followers_url": "https://api.github.com/users/ayubSubhaniya/followers", "following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}", "gists_url": "https://api.github.com/users/ayubSubhaniya/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ayubSubhaniya", "id": 20911334, "login": "ayubSubhaniya", "node_id": "MDQ6VXNlcjIwOTExMzM0", "organizations_url": "https://api.github.com/users/ayubSubhaniya/orgs", "received_events_url": "https://api.github.com/users/ayubSubhaniya/received_events", "repos_url": "https://api.github.com/users/ayubSubhaniya/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ayubSubhaniya/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayubSubhaniya/subscriptions", "type": "User", "url": "https://api.github.com/users/ayubSubhaniya" }
Saving processed dataset running infinitely
https://api.github.com/repos/huggingface/datasets/issues/1911/events
null
https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name}
2021-02-19T13:09:19Z
null
false
null
null
812,009,956
[]
https://api.github.com/repos/huggingface/datasets/issues/1911
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
I have a text dataset of size 220M. For pre-processing, I need to tokenize it and filter out rows with overly long sequences. My tokenization took roughly 3 hrs: I used `map()` with batch size 1024 and multiprocessing with 96 processes. The `filter()` function was way too slow, so I used a hack that calls pyarrow's table filter function directly, which is very fast (mentioned [here](https://github.com/huggingface/datasets/issues/1796)):

```
dataset._data = dataset._data.filter(...)
```

The filter took 1 hr. Then I called `save_to_disk()` on the processed dataset, and it runs forever. I have been waiting for 8 hrs and it has not written a single byte; in fact it has read more than 100 GB from disk. The screenshot below shows the stats using `iotop` (the second process is the one).

<img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png">

I am not able to figure out whether this is an issue with the datasets library or whether it is due to my hack for the `filter()` function.
2021-02-23T07:34:44Z
https://github.com/huggingface/datasets/issues/1911
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions" }
false
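For context, a minimal sketch of the workflow and the `filter` hack described in the issue above. This is illustrative only: the dataset, tokenizer, column name (`input_ids`), and length threshold are hypothetical, and it assumes a datasets 1.x-era `Dataset` where the private `_data` attribute is a plain `pyarrow.Table`:

```python
import pyarrow.compute as pc
from datasets import load_dataset

# Hypothetical stand-ins: any text dataset and tokenizer would do here.
dataset = load_dataset("wikitext", "wikitext-103-raw-v1", split="train")

def tokenize_fn(batch):
    # Placeholder tokenizer: whitespace split instead of a real subword tokenizer.
    return {"input_ids": [text.split() for text in batch["text"]]}

dataset = dataset.map(tokenize_fn, batched=True, batch_size=1024, num_proc=4)

# The hack from the issue: filter the underlying Arrow table directly,
# bypassing Dataset.filter(). Because `_data` is a private attribute, this
# skips the library's bookkeeping (fingerprint, indices mapping), which may
# be related to the save_to_disk() behavior reported above.
mask = pc.less_equal(pc.list_value_length(dataset._data["input_ids"]), 512)
dataset._data = dataset._data.filter(mask)

dataset.save_to_disk("filtered_dataset")
```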
https://api.github.com/repos/huggingface/datasets/issues/1910/comments
https://api.github.com/repos/huggingface/datasets/issues/1910/timeline
2021-03-04T22:02:47Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3
closed
[]
false
1,910
{ "avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4", "events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}", "followers_url": "https://api.github.com/users/ZihanWangKi/followers", "following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}", "gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ZihanWangKi", "id": 21319243, "login": "ZihanWangKi", "node_id": "MDQ6VXNlcjIxMzE5MjQz", "organizations_url": "https://api.github.com/users/ZihanWangKi/orgs", "received_events_url": "https://api.github.com/users/ZihanWangKi/received_events", "repos_url": "https://api.github.com/users/ZihanWangKi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions", "type": "User", "url": "https://api.github.com/users/ZihanWangKi" }
Adding CoNLLpp dataset.
https://api.github.com/repos/huggingface/datasets/issues/1910/events
null
https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name}
2021-02-19T05:12:30Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1910.diff", "html_url": "https://github.com/huggingface/datasets/pull/1910", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1910.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1910" }
811,697,108
[]
https://api.github.com/repos/huggingface/datasets/issues/1910
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
2021-03-04T22:02:47Z
https://github.com/huggingface/datasets/pull/1910
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1907/comments
https://api.github.com/repos/huggingface/datasets/issues/1907/timeline
2021-02-22T23:22:04Z
null
completed
MDU6SXNzdWU4MTE1MjA1Njk=
closed
[]
null
1,907
{ "avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4", "events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}", "followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers", "following_url": "https://api.github.com/users/francisco-perez-sorrosal/following{/other_user}", "gists_url": "https://api.github.com/users/francisco-perez-sorrosal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/francisco-perez-sorrosal", "id": 918006, "login": "francisco-perez-sorrosal", "node_id": "MDQ6VXNlcjkxODAwNg==", "organizations_url": "https://api.github.com/users/francisco-perez-sorrosal/orgs", "received_events_url": "https://api.github.com/users/francisco-perez-sorrosal/received_events", "repos_url": "https://api.github.com/users/francisco-perez-sorrosal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/francisco-perez-sorrosal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/francisco-perez-sorrosal/subscriptions", "type": "User", "url": "https://api.github.com/users/francisco-perez-sorrosal" }
DBPedia14 Dataset Checksum bug?
https://api.github.com/repos/huggingface/datasets/issues/1907/events
null
https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name}
2021-02-18T22:25:48Z
null
false
null
null
811,520,569
[]
https://api.github.com/repos/huggingface/datasets/issues/1907
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Hi there!!! I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I get this error:

```
Traceback (most recent call last):
  File "./conditional_classification/basic_pipeline.py", line 178, in <module>
    main()
  File "./conditional_classification/basic_pipeline.py", line 128, in main
    corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class,
  File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data
    datasets = load_dataset(self.name, split=dataset_split)
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare
    self._download_and_prepare(
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare
    verify_checksums(
  File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
    raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']
```

I've seen this happen before in other datasets, as reported in #537. I've tried clearing my cache and calling `load_dataset` again, but it still doesn't work. The same codebase successfully downloads and uses other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. Can you please check if there's a problem with the checksums, or whether this is related to something else?

I've also seen that the cache path for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this maybe a bug introduced recently? Thanks!
2021-02-22T23:22:05Z
https://github.com/huggingface/datasets/issues/1907
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions" }
false
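While waiting for stale checksum metadata to be fixed, a common workaround in datasets 1.x was to skip verification entirely; a minimal sketch, assuming the download itself still succeeds:

```python
from datasets import load_dataset

# ignore_verifications=True skips checksum/size checks. Use with care:
# it also masks genuinely corrupted or truncated downloads.
dataset = load_dataset("dbpedia_14", ignore_verifications=True)
print(dataset)
```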
https://api.github.com/repos/huggingface/datasets/issues/1906/comments
https://api.github.com/repos/huggingface/datasets/issues/1906/timeline
null
null
null
MDU6SXNzdWU4MTE0MDUyNzQ=
open
[]
null
1,906
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
Feature Request: Support for Pandas `Categorical`
https://api.github.com/repos/huggingface/datasets/issues/1906/events
null
https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name}
2021-02-18T19:46:05Z
null
false
null
null
811,405,274
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
https://api.github.com/repos/huggingface/datasets/issues/1906
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
```
from datasets import Dataset
import pandas as pd
import pyarrow

df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
```

I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`? e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept:

```
index_type = generate_from_arrow_type(pa_type.index_type)
value_type = generate_from_arrow_type(pa_type.value_type)
```

and then additional code points to modify:

- FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694
- A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719
- I don't quite understand what `encode_nested_example` does, but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755
- Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775

I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints.
2021-02-23T14:38:50Z
https://github.com/huggingface/datasets/issues/1906
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions" }
false
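Until a `Categorical`-style feature exists, one possible workaround for the issue above is to decode the categorical column before conversion and then re-encode it as a `ClassLabel`. A sketch, assuming a datasets version that provides `class_encode_column` (added after this issue was filed):

```python
import pandas as pd
from datasets import Dataset

df = pd.DataFrame({"col": pd.Series(["a", "b", "c", "a"], dtype="category")})

# Decode the categorical column to plain strings so from_pandas succeeds...
ds = Dataset.from_pandas(df.astype({"col": str}))

# ...then re-encode it as a ClassLabel feature to keep the label semantics.
ds = ds.class_encode_column("col")
print(ds.features)  # {'col': ClassLabel(names=['a', 'b', 'c'], ...)}
```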
https://api.github.com/repos/huggingface/datasets/issues/1905/comments
https://api.github.com/repos/huggingface/datasets/issues/1905/timeline
2021-02-20T22:01:30Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1
closed
[]
true
1,905
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
Standardizing datasets.dtypes
https://api.github.com/repos/huggingface/datasets/issues/1905/events
null
https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name}
2021-02-18T19:15:31Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1905.diff", "html_url": "https://github.com/huggingface/datasets/pull/1905", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1905.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1905" }
811,384,174
[]
https://api.github.com/repos/huggingface/datasets/issues/1905
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here). This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here.
2021-02-20T22:01:30Z
https://github.com/huggingface/datasets/pull/1905
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions" }
true
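To make the "explicit mapping to a list of supported Value dtypes" described in the PR above concrete, here is a rough illustrative sketch; the names and the exact dtype set are assumptions, not the PR's actual code:

```python
import pyarrow as pa

# Illustrative explicit dtype registry: string aliases -> pyarrow types.
_DTYPE_TO_ARROW = {
    "bool": pa.bool_(),
    "int32": pa.int32(),
    "int64": pa.int64(),
    "float32": pa.float32(),
    "float64": pa.float64(),
    "string": pa.string(),
}
# Aliases, so "float"/"double" resolve without the __post_init__ fixups.
_DTYPE_TO_ARROW["float"] = _DTYPE_TO_ARROW["float32"]
_DTYPE_TO_ARROW["double"] = _DTYPE_TO_ARROW["float64"]

def string_to_arrow(dtype: str) -> pa.DataType:
    try:
        return _DTYPE_TO_ARROW[dtype]
    except KeyError:
        raise ValueError(f"Unsupported dtype: {dtype!r}")
```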
https://api.github.com/repos/huggingface/datasets/issues/1904/comments
https://api.github.com/repos/huggingface/datasets/issues/1904/timeline
2021-02-18T17:10:01Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0
closed
[]
false
1,904
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix to_pandas for boolean ArrayXD
https://api.github.com/repos/huggingface/datasets/issues/1904/events
null
https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name}
2021-02-18T16:30:46Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1904.diff", "html_url": "https://github.com/huggingface/datasets/pull/1904", "merged_at": "2021-02-18T17:10:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1904.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1904" }
811,260,904
[]
https://api.github.com/repos/huggingface/datasets/issues/1904
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
As noticed in #1887, the conversion of a dataset with boolean ArrayXD feature types fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`. Zero copy is available for all primitive types except booleans; see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22

cc @SBrandeis
2021-02-18T17:10:03Z
https://github.com/huggingface/datasets/pull/1904
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions" }
true
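A small standalone illustration of the pyarrow behavior the PR above works around, assuming pyarrow raises `ArrowInvalid` for the boolean zero-copy case:

```python
import pyarrow as pa

arr = pa.array([True, False, True])

# Boolean values are bit-packed in Arrow, so no numpy array can alias the
# underlying buffer; zero-copy conversion is impossible for booleans.
try:
    arr.to_numpy(zero_copy_only=True)
except pa.ArrowInvalid as err:
    print("zero-copy failed:", err)

# Allowing a copy succeeds.
print(arr.to_numpy(zero_copy_only=False))
```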
https://api.github.com/repos/huggingface/datasets/issues/1903/comments
https://api.github.com/repos/huggingface/datasets/issues/1903/timeline
2021-03-01T09:39:12Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2
closed
[]
false
1,903
{ "avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4", "events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}", "followers_url": "https://api.github.com/users/vrindaprabhu/followers", "following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}", "gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vrindaprabhu", "id": 16264631, "login": "vrindaprabhu", "node_id": "MDQ6VXNlcjE2MjY0NjMx", "organizations_url": "https://api.github.com/users/vrindaprabhu/orgs", "received_events_url": "https://api.github.com/users/vrindaprabhu/received_events", "repos_url": "https://api.github.com/users/vrindaprabhu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions", "type": "User", "url": "https://api.github.com/users/vrindaprabhu" }
Initial commit for the addition of TIMIT dataset
https://api.github.com/repos/huggingface/datasets/issues/1903/events
null
https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name}
2021-02-18T14:23:12Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1903.diff", "html_url": "https://github.com/huggingface/datasets/pull/1903", "merged_at": "2021-03-01T09:39:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/1903.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1903" }
811,145,531
[]
https://api.github.com/repos/huggingface/datasets/issues/1903
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
The points below need to be addressed:

- Creation of the dummy dataset is failing
- Need to check on the data representation
- The license is not Creative Commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania

Also, the links (_except the download_) point to the AMI corpus! ;-)

@patrickvonplaten, requesting your comments; I will be happy to address them!
2021-03-01T09:39:12Z
https://github.com/huggingface/datasets/pull/1903
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1902/comments
https://api.github.com/repos/huggingface/datasets/issues/1902/timeline
2021-02-18T09:55:41Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1
closed
[]
false
1,902
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix setimes_2 wmt urls
https://api.github.com/repos/huggingface/datasets/issues/1902/events
null
https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name}
2021-02-18T09:42:26Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1902.diff", "html_url": "https://github.com/huggingface/datasets/pull/1902", "merged_at": "2021-02-18T09:55:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/1902.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1902" }
810,931,171
[]
https://api.github.com/repos/huggingface/datasets/issues/1902
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Continuation of #1901. Some other URLs were missing https.
2021-02-18T09:55:41Z
https://github.com/huggingface/datasets/pull/1902
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1901/comments
https://api.github.com/repos/huggingface/datasets/issues/1901/timeline
2021-02-18T09:39:21Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy
closed
[]
false
1,901
{ "avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4", "events_url": "https://api.github.com/users/YangWang92/events{/privacy}", "followers_url": "https://api.github.com/users/YangWang92/followers", "following_url": "https://api.github.com/users/YangWang92/following{/other_user}", "gists_url": "https://api.github.com/users/YangWang92/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/YangWang92", "id": 3883941, "login": "YangWang92", "node_id": "MDQ6VXNlcjM4ODM5NDE=", "organizations_url": "https://api.github.com/users/YangWang92/orgs", "received_events_url": "https://api.github.com/users/YangWang92/received_events", "repos_url": "https://api.github.com/users/YangWang92/repos", "site_admin": false, "starred_url": "https://api.github.com/users/YangWang92/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/YangWang92/subscriptions", "type": "User", "url": "https://api.github.com/users/YangWang92" }
Fix OPUS dataset download errors
https://api.github.com/repos/huggingface/datasets/issues/1901/events
null
https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name}
2021-02-18T07:39:41Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1901.diff", "html_url": "https://github.com/huggingface/datasets/pull/1901", "merged_at": "2021-02-18T09:39:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1901.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1901" }
810,845,605
[]
https://api.github.com/repos/huggingface/datasets/issues/1901
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Replace http with https.

https://github.com/huggingface/datasets/issues/854
https://discuss.huggingface.co/t/cannot-download-wmt16/2081
2021-02-18T15:07:20Z
https://github.com/huggingface/datasets/pull/1901
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1900/comments
https://api.github.com/repos/huggingface/datasets/issues/1900/timeline
2021-02-19T18:27:11Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3
closed
[]
false
1,900
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
Issue #1895: Bugfix for string_to_arrow timestamp[ns] support
https://api.github.com/repos/huggingface/datasets/issues/1900/events
null
https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name}
2021-02-17T20:26:04Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1900.diff", "html_url": "https://github.com/huggingface/datasets/pull/1900", "merged_at": "2021-02-19T18:27:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/1900.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1900" }
810,512,488
[]
https://api.github.com/repos/huggingface/datasets/issues/1900
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Should resolve https://github.com/huggingface/datasets/issues/1895

The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.

While adding unit tests, I noticed that support for the double/float types also doesn't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant:

```
def __post_init__(self):
    if self.dtype == "double":  # fix inferred type
        self.dtype = "float64"
    if self.dtype == "float":  # fix inferred type
        self.dtype = "float32"
```

However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that.

The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other. I thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request!
2021-02-19T18:27:11Z
https://github.com/huggingface/datasets/pull/1900
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions" }
true
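A rough sketch of the kind of inverse parsing the PR above describes: turning `str(pa.timestamp(...))` output like `timestamp[ns]` or `timestamp[us, tz=UTC]` back into a pyarrow TimestampType. The regex and function name are illustrative, not the PR's actual code:

```python
import re
import pyarrow as pa

# Matches str(pa.timestamp(unit)) and str(pa.timestamp(unit, tz=...)).
_TIMESTAMP_RE = re.compile(r"^timestamp\[(s|ms|us|ns)(?:,\s*tz=(.+))?\]$")

def parse_timestamp_dtype(dtype: str) -> pa.DataType:
    match = _TIMESTAMP_RE.match(dtype)
    if match is None:
        raise ValueError(f"Not a timestamp dtype: {dtype!r}")
    unit, tz = match.groups()
    return pa.timestamp(unit, tz=tz)

assert parse_timestamp_dtype("timestamp[ns]") == pa.timestamp("ns")
assert parse_timestamp_dtype("timestamp[us, tz=UTC]") == pa.timestamp("us", tz="UTC")
```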
https://api.github.com/repos/huggingface/datasets/issues/1899/comments
https://api.github.com/repos/huggingface/datasets/issues/1899/timeline
2021-02-17T17:20:49Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4
closed
[]
false
1,899
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix: ALT - fix duplicated examples in alt-parallel
https://api.github.com/repos/huggingface/datasets/issues/1899/events
null
https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name}
2021-02-17T15:53:56Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1899.diff", "html_url": "https://github.com/huggingface/datasets/pull/1899", "merged_at": "2021-02-17T17:20:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/1899.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1899" }
810,308,332
[]
https://api.github.com/repos/huggingface/datasets/issues/1899
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field. This was due to a bad copy of a Python dict. This PR fixes that.
2021-02-17T17:20:49Z
https://github.com/huggingface/datasets/pull/1899
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions" }
true
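The class of bug fixed in the PR above (one Python dict object reused across yielded examples) can be reproduced in a few lines; the field names below are illustrative, not ALT's actual schema:

```python
import copy

def generate_examples_buggy(rows):
    translation = {}
    for idx, row in enumerate(rows):
        translation[row["lang"]] = row["text"]
        # Bug: every example holds a reference to the SAME dict, so once the
        # generator is exhausted, all examples show the final accumulated values.
        yield idx, {"translation": translation}

def generate_examples_fixed(rows):
    translation = {}
    for idx, row in enumerate(rows):
        translation[row["lang"]] = row["text"]
        # Fix: snapshot the dict so each example owns its own copy.
        yield idx, {"translation": copy.deepcopy(translation)}
```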
https://api.github.com/repos/huggingface/datasets/issues/1898/comments
https://api.github.com/repos/huggingface/datasets/issues/1898/timeline
2021-02-19T06:18:46Z
null
completed
MDU6SXNzdWU4MTAxNTcyNTE=
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
1,898
{ "avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4", "events_url": "https://api.github.com/users/10-zin/events{/privacy}", "followers_url": "https://api.github.com/users/10-zin/followers", "following_url": "https://api.github.com/users/10-zin/following{/other_user}", "gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/10-zin", "id": 33179372, "login": "10-zin", "node_id": "MDQ6VXNlcjMzMTc5Mzcy", "organizations_url": "https://api.github.com/users/10-zin/orgs", "received_events_url": "https://api.github.com/users/10-zin/received_events", "repos_url": "https://api.github.com/users/10-zin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/10-zin/subscriptions", "type": "User", "url": "https://api.github.com/users/10-zin" }
ALT dataset has repeating instances in all splits
https://api.github.com/repos/huggingface/datasets/issues/1898/events
null
https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name}
2021-02-17T12:51:42Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
null
810,157,251
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/1898
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/

It seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits. Would be great if this could be fixed :)

Added a snapshot of the contents from the `explore-dataset` feature, for quick reference.

![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
2021-02-19T06:18:46Z
https://github.com/huggingface/datasets/issues/1898
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1897/comments
https://api.github.com/repos/huggingface/datasets/issues/1897/timeline
2021-02-17T13:15:15Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy
closed
[]
false
1,897
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix PandasArrayExtensionArray conversion to native type
https://api.github.com/repos/huggingface/datasets/issues/1897/events
null
https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name}
2021-02-17T11:48:24Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1897.diff", "html_url": "https://github.com/huggingface/datasets/pull/1897", "merged_at": "2021-02-17T13:15:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/1897.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1897" }
810,113,263
[]
https://api.github.com/repos/huggingface/datasets/issues/1897
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
To make the conversion to CSV work in #1887, we need the PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types. However, previously `pandas.core.internals.ExtensionBlock.to_native_types` would fail with a PandasExtensionArray because:

1. the PandasExtensionArray.isna method was wrong;
2. the conversion of a PandasExtensionArray to a numpy array with dtype=object returned a multidimensional array, while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray)).

I fixed these two issues, and now the conversion to native types works, and so does the export to CSV.

cc @SBrandeis
2021-02-17T13:15:16Z
https://github.com/huggingface/datasets/pull/1897
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions" }
true
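Regarding the second point in the PR above, the usual numpy pattern for producing a genuinely 1D object array of multidimensional elements (not necessarily the exact code of the fix) looks like this:

```python
import numpy as np

matrices = [np.ones((2, 2)), np.zeros((2, 2))]

# Build a 1D object array whose elements are the matrices themselves.
# np.array(matrices, dtype=object) would instead produce a (2, 2, 2) array,
# which is the multidimensional shape pandas rejects.
out = np.empty(len(matrices), dtype=object)
for i, m in enumerate(matrices):
    out[i] = m

print(out.shape)  # (2,) -- one object per row, which is what pandas expects
```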
https://api.github.com/repos/huggingface/datasets/issues/1895/comments
https://api.github.com/repos/huggingface/datasets/issues/1895/timeline
2021-02-19T18:27:11Z
null
completed
MDU6SXNzdWU4MDk2MzAyNzE=
closed
[]
null
1,895
{ "avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4", "events_url": "https://api.github.com/users/justin-yan/events{/privacy}", "followers_url": "https://api.github.com/users/justin-yan/followers", "following_url": "https://api.github.com/users/justin-yan/following{/other_user}", "gists_url": "https://api.github.com/users/justin-yan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/justin-yan", "id": 7731709, "login": "justin-yan", "node_id": "MDQ6VXNlcjc3MzE3MDk=", "organizations_url": "https://api.github.com/users/justin-yan/orgs", "received_events_url": "https://api.github.com/users/justin-yan/received_events", "repos_url": "https://api.github.com/users/justin-yan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/justin-yan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/justin-yan/subscriptions", "type": "User", "url": "https://api.github.com/users/justin-yan" }
Bug Report: timestamp[ns] not recognized
https://api.github.com/repos/huggingface/datasets/issues/1895/events
null
https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name}
2021-02-16T20:38:04Z
null
false
null
null
809,630,271
[]
https://api.github.com/repos/huggingface/datasets/issues/1895
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Repro:

```
from datasets import Dataset
import pandas as pd
import pyarrow

df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```

The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp

It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method.

Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well!

```
$ pip list  # only the relevant libraries/versions
datasets  1.2.1
pandas    1.0.3
pyarrow   3.0.0
```
2021-02-19T18:27:11Z
https://github.com/huggingface/datasets/issues/1895
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1894/comments
https://api.github.com/repos/huggingface/datasets/issues/1894/timeline
null
null
null
MDU6SXNzdWU4MDk2MDk2NTQ=
open
[]
null
1,894
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
benchmarking against MMapIndexedDataset
https://api.github.com/repos/huggingface/datasets/issues/1894/events
null
https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name}
2021-02-16T20:04:58Z
null
false
null
null
809,609,654
[]
https://api.github.com/repos/huggingface/datasets/issues/1894
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I am trying to benchmark my `datasets`-based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implementation uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens).

Questions:
1) Is this (basically identical) performance expected?
2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?)
3) Should I be using different benchmarking tools than `psrecord`/how do you guys do benchmarks?

Thanks in advance! Sam
2021-02-17T18:52:28Z
https://github.com/huggingface/datasets/issues/1894
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1893/comments
https://api.github.com/repos/huggingface/datasets/issues/1893/timeline
2021-03-03T17:42:02Z
null
completed
MDU6SXNzdWU4MDk1NTY1MDM=
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
1,893
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
wmt19 is broken
https://api.github.com/repos/huggingface/datasets/issues/1893/events
null
https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name}
2021-02-16T18:39:58Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
null
809,556,503
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/1893
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
1. Check which lang pairs we have: `--dataset_name wmt19` reports:

   Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']

2. OK, let's pick `ru-en`: `--dataset_name wmt19 --dataset_config "ru-en"`, no cookies:

```
Traceback (most recent call last):
  File "./run_seq2seq.py", line 661, in <module>
    main()
  File "./run_seq2seq.py", line 317, in main
    datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
    builder_instance.download_and_prepare(
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare
    self._download_and_prepare(
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators
    downloaded_files = dl_manager.download_and_extract(urls_to_download)
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download
    downloaded_path_or_paths = map_nested(
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested
    mapped = [
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp>
    _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp>
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested
    return function(data_struct)
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path
    output_path = get_from_cache(
  File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz
```
2021-03-03T17:42:02Z
https://github.com/huggingface/datasets/issues/1893
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1892/comments
https://api.github.com/repos/huggingface/datasets/issues/1892/timeline
2021-03-25T11:53:23Z
null
completed
MDU6SXNzdWU4MDk1NTQxNzQ=
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
1,892
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
request to mirror wmt datasets, as they are really slow to download
https://api.github.com/repos/huggingface/datasets/issues/1892/events
null
https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name}
2021-02-16T18:36:11Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
null
809,554,174
[]
https://api.github.com/repos/huggingface/datasets/issues/1892
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download, and not because of the local connection speed: they are all quite small datasets, just extremely slow to download from the original servers. Thank you!
2021-10-26T06:55:42Z
https://github.com/huggingface/datasets/issues/1892
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1891/comments
https://api.github.com/repos/huggingface/datasets/issues/1891/timeline
2022-10-05T12:48:38Z
null
completed
MDU6SXNzdWU4MDk1NTAwMDE=
closed
[]
null
1,891
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
suggestion to improve a missing dataset error
https://api.github.com/repos/huggingface/datasets/issues/1891/events
null
https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name}
2021-02-16T18:29:13Z
null
false
null
null
809,550,001
[]
https://api.github.com/repos/huggingface/datasets/issues/1891
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I was using `--dataset_name wmt19` and all was good. Then I thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, and got 3 different errors (1 repeated twice), none telling me the real issue: that `wmt20` isn't in `datasets`: ``` True, predict_with_generate=True) Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 323, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 335, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 706, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 343, in prepare_module raise FileNotFoundError( FileNotFoundError: Couldn't find file locally at wmt20/wmt20.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py. The file is also not present on the master branch on github. ``` Suggestion: if it is not in a local path, check that there is an actual `https://github.com/huggingface/datasets/tree/master/datasets/wmt20` first and assert "dataset `wmt20` doesn't exist in datasets" (a sketch of such a check follows this record), rather than trying to find a loading script, since the whole dataset directory is not there. The error occurred when running: ``` cd examples/seq2seq export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " ``` Thanks.
2022-10-05T12:48:38Z
https://github.com/huggingface/datasets/issues/1891
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1891/reactions" }
false
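A hypothetical sketch of the check suggested in the issue above: probe for the dataset directory on the master branch before hunting for a loading script, and fail with one clear message. The helper name and error wording are illustrative, not the library's actual code:

```python
import requests

def assert_dataset_exists(name: str) -> None:
    # If the whole dataset directory is absent, say so directly instead of
    # raising three stacked FileNotFoundErrors about the loading script.
    url = f"https://github.com/huggingface/datasets/tree/master/datasets/{name}"
    if requests.head(url, allow_redirects=True).status_code == 404:
        raise FileNotFoundError(f"dataset `{name}` doesn't exist in datasets")

assert_dataset_exists("wmt20")  # raises a single, clear error
```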
https://api.github.com/repos/huggingface/datasets/issues/1890/comments
https://api.github.com/repos/huggingface/datasets/issues/1890/timeline
2021-02-16T15:12:33Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx
closed
[]
false
1,890
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Reformat dataset cards section titles
https://api.github.com/repos/huggingface/datasets/issues/1890/events
null
https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name}
2021-02-16T15:11:47Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1890.diff", "html_url": "https://github.com/huggingface/datasets/pull/1890", "merged_at": "2021-02-16T15:12:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1890.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1890" }
809,395,586
[]
https://api.github.com/repos/huggingface/datasets/issues/1890
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Titles are formatted like [Foo](#foo) instead of just Foo
2021-02-16T15:12:34Z
https://github.com/huggingface/datasets/pull/1890
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1889/comments
https://api.github.com/repos/huggingface/datasets/issues/1889/timeline
2021-02-18T18:42:34Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz
closed
[]
false
1,889
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
Implement to_dict and to_pandas for Dataset
https://api.github.com/repos/huggingface/datasets/issues/1889/events
null
https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name}
2021-02-16T12:38:19Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1889.diff", "html_url": "https://github.com/huggingface/datasets/pull/1889", "merged_at": "2021-02-18T18:42:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/1889.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1889" }
809,276,015
[]
https://api.github.com/repos/huggingface/datasets/issues/1889
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
With options to return a generator or the full dataset
2021-02-18T18:42:37Z
https://github.com/huggingface/datasets/pull/1889
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions" }
true
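A sketch of how the conversions from this PR could be used; the `batched`/`batch_size` keywords mirror the "generator or the full dataset" options in the description and are assumptions about the merged API:

```python
from datasets import load_dataset

ds = load_dataset("glue", "sst2", split="validation")

df = ds.to_pandas()     # the full split as one DataFrame
cols = ds.to_dict()     # the full split as a dict of columns

# Assumed generator form: yields DataFrames of up to 1000 rows each.
for chunk in ds.to_pandas(batched=True, batch_size=1000):
    print(len(chunk))
```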
https://api.github.com/repos/huggingface/datasets/issues/1888/comments
https://api.github.com/repos/huggingface/datasets/issues/1888/timeline
2021-02-16T11:58:57Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4
closed
[]
false
1,888
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Docs for adding new column on formatted dataset
https://api.github.com/repos/huggingface/datasets/issues/1888/events
null
https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name}
2021-02-16T11:45:00Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1888.diff", "html_url": "https://github.com/huggingface/datasets/pull/1888", "merged_at": "2021-02-16T11:58:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/1888.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1888" }
809,241,123
[]
https://api.github.com/repos/huggingface/datasets/issues/1888
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
As mentioned in #1872, we should add to the documentation how the format gets updated when new columns are added. Close #1872
2021-03-30T14:01:03Z
https://github.com/huggingface/datasets/pull/1888
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1887/comments
https://api.github.com/repos/huggingface/datasets/issues/1887/timeline
2021-02-19T09:41:59Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy
closed
[]
false
1,887
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
Implement to_csv for Dataset
https://api.github.com/repos/huggingface/datasets/issues/1887/events
null
https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name}
2021-02-16T11:27:29Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1887.diff", "html_url": "https://github.com/huggingface/datasets/pull/1887", "merged_at": "2021-02-19T09:41:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1887.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1887" }
809,229,809
[]
https://api.github.com/repos/huggingface/datasets/issues/1887
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
cc @thomwolf `to_csv` supports passing either a file path or a *binary* file object. The writing is batched to avoid loading the whole table in memory.
2021-02-19T09:41:59Z
https://github.com/huggingface/datasets/pull/1887
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions" }
true
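The two call styles described in the PR above, as a short sketch; the exact keyword handling is an assumption based on the description:

```python
from datasets import load_dataset

ds = load_dataset("glue", "sst2", split="validation")

ds.to_csv("sst2_validation.csv")      # a file path ...

with open("sst2_validation.csv", "wb") as f:
    ds.to_csv(f)                      # ... or a *binary* file object
```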
https://api.github.com/repos/huggingface/datasets/issues/1886/comments
https://api.github.com/repos/huggingface/datasets/issues/1886/timeline
2021-03-09T18:51:31Z
null
null
MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz
closed
[]
false
1,886
{ "avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4", "events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}", "followers_url": "https://api.github.com/users/BirgerMoell/followers", "following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}", "gists_url": "https://api.github.com/users/BirgerMoell/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BirgerMoell", "id": 1704131, "login": "BirgerMoell", "node_id": "MDQ6VXNlcjE3MDQxMzE=", "organizations_url": "https://api.github.com/users/BirgerMoell/orgs", "received_events_url": "https://api.github.com/users/BirgerMoell/received_events", "repos_url": "https://api.github.com/users/BirgerMoell/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BirgerMoell/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BirgerMoell/subscriptions", "type": "User", "url": "https://api.github.com/users/BirgerMoell" }
Common voice
https://api.github.com/repos/huggingface/datasets/issues/1886/events
null
https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name}
2021-02-16T11:16:10Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1886.diff", "html_url": "https://github.com/huggingface/datasets/pull/1886", "merged_at": "2021-03-09T18:51:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/1886.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1886" }
809,221,885
[]
https://api.github.com/repos/huggingface/datasets/issues/1886
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Started filling out information about the dataset and a dataset card. To do: create the tagging file; update the common_voice.py file with more information.
2021-03-09T18:51:31Z
https://github.com/huggingface/datasets/pull/1886
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1885/comments
https://api.github.com/repos/huggingface/datasets/issues/1885/timeline
2021-02-16T11:44:12Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz
closed
[]
false
1,885
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
add missing info on how to add large files
https://api.github.com/repos/huggingface/datasets/issues/1885/events
null
https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name}
2021-02-15T23:46:39Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1885.diff", "html_url": "https://github.com/huggingface/datasets/pull/1885", "merged_at": "2021-02-16T11:44:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/1885.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1885" }
808,881,501
[]
https://api.github.com/repos/huggingface/datasets/issues/1885
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Thanks to @lhoestq's instructions, I was able to add data files to a custom dataset repo. This PR attempts to tell others how to do the same if they need to. @lhoestq
2021-02-16T16:22:19Z
https://github.com/huggingface/datasets/pull/1885
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1884/comments
https://api.github.com/repos/huggingface/datasets/issues/1884/timeline
2021-07-30T11:01:18Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5
closed
[]
false
1,884
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
dtype fix when using numpy arrays
https://api.github.com/repos/huggingface/datasets/issues/1884/events
null
https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name}
2021-02-15T18:55:25Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1884.diff", "html_url": "https://github.com/huggingface/datasets/pull/1884", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1884.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1884" }
808,755,894
[]
https://api.github.com/repos/huggingface/datasets/issues/1884
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
As discussed in #625, this fix lets the user preserve the dtype of a numpy array when it is converted to a pyarrow array; the dtype was getting lost due to the numpy array -> list -> pyarrow array conversion.
2021-07-30T11:01:18Z
https://github.com/huggingface/datasets/pull/1884
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions" }
true
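A small demonstration of the dtype loss the PR above targets, using pyarrow directly: going through a Python list erases the numpy dtype, while building from the array keeps it.

```python
import numpy as np
import pyarrow as pa

arr = np.array([1, 2, 3], dtype=np.int8)

print(pa.array(arr.tolist()).type)  # int64 -- dtype lost via numpy -> list -> pyarrow
print(pa.array(arr).type)           # int8  -- dtype preserved via numpy -> pyarrow
```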
https://api.github.com/repos/huggingface/datasets/issues/1883/comments
https://api.github.com/repos/huggingface/datasets/issues/1883/timeline
2021-02-24T14:53:26Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz
closed
[]
false
1,883
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
Add not-in-place implementations for several dataset transforms
https://api.github.com/repos/huggingface/datasets/issues/1883/events
null
https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name}
2021-02-15T18:44:26Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1883.diff", "html_url": "https://github.com/huggingface/datasets/pull/1883", "merged_at": "2021-02-24T14:53:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1883.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1883" }
808,750,623
[]
https://api.github.com/repos/huggingface/datasets/issues/1883
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Should we deprecate in-place versions of such methods?
2021-02-24T14:54:49Z
https://github.com/huggingface/datasets/pull/1883
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions" }
true
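To illustrate the distinction this PR introduces; the trailing-underscore naming for the in-place variant follows the repository's convention at the time, and the exact method pair shown is an assumption:

```python
from datasets import load_dataset

ds = load_dataset("glue", "sst2", split="validation")

# Not in place: returns a new Dataset, leaving `ds` untouched.
renamed = ds.rename_column("sentence", "text")

# In place: mutates `ds` directly (the older style this PR complements).
ds.rename_column_("sentence", "text")
```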
https://api.github.com/repos/huggingface/datasets/issues/1882/comments
https://api.github.com/repos/huggingface/datasets/issues/1882/timeline
null
null
null
MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw
open
[]
false
1,882
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Create Remote Manager
https://api.github.com/repos/huggingface/datasets/issues/1882/events
null
https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name}
2021-02-15T17:36:24Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1882.diff", "html_url": "https://github.com/huggingface/datasets/pull/1882", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1882.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1882" }
808,716,576
[]
https://api.github.com/repos/huggingface/datasets/issues/1882
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Refactoring to separate out the concern of remote resource management (HTTP/FTP requests).
2022-07-06T15:19:47Z
https://github.com/huggingface/datasets/pull/1882
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1881/comments
https://api.github.com/repos/huggingface/datasets/issues/1881/timeline
2021-02-15T15:09:48Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw
closed
[]
false
1,881
{ "avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4", "events_url": "https://api.github.com/users/pminervini/events{/privacy}", "followers_url": "https://api.github.com/users/pminervini/followers", "following_url": "https://api.github.com/users/pminervini/following{/other_user}", "gists_url": "https://api.github.com/users/pminervini/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pminervini", "id": 227357, "login": "pminervini", "node_id": "MDQ6VXNlcjIyNzM1Nw==", "organizations_url": "https://api.github.com/users/pminervini/orgs", "received_events_url": "https://api.github.com/users/pminervini/received_events", "repos_url": "https://api.github.com/users/pminervini/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pminervini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pminervini/subscriptions", "type": "User", "url": "https://api.github.com/users/pminervini" }
`list_datasets()` returns a list of strings, not objects
https://api.github.com/repos/huggingface/datasets/issues/1881/events
null
https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name}
2021-02-15T14:20:15Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1881.diff", "html_url": "https://github.com/huggingface/datasets/pull/1881", "merged_at": "2021-02-15T15:09:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1881.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1881" }
808,578,200
[]
https://api.github.com/repos/huggingface/datasets/issues/1881
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Here and there in the docs there is still stuff like this: ```python >>> datasets_list = list_datasets() >>> print(', '.join(dataset.id for dataset in datasets_list)) ``` However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
2021-02-15T15:09:49Z
https://github.com/huggingface/datasets/pull/1881
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions" }
true
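The corrected form of the documented snippet, per the fix above: since the returned items are plain strings, they can be joined directly, with no `.id` attribute access.

```python
from datasets import list_datasets

datasets_list = list_datasets()
print(', '.join(datasets_list))  # items are strings, not objects with an `.id`
```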
https://api.github.com/repos/huggingface/datasets/issues/1880/comments
https://api.github.com/repos/huggingface/datasets/issues/1880/timeline
2021-02-15T14:18:18Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0
closed
[]
false
1,880
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Update multi_woz_v22 checksums
https://api.github.com/repos/huggingface/datasets/issues/1880/events
null
https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name}
2021-02-15T14:00:18Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1880.diff", "html_url": "https://github.com/huggingface/datasets/pull/1880", "merged_at": "2021-02-15T14:18:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1880.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1880" }
808,563,439
[]
https://api.github.com/repos/huggingface/datasets/issues/1880
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
As noticed in #1876, the checksums of this dataset are outdated. I updated them in this PR.
2021-02-15T14:18:19Z
https://github.com/huggingface/datasets/pull/1880
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1879/comments
https://api.github.com/repos/huggingface/datasets/issues/1879/timeline
2021-02-19T18:35:14Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx
closed
[]
false
1,879
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Replace flatten_nested
https://api.github.com/repos/huggingface/datasets/issues/1879/events
null
https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name}
2021-02-15T13:29:40Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1879.diff", "html_url": "https://github.com/huggingface/datasets/pull/1879", "merged_at": "2021-02-19T18:35:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/1879.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1879" }
808,541,442
[]
https://api.github.com/repos/huggingface/datasets/issues/1879
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Replace `flatten_nested` with `NestedDataStructure.flatten`. This is a first step towards having all NestedDataStructure logic as a separate concern, independent of the caller/user of the data structure. Eventually, all checks (whether the underlying data is a list, a dict, etc.) will live only inside this class. I have also generalized the flattening, so that it now handles multiple levels of nesting.
2021-02-19T18:35:14Z
https://github.com/huggingface/datasets/pull/1879
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions" }
true
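An illustrative sketch of flattening that handles arbitrary nesting depth, in the spirit of the generalization described above; this is not the actual `NestedDataStructure.flatten` implementation:

```python
def flatten(data):
    """Recursively flatten nested dicts/lists/tuples into a flat list of leaves."""
    if isinstance(data, dict):
        return [leaf for value in data.values() for leaf in flatten(value)]
    if isinstance(data, (list, tuple)):
        return [leaf for item in data for leaf in flatten(item)]
    return [data]

print(flatten({"a": [1, [2, 3]], "b": {"c": 4}}))  # [1, 2, 3, 4]
```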
https://api.github.com/repos/huggingface/datasets/issues/1878/comments
https://api.github.com/repos/huggingface/datasets/issues/1878/timeline
2021-02-15T14:18:09Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3
closed
[]
false
1,878
{ "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anton-l", "id": 26864830, "login": "anton-l", "node_id": "MDQ6VXNlcjI2ODY0ODMw", "organizations_url": "https://api.github.com/users/anton-l/orgs", "received_events_url": "https://api.github.com/users/anton-l/received_events", "repos_url": "https://api.github.com/users/anton-l/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "type": "User", "url": "https://api.github.com/users/anton-l" }
Add LJ Speech dataset
https://api.github.com/repos/huggingface/datasets/issues/1878/events
null
https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name}
2021-02-15T13:10:42Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1878.diff", "html_url": "https://github.com/huggingface/datasets/pull/1878", "merged_at": "2021-02-15T14:18:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/1878.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1878" }
808,526,883
[]
https://api.github.com/repos/huggingface/datasets/issues/1878
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/), as requested by #1841. The ASR format is based on #1767. There are a couple of quirks that should be addressed: - I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list? - Similarly to #1767, this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo? - The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well? Pinging @patrickvonplaten to review.
2021-02-15T19:39:41Z
https://github.com/huggingface/datasets/pull/1878
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1877/comments
https://api.github.com/repos/huggingface/datasets/issues/1877/timeline
2021-03-26T16:51:58Z
null
completed
MDU6SXNzdWU4MDg0NjIyNzI=
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
1,877
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Allow concatenation of both in-memory and on-disk datasets
https://api.github.com/repos/huggingface/datasets/issues/1877/events
null
https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name}
2021-02-15T11:39:46Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
null
808,462,272
[]
https://api.github.com/repos/huggingface/datasets/issues/1877
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickling, for example: - an in-memory dataset can just be pickled/unpickled in-memory - an on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogeneous sources like in-memory tables or on-disk tables? This could also be further extended in the future. One idea would be to define a list of sources, where each source implements a way to reload its corresponding pyarrow Table; the dataset would then be the concatenation of all these tables (a hypothetical sketch follows this record). Depending on the source type, the serialization using pickle would be different: in-memory data would be copied, while on-disk data would simply be replaced by the path to the data. If you have some ideas you would like to share about the design/API feel free to do so :) cc @albertvillanova
2021-03-26T16:51:58Z
https://github.com/huggingface/datasets/issues/1877
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions" }
false
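The hypothetical sketch referenced in the issue body above. Every name here is invented for illustration and does not reflect the implementation that was eventually merged:

```python
import pyarrow as pa

class InMemorySource:
    """Pickled by value: the table's data travels with the object."""
    def __init__(self, table: pa.Table):
        self.table = table

    def reload(self) -> pa.Table:
        return self.table

class OnDiskSource:
    """Pickled by path: only the file path is serialized."""
    def __init__(self, path: str):
        self.path = path

    def reload(self) -> pa.Table:
        with pa.memory_map(self.path) as source:
            return pa.ipc.open_stream(source).read_all()

def build_table(sources) -> pa.Table:
    # The dataset is the concatenation of whatever each source can rebuild.
    return pa.concat_tables(source.reload() for source in sources)
```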
https://api.github.com/repos/huggingface/datasets/issues/1876/comments
https://api.github.com/repos/huggingface/datasets/issues/1876/timeline
2021-08-04T18:08:00Z
null
completed
MDU6SXNzdWU4MDgwMjU4NTk=
closed
[]
null
1,876
{ "avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4", "events_url": "https://api.github.com/users/Vincent950129/events{/privacy}", "followers_url": "https://api.github.com/users/Vincent950129/followers", "following_url": "https://api.github.com/users/Vincent950129/following{/other_user}", "gists_url": "https://api.github.com/users/Vincent950129/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Vincent950129", "id": 5945326, "login": "Vincent950129", "node_id": "MDQ6VXNlcjU5NDUzMjY=", "organizations_url": "https://api.github.com/users/Vincent950129/orgs", "received_events_url": "https://api.github.com/users/Vincent950129/received_events", "repos_url": "https://api.github.com/users/Vincent950129/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Vincent950129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Vincent950129/subscriptions", "type": "User", "url": "https://api.github.com/users/Vincent950129" }
load_dataset("multi_woz_v22") NonMatchingChecksumError
https://api.github.com/repos/huggingface/datasets/issues/1876/events
null
https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name}
2021-02-14T19:14:48Z
null
false
null
null
808,025,859
[]
https://api.github.com/repos/huggingface/datasets/issues/1876
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError. To reproduce: `dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')` This will give the following error: ``` raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json'] ```
2021-08-04T18:08:00Z
https://github.com/huggingface/datasets/issues/1876
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions" }
false
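Two workarounds commonly suggested for stale-checksum errors, assuming the `load_dataset` flags available at the time; the proper fix is upgrading `datasets` so the refreshed checksums from #1880 are picked up:

```python
from datasets import load_dataset

# Skip checksum verification entirely (use with care):
ds = load_dataset("multi_woz_v22", "v2.2_active_only", split="train",
                  ignore_verifications=True)

# Or re-download so fresh files are verified against fresh checksums:
ds = load_dataset("multi_woz_v22", "v2.2_active_only", split="train",
                  download_mode="force_redownload")
```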
https://api.github.com/repos/huggingface/datasets/issues/1875/comments
https://api.github.com/repos/huggingface/datasets/issues/1875/timeline
2021-02-17T15:56:27Z
null
null
MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0
closed
[]
false
1,875
{ "avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4", "events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}", "followers_url": "https://api.github.com/users/ddhruvkr/followers", "following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}", "gists_url": "https://api.github.com/users/ddhruvkr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ddhruvkr", "id": 6061911, "login": "ddhruvkr", "node_id": "MDQ6VXNlcjYwNjE5MTE=", "organizations_url": "https://api.github.com/users/ddhruvkr/orgs", "received_events_url": "https://api.github.com/users/ddhruvkr/received_events", "repos_url": "https://api.github.com/users/ddhruvkr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ddhruvkr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ddhruvkr/subscriptions", "type": "User", "url": "https://api.github.com/users/ddhruvkr" }
Adding sari metric
https://api.github.com/repos/huggingface/datasets/issues/1875/events
null
https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name}
2021-02-14T04:38:35Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1875.diff", "html_url": "https://github.com/huggingface/datasets/pull/1875", "merged_at": "2021-02-17T15:56:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1875.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1875" }
807,887,267
[]
https://api.github.com/repos/huggingface/datasets/issues/1875
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Adding the SARI metric, which is used in the evaluation of text simplification. This is required as part of the GEM benchmark.
2021-02-17T15:56:27Z
https://github.com/huggingface/datasets/pull/1875
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions" }
true
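A sketch of computing the metric added in this PR; the argument names follow the usual SARI convention (sources, predictions, references) and are assumptions about the merged interface:

```python
from datasets import load_metric

sari = load_metric("sari")
score = sari.compute(
    sources=["About 95 species are currently accepted."],
    predictions=["About 95 species are currently known."],
    references=[["About 95 species are currently known.",
                 "About 95 species are now accepted."]],
)
print(score)  # e.g. {'sari': ...}
```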
https://api.github.com/repos/huggingface/datasets/issues/1874/comments
https://api.github.com/repos/huggingface/datasets/issues/1874/timeline
2021-03-04T10:38:22Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy
closed
[]
false
1,874
{ "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "events_url": "https://api.github.com/users/lucadiliello/events{/privacy}", "followers_url": "https://api.github.com/users/lucadiliello/followers", "following_url": "https://api.github.com/users/lucadiliello/following{/other_user}", "gists_url": "https://api.github.com/users/lucadiliello/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lucadiliello", "id": 23355969, "login": "lucadiliello", "node_id": "MDQ6VXNlcjIzMzU1OTY5", "organizations_url": "https://api.github.com/users/lucadiliello/orgs", "received_events_url": "https://api.github.com/users/lucadiliello/received_events", "repos_url": "https://api.github.com/users/lucadiliello/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lucadiliello/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lucadiliello/subscriptions", "type": "User", "url": "https://api.github.com/users/lucadiliello" }
Adding Europarl Bilingual dataset
https://api.github.com/repos/huggingface/datasets/issues/1874/events
null
https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name}
2021-02-13T17:02:04Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1874.diff", "html_url": "https://github.com/huggingface/datasets/pull/1874", "merged_at": "2021-03-04T10:38:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/1874.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1874" }
807,786,094
[]
https://api.github.com/repos/huggingface/datasets/issues/1874
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Implementation of the Europarl bilingual dataset described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases, about 1 in 10M, some keys reference nonexistent sentences). I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
2021-03-04T10:38:22Z
https://github.com/huggingface/datasets/pull/1874
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions" }
true
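A hedged sketch of loading one language pair from this dataset; the `lang1`/`lang2` keyword arguments and the sample layout are assumptions based on the "every language pair" description above:

```python
from datasets import load_dataset

# Assumed config mechanism: any pair of Europarl languages.
ds = load_dataset("europarl_bilingual", lang1="en", lang2="it", split="train")
print(ds[0])  # e.g. {'translation': {'en': '...', 'it': '...'}}
```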
https://api.github.com/repos/huggingface/datasets/issues/1873/comments
https://api.github.com/repos/huggingface/datasets/issues/1873/timeline
2021-02-16T14:21:58Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy
closed
[]
false
1,873
{ "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "events_url": "https://api.github.com/users/cstorm125/events{/privacy}", "followers_url": "https://api.github.com/users/cstorm125/followers", "following_url": "https://api.github.com/users/cstorm125/following{/other_user}", "gists_url": "https://api.github.com/users/cstorm125/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cstorm125", "id": 15519308, "login": "cstorm125", "node_id": "MDQ6VXNlcjE1NTE5MzA4", "organizations_url": "https://api.github.com/users/cstorm125/orgs", "received_events_url": "https://api.github.com/users/cstorm125/received_events", "repos_url": "https://api.github.com/users/cstorm125/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cstorm125/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cstorm125/subscriptions", "type": "User", "url": "https://api.github.com/users/cstorm125" }
add iapp_wiki_qa_squad
https://api.github.com/repos/huggingface/datasets/issues/1873/events
null
https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name}
2021-02-13T13:34:27Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1873.diff", "html_url": "https://github.com/huggingface/datasets/pull/1873", "merged_at": "2021-02-16T14:21:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1873.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1873" }
807,750,745
[]
https://api.github.com/repos/huggingface/datasets/issues/1873
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles.
2021-02-16T14:21:58Z
https://github.com/huggingface/datasets/pull/1873
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1872/comments
https://api.github.com/repos/huggingface/datasets/issues/1872/timeline
2021-03-30T14:01:45Z
null
completed
MDU6SXNzdWU4MDc3MTE5MzU=
closed
[]
null
1,872
{ "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/villmow", "id": 2743060, "login": "villmow", "node_id": "MDQ6VXNlcjI3NDMwNjA=", "organizations_url": "https://api.github.com/users/villmow/orgs", "received_events_url": "https://api.github.com/users/villmow/received_events", "repos_url": "https://api.github.com/users/villmow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "type": "User", "url": "https://api.github.com/users/villmow" }
Adding a new column to the dataset after set_format was called
https://api.github.com/repos/huggingface/datasets/issues/1872/events
null
https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name}
2021-02-13T09:14:35Z
null
false
null
null
807,711,935
[]
https://api.github.com/repos/huggingface/datasets/issues/1872
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`). Below is some pseudo code: ```python def augment_func(sample: Dict) -> Dict: # do something return { "some_integer_column1" : augmented_data["some_integer_column1"], # <-- tensor "some_integer_column2" : augmented_data["some_integer_column2"], # <-- tensor "NEW_COLUMN": targets, # <-- list of strings } data = datasets.load_dataset(__file__, data_dir="...", split="train") data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True) augmented_dataset = data.map(augment_func, batched=False) for sample in augmented_dataset: print(sample) # fails ``` and the exception: ```python Traceback (most recent call last): File "dataset.py", line 487, in <module> main() File "dataset.py", line 471, in main for sample in augmented_dataset: File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 697, in __iter__ yield self._getitem( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1069, in _getitem outputs = self._convert_outputs( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs v = map_nested(command, v, **map_nested_kwargs) File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 851, in command return torch.tensor(x, **format_kwargs) TypeError: new(): invalid data type 'str' ``` Thanks!
2021-03-30T14:01:45Z
https://github.com/huggingface/datasets/issues/1872
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1871/comments
https://api.github.com/repos/huggingface/datasets/issues/1871/timeline
2021-03-08T10:12:45Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz
closed
[]
false
1,871
{ "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/frankier", "id": 299380, "login": "frankier", "node_id": "MDQ6VXNlcjI5OTM4MA==", "organizations_url": "https://api.github.com/users/frankier/orgs", "received_events_url": "https://api.github.com/users/frankier/received_events", "repos_url": "https://api.github.com/users/frankier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "type": "User", "url": "https://api.github.com/users/frankier" }
Add newspop dataset
https://api.github.com/repos/huggingface/datasets/issues/1871/events
null
https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name}
2021-02-13T07:31:23Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1871.diff", "html_url": "https://github.com/huggingface/datasets/pull/1871", "merged_at": "2021-03-08T10:12:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/1871.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1871" }
807,697,671
[]
https://api.github.com/repos/huggingface/datasets/issues/1871
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
2021-03-08T10:12:45Z
https://github.com/huggingface/datasets/pull/1871
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1870/comments
https://api.github.com/repos/huggingface/datasets/issues/1870/timeline
2021-04-23T10:01:31Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4
closed
[]
false
1,870
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Implement Dataset add_item
https://api.github.com/repos/huggingface/datasets/issues/1870/events
{ "closed_at": "2021-05-31T16:20:53Z", "closed_issues": 3, "created_at": "2021-04-09T13:16:31Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-05-14T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/3", "id": 6644287, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "open_issues": 0, "state": "closed", "title": "1.7", "updated_at": "2021-05-31T16:20:53Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/3" }
https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name}
2021-02-12T15:03:46Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1870.diff", "html_url": "https://github.com/huggingface/datasets/pull/1870", "merged_at": "2021-04-23T10:01:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/1870.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1870" }
807,306,564
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/1870
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Implement `Dataset.add_item`. Close #1854.
2021-04-23T10:01:31Z
https://github.com/huggingface/datasets/pull/1870
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1869/comments
https://api.github.com/repos/huggingface/datasets/issues/1869/timeline
2021-02-12T16:13:08Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy
closed
[]
false
1,869
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Remove outdated commands in favor of huggingface-cli
https://api.github.com/repos/huggingface/datasets/issues/1869/events
null
https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name}
2021-02-12T11:28:10Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1869.diff", "html_url": "https://github.com/huggingface/datasets/pull/1869", "merged_at": "2021-02-12T16:13:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/1869.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1869" }
807,159,835
[]
https://api.github.com/repos/huggingface/datasets/issues/1869
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c
2021-02-12T16:13:09Z
https://github.com/huggingface/datasets/pull/1869
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1868/comments
https://api.github.com/repos/huggingface/datasets/issues/1868/timeline
2021-02-12T11:03:06Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0
closed
[]
false
1,868
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Update oscar sizes
https://api.github.com/repos/huggingface/datasets/issues/1868/events
null
https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name}
2021-02-12T10:55:35Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1868.diff", "html_url": "https://github.com/huggingface/datasets/pull/1868", "merged_at": "2021-02-12T11:03:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1868.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1868" }
807,138,159
[]
https://api.github.com/repos/huggingface/datasets/issues/1868
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan
2021-02-12T11:03:07Z
https://github.com/huggingface/datasets/pull/1868
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1867/comments
https://api.github.com/repos/huggingface/datasets/issues/1867/timeline
2021-02-24T12:00:43Z
null
completed
MDU6SXNzdWU4MDcxMjcxODE=
closed
[]
null
1,867
{ "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avacaondata", "id": 35173563, "login": "avacaondata", "node_id": "MDQ6VXNlcjM1MTczNTYz", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "repos_url": "https://api.github.com/users/avacaondata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "type": "User", "url": "https://api.github.com/users/avacaondata" }
ERROR WHEN USING SET_TRANSFORM()
https://api.github.com/repos/huggingface/datasets/issues/1867/events
null
https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name}
2021-02-12T10:38:31Z
null
false
null
null
807,127,181
[]
https://api.github.com/repos/huggingface/datasets/issues/1867
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, I'm trying to use `dataset.set_transform(encode)` as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797

However, when I try to use `Trainer` from transformers with such a dataset, it throws an error:

```
TypeError: __init__() missing 1 required positional argument: 'transform'
[INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text.
Exception in device=TPU:0: __init__() missing 1 required positional argument: 'transform'
Traceback (most recent call last):
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
    _start_fn(index, pf_cfg, fn, args)
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn
    fn(gindex, *args)
  File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 368, in _mp_fn
    main()
  File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 332, in main
    data_collator=data_collator,
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 286, in __init__
    self._remove_unused_columns(self.train_dataset, description="training")
  File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 359, in _remove_unused_columns
    dataset.set_format(type=dataset.format["type"], columns=columns)
  File "/home/alejandro_vaca/datasets/src/datasets/fingerprint.py", line 312, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/alejandro_vaca/datasets/src/datasets/arrow_dataset.py", line 818, in set_format
    _ = get_formatter(type, **format_kwargs)
  File "/home/alejandro_vaca/datasets/src/datasets/formatting/__init__.py", line 112, in get_formatter
    return _FORMAT_TYPES[format_type](**format_kwargs)
TypeError: __init__() missing 1 required positional argument: 'transform'
```

The code I'm using:

```python
def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length)

datasets.set_transform(tokenize_function)

data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability)

# Initialize our Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=datasets["train"] if training_args.do_train else None,
    eval_dataset=datasets["val"] if training_args.do_eval else None,
    tokenizer=tokenizer,
    data_collator=data_collator,
)
```

I've installed from source, master branch.
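A hedged reading of the traceback: `Trainer._remove_unused_columns` re-applies the dataset's format by type only, and the "custom" formatter created by `set_transform` cannot be rebuilt without its `transform` argument. One workaround sketch (an assumption, not a confirmed fix from this thread) is to stop the `Trainer` from touching the format at all:

```python
# Sketch: disable the column-pruning step whose set_format round-trip
# drops the `transform` argument. `output_dir` is a placeholder value.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",
    remove_unused_columns=False,  # skip _remove_unused_columns entirely
)
```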
2021-03-01T14:04:24Z
https://github.com/huggingface/datasets/issues/1867
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1866/comments
https://api.github.com/repos/huggingface/datasets/issues/1866/timeline
2021-02-17T14:22:36Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1
closed
[]
false
1,866
{ "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "events_url": "https://api.github.com/users/frankier/events{/privacy}", "followers_url": "https://api.github.com/users/frankier/followers", "following_url": "https://api.github.com/users/frankier/following{/other_user}", "gists_url": "https://api.github.com/users/frankier/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/frankier", "id": 299380, "login": "frankier", "node_id": "MDQ6VXNlcjI5OTM4MA==", "organizations_url": "https://api.github.com/users/frankier/orgs", "received_events_url": "https://api.github.com/users/frankier/received_events", "repos_url": "https://api.github.com/users/frankier/repos", "site_admin": false, "starred_url": "https://api.github.com/users/frankier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/frankier/subscriptions", "type": "User", "url": "https://api.github.com/users/frankier" }
Add dataset for Financial PhraseBank
https://api.github.com/repos/huggingface/datasets/issues/1866/events
null
https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name}
2021-02-12T07:30:56Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1866.diff", "html_url": "https://github.com/huggingface/datasets/pull/1866", "merged_at": "2021-02-17T14:22:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/1866.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1866" }
807,017,816
[]
https://api.github.com/repos/huggingface/datasets/issues/1866
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
2021-02-17T14:22:36Z
https://github.com/huggingface/datasets/pull/1866
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1865/comments
https://api.github.com/repos/huggingface/datasets/issues/1865/timeline
2021-02-12T16:59:44Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2
closed
[]
false
1,865
{ "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Valahaar", "id": 19476123, "login": "Valahaar", "node_id": "MDQ6VXNlcjE5NDc2MTIz", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "repos_url": "https://api.github.com/users/Valahaar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "type": "User", "url": "https://api.github.com/users/Valahaar" }
Updated OPUS Open Subtitles Dataset with metadata information
https://api.github.com/repos/huggingface/datasets/issues/1865/events
null
https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name}
2021-02-11T13:26:26Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1865.diff", "html_url": "https://github.com/huggingface/datasets/pull/1865", "merged_at": "2021-02-12T16:59:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/1865.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1865" }
806,388,290
[]
https://api.github.com/repos/huggingface/datasets/issues/1865
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Close #1844

Problems:
- I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be?
- Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e ".[dev]"` after the changes, and loading the dataset via `load_dataset("open_subtitles", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor doing `next(iter(dataset['train']))`). What step(s) did I miss?

Questions:
- Is it ok to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic...
2021-02-19T12:38:09Z
https://github.com/huggingface/datasets/pull/1865
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1864/comments
https://api.github.com/repos/huggingface/datasets/issues/1864/timeline
2021-02-11T08:19:51Z
null
completed
MDU6SXNzdWU4MDYxNzI4NDM=
closed
[]
null
1,864
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
Add Winogender Schemas
https://api.github.com/repos/huggingface/datasets/issues/1864/events
null
https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name}
2021-02-11T08:18:38Z
null
false
null
null
806,172,843
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
https://api.github.com/repos/huggingface/datasets/issues/1864
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset
- **Name:** Winogender Schemas
- **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems.
- **Paper:** https://arxiv.org/abs/1804.09301
- **Data:** https://github.com/rudinger/winogender-schemas (see data directory)
- **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-02-11T08:19:51Z
https://github.com/huggingface/datasets/issues/1864
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1863/comments
https://api.github.com/repos/huggingface/datasets/issues/1863/timeline
null
null
null
MDU6SXNzdWU4MDYxNzEzMTE=
open
[]
null
1,863
{ "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NielsRogge", "id": 48327001, "login": "NielsRogge", "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "repos_url": "https://api.github.com/users/NielsRogge/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "type": "User", "url": "https://api.github.com/users/NielsRogge" }
Add WikiCREM
https://api.github.com/repos/huggingface/datasets/issues/1863/events
null
https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name}
2021-02-11T08:16:00Z
null
false
null
null
806,171,311
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
https://api.github.com/repos/huggingface/datasets/issues/1863
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset
- **Name:** WikiCREM
- **Description:** A large unsupervised corpus for coreference resolution.
- **Paper:** https://arxiv.org/abs/1905.06290
- **GitHub repo:** https://github.com/vid-koci/bert-commonsense
- **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
- **Motivation:** Coreference resolution, common sense reasoning

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-03-07T07:27:13Z
https://github.com/huggingface/datasets/issues/1863
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1862/comments
https://api.github.com/repos/huggingface/datasets/issues/1862/timeline
2021-02-10T18:17:47Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx
closed
[]
false
1,862
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix writing GPU Faiss index
https://api.github.com/repos/huggingface/datasets/issues/1862/events
null
https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name}
2021-02-10T17:32:03Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1862.diff", "html_url": "https://github.com/huggingface/datasets/pull/1862", "merged_at": "2021-02-10T18:17:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/1862.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1862" }
805,722,293
[]
https://api.github.com/repos/huggingface/datasets/issues/1862
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
As reported by @corticalstack, there is currently an error when we try to save a faiss index on GPU. I fixed that by checking the index's `getDevice()` method before calling `index_gpu_to_cpu`.

Close #1859
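A minimal sketch of the check described above, assuming the standard faiss API where GPU indexes expose `getDevice()` and CPU indexes do not:

```python
import faiss

def save_index(index, path: str) -> None:
    # GPU indexes can't be serialized directly; copy them to CPU first.
    if hasattr(index, "getDevice") and index.getDevice() > -1:
        index = faiss.index_gpu_to_cpu(index)
    faiss.write_index(index, path)
```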
2021-02-10T18:17:48Z
https://github.com/huggingface/datasets/pull/1862
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1861/comments
https://api.github.com/repos/huggingface/datasets/issues/1861/timeline
2021-02-10T16:14:59Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1
closed
[]
false
1,861
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Fix Limit url
https://api.github.com/repos/huggingface/datasets/issues/1861/events
null
https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name}
2021-02-10T15:44:56Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1861.diff", "html_url": "https://github.com/huggingface/datasets/pull/1861", "merged_at": "2021-02-10T16:14:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/1861.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1861" }
805,631,215
[]
https://api.github.com/repos/huggingface/datasets/issues/1861
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was recently removed from the master branch of the repo at https://github.com/ilmgut/limit_dataset. This PR uses the previous commit sha to download the file instead, as suggested by @Paethon.

Close #1836
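For illustration, pinning a download URL to an immutable commit looks roughly like this; the sha and file path below are placeholders, not the ones used in the PR:

```python
# Placeholder sha and path; pinning to a commit keeps the download (and its
# checksum) stable even if the file is later removed from master.
_COMMIT_SHA = "0123456789abcdef0123456789abcdef01234567"
_URL = f"https://github.com/ilmgut/limit_dataset/raw/{_COMMIT_SHA}/test.json"
```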
2021-02-10T16:15:00Z
https://github.com/huggingface/datasets/pull/1861
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1860/comments
https://api.github.com/repos/huggingface/datasets/issues/1860/timeline
2021-02-12T19:13:29Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz
closed
[]
false
1,860
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Add loading from the Datasets Hub + add relative paths in download manager
https://api.github.com/repos/huggingface/datasets/issues/1860/events
null
https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name}
2021-02-10T13:24:11Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1860.diff", "html_url": "https://github.com/huggingface/datasets/pull/1860", "merged_at": "2021-02-12T19:13:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/1860.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1860" }
805,510,037
[]
https://api.github.com/repos/huggingface/datasets/issues/1860
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files.

You can load it using

```python
from datasets import load_dataset

d = load_dataset("lhoestq/custom_squad")
```

To be able to use the data files that live right next to the dataset script on the repo in the hub, I added relative paths support for the DownloadManager. For example in the repo mentioned above, there are two json files that can be downloaded via

```python
_URLS = {
    "train": "train-v1.1.json",
    "dev": "dev-v1.1.json",
}
downloaded_files = dl_manager.download_and_extract(_URLS)
```

To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url). I also had to add the auth header to the requests to huggingface.co for private dataset repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub).
2021-02-12T19:13:30Z
https://github.com/huggingface/datasets/pull/1860
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1859/comments
https://api.github.com/repos/huggingface/datasets/issues/1859/timeline
2021-02-10T18:17:47Z
null
completed
MDU6SXNzdWU4MDU0NzkwMjU=
closed
[]
null
1,859
{ "avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4", "events_url": "https://api.github.com/users/corticalstack/events{/privacy}", "followers_url": "https://api.github.com/users/corticalstack/followers", "following_url": "https://api.github.com/users/corticalstack/following{/other_user}", "gists_url": "https://api.github.com/users/corticalstack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/corticalstack", "id": 3995321, "login": "corticalstack", "node_id": "MDQ6VXNlcjM5OTUzMjE=", "organizations_url": "https://api.github.com/users/corticalstack/orgs", "received_events_url": "https://api.github.com/users/corticalstack/received_events", "repos_url": "https://api.github.com/users/corticalstack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/corticalstack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/corticalstack/subscriptions", "type": "User", "url": "https://api.github.com/users/corticalstack" }
Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
https://api.github.com/repos/huggingface/datasets/issues/1859/events
null
https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name}
2021-02-10T12:41:00Z
null
false
null
null
805,479,025
[]
https://api.github.com/repos/huggingface/datasets/issues/1859
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Error serializing faiss index. Error as follows:

`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`

Note: `torch.cuda.is_available()` reports:

```
Cuda is available
cuda:0
```

Adding the index with device=0 for GPU:

`dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)`

However, during a quick debug, `self.faiss_index` has no attr "device" when checked in `search.py`, method `save`, so it fails to transform the GPU index to a CPU index. If I add the index without a device, the index is saved OK.

```python
def save(self, file: str):
    """Serialize the FaissIndex on disk"""
    import faiss  # noqa: F811

    if (
        hasattr(self.faiss_index, "device")
        and self.faiss_index.device is not None
        and self.faiss_index.device > -1
    ):
        index = faiss.index_gpu_to_cpu(self.faiss_index)
    else:
        index = self.faiss_index
    faiss.write_index(index, file)
```
2021-02-10T18:32:12Z
https://github.com/huggingface/datasets/issues/1859
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1858/comments
https://api.github.com/repos/huggingface/datasets/issues/1858/timeline
2021-02-10T15:52:29Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx
closed
[]
false
1,858
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Clean config getenvs
https://api.github.com/repos/huggingface/datasets/issues/1858/events
null
https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name}
2021-02-10T12:39:14Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1858.diff", "html_url": "https://github.com/huggingface/datasets/pull/1858", "merged_at": "2021-02-10T15:52:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/1858.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1858" }
805,477,774
[]
https://api.github.com/repos/huggingface/datasets/issues/1858
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Following #1848, this removes double getenv calls and fixes one issue with rarfile.

cc @albertvillanova
2021-02-10T15:52:30Z
https://github.com/huggingface/datasets/pull/1858
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1857/comments
https://api.github.com/repos/huggingface/datasets/issues/1857/timeline
2021-08-03T05:06:13Z
null
completed
MDU6SXNzdWU4MDUzOTExMDc=
closed
[]
null
1,857
{ "avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4", "events_url": "https://api.github.com/users/mwrzalik/events{/privacy}", "followers_url": "https://api.github.com/users/mwrzalik/followers", "following_url": "https://api.github.com/users/mwrzalik/following{/other_user}", "gists_url": "https://api.github.com/users/mwrzalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mwrzalik", "id": 1376337, "login": "mwrzalik", "node_id": "MDQ6VXNlcjEzNzYzMzc=", "organizations_url": "https://api.github.com/users/mwrzalik/orgs", "received_events_url": "https://api.github.com/users/mwrzalik/received_events", "repos_url": "https://api.github.com/users/mwrzalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mwrzalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mwrzalik/subscriptions", "type": "User", "url": "https://api.github.com/users/mwrzalik" }
Unable to upload "community provided" dataset - 400 Client Error
https://api.github.com/repos/huggingface/datasets/issues/1857/events
null
https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name}
2021-02-10T10:39:01Z
null
false
null
null
805,391,107
[]
https://api.github.com/repos/huggingface/datasets/issues/1857
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Hi, I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:

```
$ datasets-cli login
$ datasets-cli upload_dataset my_dataset
About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username
About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username
Proceed? [Y/n] Y
Uploading... This might take a while if files are large
400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign
huggingface.co migrated to a new model hosting system.
You need to upgrade to transformers v3.5+ to upload new models.
More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you!
```

I'm using the latest releases of datasets and transformers.
2021-08-03T05:06:13Z
https://github.com/huggingface/datasets/issues/1857
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1856/comments
https://api.github.com/repos/huggingface/datasets/issues/1856/timeline
2022-03-15T13:55:23Z
null
completed
MDU6SXNzdWU4MDUzNjAyMDA=
closed
[]
null
1,856
{ "avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4", "events_url": "https://api.github.com/users/yanxi0830/events{/privacy}", "followers_url": "https://api.github.com/users/yanxi0830/followers", "following_url": "https://api.github.com/users/yanxi0830/following{/other_user}", "gists_url": "https://api.github.com/users/yanxi0830/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yanxi0830", "id": 19946372, "login": "yanxi0830", "node_id": "MDQ6VXNlcjE5OTQ2Mzcy", "organizations_url": "https://api.github.com/users/yanxi0830/orgs", "received_events_url": "https://api.github.com/users/yanxi0830/received_events", "repos_url": "https://api.github.com/users/yanxi0830/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yanxi0830/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yanxi0830/subscriptions", "type": "User", "url": "https://api.github.com/users/yanxi0830" }
load_dataset("amazon_polarity") NonMatchingChecksumError
https://api.github.com/repos/huggingface/datasets/issues/1856/events
null
https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name}
2021-02-10T10:00:56Z
null
false
null
null
805,360,200
[]
https://api.github.com/repos/huggingface/datasets/issues/1856
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.

To reproduce:

```
load_dataset("amazon_polarity")
```

This will give the following error:

```
---------------------------------------------------------------------------
NonMatchingChecksumError                  Traceback (most recent call last)
<ipython-input-3-8559a03fe0f8> in <module>()
----> 1 dataset = load_dataset("amazon_polarity")

3 frames
/usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     37     if len(bad_urls) > 0:
     38         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     40     logger.info("All the checksums matched successfully" + for_verification_name)
     41 

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download']
```
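A hedged workaround sketch, assuming the hosted Google Drive file changed and the recorded checksum is simply stale (on the `datasets` versions of this era the relevant flag was `ignore_verifications`):

```python
from datasets import load_dataset

ds = load_dataset(
    "amazon_polarity",
    ignore_verifications=True,         # skip the checksum comparison
    download_mode="force_redownload",  # discard any partial cached download
)
```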
2022-03-15T13:55:24Z
https://github.com/huggingface/datasets/issues/1856
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1855/comments
https://api.github.com/repos/huggingface/datasets/issues/1855/timeline
2021-02-10T12:33:09Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3
closed
[]
false
1,855
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Minor fix in the docs
https://api.github.com/repos/huggingface/datasets/issues/1855/events
null
https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name}
2021-02-10T07:27:43Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1855.diff", "html_url": "https://github.com/huggingface/datasets/pull/1855", "merged_at": "2021-02-10T12:33:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/1855.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1855" }
805,256,579
[]
https://api.github.com/repos/huggingface/datasets/issues/1855
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
2021-02-10T12:33:09Z
https://github.com/huggingface/datasets/pull/1855
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1854/comments
https://api.github.com/repos/huggingface/datasets/issues/1854/timeline
2021-04-23T10:01:30Z
null
completed
MDU6SXNzdWU4MDUyMDQzOTc=
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1,854
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
Feature Request: Dataset.add_item
https://api.github.com/repos/huggingface/datasets/issues/1854/events
null
https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name}
2021-02-10T06:06:00Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
null
805,204,397
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
https://api.github.com/repos/huggingface/datasets/issues/1854
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.map(binarizer)`. Is this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries.

### Desired API

```python
import numpy as np
from typing import List

tokenized: List[np.ndarray] = [np.array([4, 4, 2]), np.array([8, 6, 5, 5, 2]), np.array([3, 3, 31, 5])]

def build_dataset_from_tokenized(tokenized: List[np.ndarray]) -> Dataset:
    """FIXME"""
    dataset = EmptyDataset()
    for t in tokenized:
        dataset.append(t)
    return dataset

ds = build_dataset_from_tokenized(tokenized)
assert (ds[0] == np.array([4, 4, 2])).all()
```

### What I tried

grep, google for "add one entry at a time", "datasets.append"

### Current Code

This code achieves the same result but doesn't fit into the `add_item` abstraction.

```python
dataset = load_dataset('text', data_files={'train': 'train.txt'})
tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096)

def tokenize_function(examples):
    ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids']
    return {'input_ids': [x[1:] for x in ids]}

ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache)

print(ds['train'][0])  # => np array
```

Thanks in advance!
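For comparison, a sketch of the closest one-shot construction available before `add_item` landed, under the assumption that `Dataset.from_dict` accepts the uneven-length lists as a sequence feature:

```python
import numpy as np
from datasets import Dataset

tokenized = [np.array([4, 4, 2]), np.array([8, 6, 5, 5, 2]), np.array([3, 3, 31, 5])]

# Build the whole dataset at once instead of incremental add_item calls.
ds = Dataset.from_dict({"input_ids": [t.tolist() for t in tokenized]})
assert ds[0]["input_ids"] == [4, 4, 2]
```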
2021-04-23T10:01:30Z
https://github.com/huggingface/datasets/issues/1854
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1853/comments
https://api.github.com/repos/huggingface/datasets/issues/1853/timeline
2021-02-10T12:32:34Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4
closed
[]
false
1,853
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Configure library root logger at the module level
https://api.github.com/repos/huggingface/datasets/issues/1853/events
null
https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name}
2021-02-09T18:11:12Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1853.diff", "html_url": "https://github.com/huggingface/datasets/pull/1853", "merged_at": "2021-02-10T12:32:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/1853.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1853" }
804,791,166
[]
https://api.github.com/repos/huggingface/datasets/issues/1853
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Configure the library root logger at the `datasets.logging` module level (singleton-like).

By doing it this way:
- we are sure configuration is done only once: module-level code is only run once
- no need for a global variable
- no need for a threading lock
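A minimal sketch of the pattern (the names are illustrative, not necessarily the exact ones from the PR):

```python
import logging

_library_root_logger = logging.getLogger("datasets")

def _configure_library_root_logger() -> None:
    handler = logging.StreamHandler()  # defaults to sys.stderr
    _library_root_logger.addHandler(handler)
    _library_root_logger.setLevel(logging.WARNING)

# Module-level statements run exactly once per interpreter, at first import,
# so no global flag or lock is needed to guard against re-configuration.
_configure_library_root_logger()
```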
2021-02-10T12:32:34Z
https://github.com/huggingface/datasets/pull/1853
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1852/comments
https://api.github.com/repos/huggingface/datasets/issues/1852/timeline
2021-02-11T10:18:55Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1
closed
[]
false
1,852
{ "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}", "followers_url": "https://api.github.com/users/zaidalyafeai/followers", "following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}", "gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zaidalyafeai", "id": 15667714, "login": "zaidalyafeai", "node_id": "MDQ6VXNlcjE1NjY3NzE0", "organizations_url": "https://api.github.com/users/zaidalyafeai/orgs", "received_events_url": "https://api.github.com/users/zaidalyafeai/received_events", "repos_url": "https://api.github.com/users/zaidalyafeai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions", "type": "User", "url": "https://api.github.com/users/zaidalyafeai" }
Add Arabic Speech Corpus
https://api.github.com/repos/huggingface/datasets/issues/1852/events
null
https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name}
2021-02-09T15:02:26Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1852.diff", "html_url": "https://github.com/huggingface/datasets/pull/1852", "merged_at": "2021-02-11T10:18:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/1852.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1852" }
804,633,033
[]
https://api.github.com/repos/huggingface/datasets/issues/1852
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
2021-02-11T10:18:55Z
https://github.com/huggingface/datasets/pull/1852
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1851/comments
https://api.github.com/repos/huggingface/datasets/issues/1851/timeline
2021-02-09T14:21:48Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5
closed
[]
false
1,851
{ "avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4", "events_url": "https://api.github.com/users/pvl/events{/privacy}", "followers_url": "https://api.github.com/users/pvl/followers", "following_url": "https://api.github.com/users/pvl/following{/other_user}", "gists_url": "https://api.github.com/users/pvl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pvl", "id": 3596, "login": "pvl", "node_id": "MDQ6VXNlcjM1OTY=", "organizations_url": "https://api.github.com/users/pvl/orgs", "received_events_url": "https://api.github.com/users/pvl/received_events", "repos_url": "https://api.github.com/users/pvl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pvl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvl/subscriptions", "type": "User", "url": "https://api.github.com/users/pvl" }
set bert_score version dependency
https://api.github.com/repos/huggingface/datasets/issues/1851/events
null
https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name}
2021-02-09T12:51:07Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1851.diff", "html_url": "https://github.com/huggingface/datasets/pull/1851", "merged_at": "2021-02-09T14:21:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1851.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1851" }
804,523,174
[]
https://api.github.com/repos/huggingface/datasets/issues/1851
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
2021-02-09T14:21:48Z
https://github.com/huggingface/datasets/pull/1851
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1850/comments
https://api.github.com/repos/huggingface/datasets/issues/1850/timeline
2021-02-09T15:16:26Z
null
null
MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx
closed
[]
false
1,850
{ "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "events_url": "https://api.github.com/users/ggdupont/events{/privacy}", "followers_url": "https://api.github.com/users/ggdupont/followers", "following_url": "https://api.github.com/users/ggdupont/following{/other_user}", "gists_url": "https://api.github.com/users/ggdupont/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ggdupont", "id": 5583410, "login": "ggdupont", "node_id": "MDQ6VXNlcjU1ODM0MTA=", "organizations_url": "https://api.github.com/users/ggdupont/orgs", "received_events_url": "https://api.github.com/users/ggdupont/received_events", "repos_url": "https://api.github.com/users/ggdupont/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ggdupont/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ggdupont/subscriptions", "type": "User", "url": "https://api.github.com/users/ggdupont" }
Add cord 19 dataset
https://api.github.com/repos/huggingface/datasets/issues/1850/events
null
https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name}
2021-02-09T10:22:08Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1850.diff", "html_url": "https://github.com/huggingface/datasets/pull/1850", "merged_at": "2021-02-09T15:16:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/1850.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1850" }
804,412,249
[]
https://api.github.com/repos/huggingface/datasets/issues/1850
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _info(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template and at least fill the tags - [x] Both tests for the real data and the dummy data pass. ### Extras: - [x] add more metadata - [x] add full text - [x] add pre-computed document embedding
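For readers unfamiliar with the template, a minimal sketch of the script skeleton the checklist refers to; the class name, URL and fields are illustrative, not the actual cord19 implementation:

```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    """Illustrative skeleton, not the real cord19 script."""

    def _info(self):
        return datasets.DatasetInfo(
            description="...",
            features=datasets.Features({"text": datasets.Value("string")}),
        )

    def _split_generators(self, dl_manager):
        # hypothetical URL, for illustration only
        path = dl_manager.download_and_extract("https://example.com/metadata.csv")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": path}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```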
2021-02-09T15:16:26Z
https://github.com/huggingface/datasets/pull/1850
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1849/comments
https://api.github.com/repos/huggingface/datasets/issues/1849/timeline
2021-03-15T05:59:37Z
null
completed
MDU6SXNzdWU4MDQyOTI5NzE=
closed
[]
null
1,849
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add TIMIT
https://api.github.com/repos/huggingface/datasets/issues/1849/events
null
https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name}
2021-02-09T07:29:41Z
null
false
null
null
804,292,971
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1849
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT - **Data:** *https://deepai.org/dataset/timit* - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-03-15T05:59:37Z
https://github.com/huggingface/datasets/issues/1849
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1848/comments
https://api.github.com/repos/huggingface/datasets/issues/1848/timeline
2021-02-10T12:29:35Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1
closed
[]
false
1,848
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Refactoring: Create config module
https://api.github.com/repos/huggingface/datasets/issues/1848/events
null
https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name}
2021-02-08T18:43:51Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1848.diff", "html_url": "https://github.com/huggingface/datasets/pull/1848", "merged_at": "2021-02-10T12:29:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/1848.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1848" }
803,826,506
[]
https://api.github.com/repos/huggingface/datasets/issues/1848
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Refactor configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created.
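A minimal sketch of the singleton-like config-module idea; the constant names are illustrative, not the library's actual settings:

```python
# config.py -- module-level settings; imported modules are cached, so every
# `from . import config` sees the same initialized values (singleton-like)
import os
from pathlib import Path

HF_DATASETS_CACHE = Path(
    os.getenv("HF_DATASETS_CACHE", Path.home() / ".cache" / "huggingface" / "datasets")
)
IN_MEMORY_MAX_SIZE = int(os.getenv("IN_MEMORY_MAX_SIZE", 0))
```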
2021-02-10T12:29:35Z
https://github.com/huggingface/datasets/pull/1848
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1847/comments
https://api.github.com/repos/huggingface/datasets/issues/1847/timeline
2021-02-09T17:53:21Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0
closed
[]
false
1,847
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[Metrics] Add word error rate metric
https://api.github.com/repos/huggingface/datasets/issues/1847/events
null
https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name}
2021-02-08T18:41:15Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1847.diff", "html_url": "https://github.com/huggingface/datasets/pull/1847", "merged_at": "2021-02-09T17:53:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1847.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1847" }
803,824,694
[]
https://api.github.com/repos/huggingface/datasets/issues/1847
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
This PR adds the word error rate (WER) metric to datasets (see https://en.wikipedia.org/wiki/Word_error_rate) for speech recognition. WER is the main metric used in ASR. `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
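A minimal sketch of what the metric computes, using `jiwer` directly (the backend this PR relies on); the example sentences are made up:

```python
import jiwer

references = ["hello world", "the quick brown fox"]
predictions = ["hello word", "the quick brown fox"]

# WER = (substitutions + deletions + insertions) / number of reference words
print(jiwer.wer(references, predictions))  # 1 error over 6 words -> ~0.167
```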
2021-02-09T17:53:21Z
https://github.com/huggingface/datasets/pull/1847
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1846/comments
https://api.github.com/repos/huggingface/datasets/issues/1846/timeline
2021-02-25T14:10:18Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy
closed
[]
false
1,846
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Make DownloadManager downloaded/extracted paths accessible
https://api.github.com/repos/huggingface/datasets/issues/1846/events
null
https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name}
2021-02-08T18:14:42Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1846.diff", "html_url": "https://github.com/huggingface/datasets/pull/1846", "merged_at": "2021-02-25T14:10:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/1846.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1846" }
803,806,380
[]
https://api.github.com/repos/huggingface/datasets/issues/1846
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Make the file paths downloaded/extracted by DownloadManager accessible. Close #1831. The approach: - I set these paths as DownloadManager attributes: these are DownloadManager's concerns - To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition
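A toy sketch of the composition approach; class and attribute names are illustrative, not the exact library internals:

```python
class DownloadManager:
    def __init__(self):
        self.downloaded_paths = {}  # url -> local file path
        self.extracted_paths = {}   # archive path -> extracted directory

    def download_and_extract(self, url):
        local = f"/cache/{abs(hash(url))}"    # stand-in for the real download
        self.downloaded_paths[url] = local
        extracted = local + ".extracted"      # stand-in for the real extraction
        self.extracted_paths[local] = extracted
        return extracted


class DatasetBuilder:
    def download_and_prepare(self):
        # keeping the manager as an attribute makes its recorded paths
        # reachable after preparation: builder.dl_manager.downloaded_paths
        self.dl_manager = DownloadManager()
        return self.dl_manager.download_and_extract("https://example.com/data.zip")
```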
2021-02-25T14:10:18Z
https://github.com/huggingface/datasets/pull/1846
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1845/comments
https://api.github.com/repos/huggingface/datasets/issues/1845/timeline
2021-02-09T14:22:37Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz
closed
[]
false
1,845
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Enable logging propagation and remove logging handler
https://api.github.com/repos/huggingface/datasets/issues/1845/events
null
https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name}
2021-02-08T16:22:13Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1845.diff", "html_url": "https://github.com/huggingface/datasets/pull/1845", "merged_at": "2021-02-09T14:22:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/1845.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1845" }
803,714,493
[]
https://api.github.com/repos/huggingface/datasets/issues/1845
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691 But since it's now fixed, we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826. I also removed the handler that was added, since according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library): > It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements. It could have been useful if we wanted a custom formatter for the logging, but I think it's more important to keep the default logging behavior so as not to interfere with the users' logging management. Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`. cc @albertvillanova this should let you use capsys/caplog in pytest cc @LysandreJik @sgugger if you want to do the same in `transformers`
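A minimal sketch of the pattern the quoted documentation recommends:

```python
import logging

# library side: attach only a NullHandler and let records propagate
logger = logging.getLogger("datasets")
logger.addHandler(logging.NullHandler())
logger.propagate = True  # the default, shown for emphasis

# application side (or a pytest caplog/capsys fixture): configure handlers
logging.basicConfig(level=logging.INFO)
logger.info("handled by the application's root handler, not the library's")
```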
2021-02-09T14:22:38Z
https://github.com/huggingface/datasets/pull/1845
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1844/comments
https://api.github.com/repos/huggingface/datasets/issues/1844/timeline
2021-02-12T17:38:58Z
null
completed
MDU6SXNzdWU4MDM1ODgxMjU=
closed
[]
null
1,844
{ "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Valahaar", "id": 19476123, "login": "Valahaar", "node_id": "MDQ6VXNlcjE5NDc2MTIz", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "repos_url": "https://api.github.com/users/Valahaar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "type": "User", "url": "https://api.github.com/users/Valahaar" }
Update Open Subtitles corpus with original sentence IDs
https://api.github.com/repos/huggingface/datasets/issues/1844/events
null
https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name}
2021-02-08T13:55:13Z
null
false
null
null
803,588,125
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
https://api.github.com/repos/huggingface/datasets/issues/1844
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts. I think I should tag @abhishekkrthakur as he's the one who added it in the first place. Thanks!
2021-02-12T17:38:58Z
https://github.com/huggingface/datasets/issues/1844
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1843/comments
https://api.github.com/repos/huggingface/datasets/issues/1843/timeline
null
null
null
MDU6SXNzdWU4MDM1NjUzOTM=
open
[]
null
1,843
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
MustC Speech Translation
https://api.github.com/repos/huggingface/datasets/issues/1843/events
null
https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name}
2021-02-08T13:27:45Z
null
false
null
null
803,565,393
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1843
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Homepage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - all data under "Allowed Training Data" and "Development and Evaluation Data for TED/How2" - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-05-14T14:53:34Z
https://github.com/huggingface/datasets/issues/1843
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1842/comments
https://api.github.com/repos/huggingface/datasets/issues/1842/timeline
2023-02-28T16:29:22Z
null
completed
MDU6SXNzdWU4MDM1NjMxNDk=
closed
[]
null
1,842
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add AMI Corpus
https://api.github.com/repos/huggingface/datasets/issues/1842/events
null
https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name}
2021-02-08T13:25:00Z
null
false
null
null
803,563,149
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1842
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *AMI* - **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ - **Data:** *http://groups.inf.ed.ac.uk/ami/download/* - Select all cases in 1) and select "Individual Headsets" & "Microphone array" for 2) - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2023-02-28T16:29:22Z
https://github.com/huggingface/datasets/issues/1842
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1841/comments
https://api.github.com/repos/huggingface/datasets/issues/1841/timeline
2021-03-15T05:59:02Z
null
completed
MDU6SXNzdWU4MDM1NjExMjM=
closed
[]
null
1,841
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add ljspeech
https://api.github.com/repos/huggingface/datasets/issues/1841/events
null
https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name}
2021-02-08T13:22:26Z
null
false
null
null
803,561,123
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1841
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *ljspeech* - **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours. The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.* - **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/ - **Data:** *https://keithito.com/LJ-Speech-Dataset/* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-03-15T05:59:02Z
https://github.com/huggingface/datasets/issues/1841
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1840/comments
https://api.github.com/repos/huggingface/datasets/issues/1840/timeline
2021-03-15T05:56:21Z
null
completed
MDU6SXNzdWU4MDM1NjAwMzk=
closed
[]
null
1,840
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add common voice
https://api.github.com/repos/huggingface/datasets/issues/1840/events
null
https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name}
2021-02-08T13:21:05Z
null
false
null
null
803,560,039
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1840
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2022-03-20T15:23:40Z
https://github.com/huggingface/datasets/issues/1840
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1839/comments
https://api.github.com/repos/huggingface/datasets/issues/1839/timeline
null
null
null
MDU6SXNzdWU4MDM1NTkxNjQ=
open
[]
null
1,839
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add Voxforge
https://api.github.com/repos/huggingface/datasets/issues/1839/events
null
https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name}
2021-02-08T13:19:56Z
null
false
null
null
803,559,164
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1839
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *voxforge* - **Description:** *VoxForge is a language classification dataset. It consists of user-submitted audio clips uploaded to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constantly updated, and for the sake of reproducibility, this release contains only recordings submitted prior to 2020-01-01. The samples are split between train, validation and test so that samples from each speaker belong to exactly one split.* - **Paper:** *Homepage*: http://www.voxforge.org/ - **Data:** *http://www.voxforge.org/home/downloads* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/voxforge If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-02-08T13:28:31Z
https://github.com/huggingface/datasets/issues/1839
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1838/comments
https://api.github.com/repos/huggingface/datasets/issues/1838/timeline
2022-10-04T14:34:12Z
null
completed
MDU6SXNzdWU4MDM1NTc1MjE=
closed
[]
null
1,838
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add tedlium
https://api.github.com/repos/huggingface/datasets/issues/1838/events
null
https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name}
2021-02-08T13:17:52Z
null
false
null
null
803,557,521
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1838
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus consists of English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ and https://www.openslr.org/51/ - **Data:** http://www.openslr.org/7/ - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/tedlium If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2022-10-04T14:34:12Z
https://github.com/huggingface/datasets/issues/1838
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1837/comments
https://api.github.com/repos/huggingface/datasets/issues/1837/timeline
2021-12-28T15:05:08Z
null
completed
MDU6SXNzdWU4MDM1NTU2NTA=
closed
[]
null
1,837
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add VCTK
https://api.github.com/repos/huggingface/datasets/issues/1837/events
null
https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name}
2021-02-08T13:15:28Z
null
false
null
null
803,555,650
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1837
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** *VCTK* - **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.* - **Paper:** Homepage: https://datashare.ed.ac.uk/handle/10283/3443 - **Data:** https://datashare.ed.ac.uk/handle/10283/3443 - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/vctk If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2021-12-28T15:05:08Z
https://github.com/huggingface/datasets/issues/1837
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1836/comments
https://api.github.com/repos/huggingface/datasets/issues/1836/timeline
2021-02-10T16:14:58Z
null
completed
MDU6SXNzdWU4MDM1MzE4Mzc=
closed
[]
null
1,836
{ "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Paethon", "id": 237550, "login": "Paethon", "node_id": "MDQ6VXNlcjIzNzU1MA==", "organizations_url": "https://api.github.com/users/Paethon/orgs", "received_events_url": "https://api.github.com/users/Paethon/received_events", "repos_url": "https://api.github.com/users/Paethon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "type": "User", "url": "https://api.github.com/users/Paethon" }
test.json has been removed from the limit dataset repo (breaks dataset)
https://api.github.com/repos/huggingface/datasets/issues/1836/events
null
https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name}
2021-02-08T12:45:53Z
null
false
null
null
803,531,837
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
https://api.github.com/repos/huggingface/datasets/issues/1836
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51 The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works: `https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd8848f0f11527c77dcf168fefd2b23/data`
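The fix amounts to pinning the data URL to an immutable commit instead of the mutable master branch, roughly as below (only test.json is confirmed in the report above; other file names would follow the same pattern):

```python
_BASE_URL = (
    "https://raw.githubusercontent.com/ilmgut/limit_dataset/"
    "0707d3989cd8848f0f11527c77dcf168fefd2b23/data"
)
_URLS = {"test": f"{_BASE_URL}/test.json"}  # pinned: immune to later deletions
```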
2021-02-10T16:14:58Z
https://github.com/huggingface/datasets/issues/1836
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1835/comments
https://api.github.com/repos/huggingface/datasets/issues/1835/timeline
null
null
null
MDU6SXNzdWU4MDM1MjQ3OTA=
open
[]
null
1,835
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
Add CHiME4 dataset
https://api.github.com/repos/huggingface/datasets/issues/1835/events
null
https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name}
2021-02-08T12:36:38Z
null
false
null
null
803,524,790
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "d93f0b", "default": false, "description": "", "id": 2725241052, "name": "speech", "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech" } ]
https://api.github.com/repos/huggingface/datasets/issues/1835
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Adding a Dataset - **Name:** Chime4 - **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR - **Paper:** Dataset comes from a challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results paper: - **Data:** http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/download.html - **Motivation:** So far there are very few speech datasets in `datasets`: only `librispeech_asr`. If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
2024-02-01T10:25:03Z
https://github.com/huggingface/datasets/issues/1835
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1834/comments
https://api.github.com/repos/huggingface/datasets/issues/1834/timeline
2021-02-08T12:42:50Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4
closed
[]
false
1,834
{ "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "events_url": "https://api.github.com/users/Paethon/events{/privacy}", "followers_url": "https://api.github.com/users/Paethon/followers", "following_url": "https://api.github.com/users/Paethon/following{/other_user}", "gists_url": "https://api.github.com/users/Paethon/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Paethon", "id": 237550, "login": "Paethon", "node_id": "MDQ6VXNlcjIzNzU1MA==", "organizations_url": "https://api.github.com/users/Paethon/orgs", "received_events_url": "https://api.github.com/users/Paethon/received_events", "repos_url": "https://api.github.com/users/Paethon/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Paethon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Paethon/subscriptions", "type": "User", "url": "https://api.github.com/users/Paethon" }
Fixes base_url of limit dataset
https://api.github.com/repos/huggingface/datasets/issues/1834/events
null
https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name}
2021-02-08T12:26:35Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1834.diff", "html_url": "https://github.com/huggingface/datasets/pull/1834", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1834.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1834" }
803,517,094
[]
https://api.github.com/repos/huggingface/datasets/issues/1834
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
`test.json` is not available in the master branch of the repository anymore. Linking to a specific commit.
2021-02-08T12:42:50Z
https://github.com/huggingface/datasets/pull/1834
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1833/comments
https://api.github.com/repos/huggingface/datasets/issues/1833/timeline
2021-02-12T14:08:24Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx
closed
[]
false
1,833
{ "avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4", "events_url": "https://api.github.com/users/pjox/events{/privacy}", "followers_url": "https://api.github.com/users/pjox/followers", "following_url": "https://api.github.com/users/pjox/following{/other_user}", "gists_url": "https://api.github.com/users/pjox/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pjox", "id": 635220, "login": "pjox", "node_id": "MDQ6VXNlcjYzNTIyMA==", "organizations_url": "https://api.github.com/users/pjox/orgs", "received_events_url": "https://api.github.com/users/pjox/received_events", "repos_url": "https://api.github.com/users/pjox/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pjox/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pjox/subscriptions", "type": "User", "url": "https://api.github.com/users/pjox" }
Add OSCAR dataset card
https://api.github.com/repos/huggingface/datasets/issues/1833/events
null
https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name}
2021-02-08T01:39:49Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1833.diff", "html_url": "https://github.com/huggingface/datasets/pull/1833", "merged_at": "2021-02-12T14:08:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/1833.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1833" }
803,120,978
[]
https://api.github.com/repos/huggingface/datasets/issues/1833
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824).
2021-02-12T14:09:25Z
https://github.com/huggingface/datasets/pull/1833
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1832/comments
https://api.github.com/repos/huggingface/datasets/issues/1832/timeline
2021-02-08T17:27:29Z
null
completed
MDU6SXNzdWU4MDI4ODA4OTc=
closed
[]
null
1,832
{ "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}", "followers_url": "https://api.github.com/users/JimmyJim1/followers", "following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}", "gists_url": "https://api.github.com/users/JimmyJim1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JimmyJim1", "id": 68724553, "login": "JimmyJim1", "node_id": "MDQ6VXNlcjY4NzI0NTUz", "organizations_url": "https://api.github.com/users/JimmyJim1/orgs", "received_events_url": "https://api.github.com/users/JimmyJim1/received_events", "repos_url": "https://api.github.com/users/JimmyJim1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JimmyJim1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JimmyJim1/subscriptions", "type": "User", "url": "https://api.github.com/users/JimmyJim1" }
Looks like nokogumbo is up-to-date now, so this is no longer needed.
https://api.github.com/repos/huggingface/datasets/issues/1832/events
null
https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name}
2021-02-07T06:52:07Z
null
false
null
null
802,880,897
[]
https://api.github.com/repos/huggingface/datasets/issues/1832
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Looks like nokogumbo is up-to-date now, so this is no longer needed. __Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__
2021-02-08T17:27:29Z
https://github.com/huggingface/datasets/issues/1832
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1831/comments
https://api.github.com/repos/huggingface/datasets/issues/1831/timeline
2021-02-25T14:10:18Z
null
completed
MDU6SXNzdWU4MDI4Njg4NTQ=
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1,831
{ "avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4", "events_url": "https://api.github.com/users/svjack/events{/privacy}", "followers_url": "https://api.github.com/users/svjack/followers", "following_url": "https://api.github.com/users/svjack/following{/other_user}", "gists_url": "https://api.github.com/users/svjack/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/svjack", "id": 27874014, "login": "svjack", "node_id": "MDQ6VXNlcjI3ODc0MDE0", "organizations_url": "https://api.github.com/users/svjack/orgs", "received_events_url": "https://api.github.com/users/svjack/received_events", "repos_url": "https://api.github.com/users/svjack/repos", "site_admin": false, "starred_url": "https://api.github.com/users/svjack/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/svjack/subscriptions", "type": "User", "url": "https://api.github.com/users/svjack" }
Question about raw dataset download info in the project
https://api.github.com/repos/huggingface/datasets/issues/1831/events
null
https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name}
2021-02-07T05:33:36Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
null
802,868,854
[]
https://api.github.com/repos/huggingface/datasets/issues/1831
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, I reviewed the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py. The `_split_generators` function holds the actual logic for downloading the raw dataset with `dl_manager`, and `load_dataset` uses the `Conll2003` class via `import_main_class`. My question is that, with this logic, it seems I cannot get at the raw dataset download location held in the `downloaded_files` variable inside `_split_generators`. If someone also wants to use huggingface datasets as a raw dataset downloader, how can they retrieve the raw dataset download path from the attributes of a `datasets.dataset_dict.DatasetDict`?
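A minimal sketch of one workaround, assuming you drive the download machinery yourself instead of going through `load_dataset` (the URL is a placeholder, and the `DownloadManager` constructor/import location may vary across `datasets` versions):

```python
# Hedged sketch: reuse the datasets download machinery directly so the local
# cache paths of the raw files stay available to you.
from datasets import DownloadConfig, DownloadManager

urls = {"train": "https://example.com/conll2003/train.txt"}  # hypothetical raw-file URL
dl_manager = DownloadManager(download_config=DownloadConfig())
downloaded_files = dl_manager.download_and_extract(urls)
print(downloaded_files["train"])  # local cache path of the downloaded raw file
```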
2021-02-25T14:10:18Z
https://github.com/huggingface/datasets/issues/1831
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1830/comments
https://api.github.com/repos/huggingface/datasets/issues/1830/timeline
null
null
null
MDU6SXNzdWU4MDI3OTAwNzU=
open
[]
null
1,830
{ "avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4", "events_url": "https://api.github.com/users/wumpusman/events{/privacy}", "followers_url": "https://api.github.com/users/wumpusman/followers", "following_url": "https://api.github.com/users/wumpusman/following{/other_user}", "gists_url": "https://api.github.com/users/wumpusman/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wumpusman", "id": 7662740, "login": "wumpusman", "node_id": "MDQ6VXNlcjc2NjI3NDA=", "organizations_url": "https://api.github.com/users/wumpusman/orgs", "received_events_url": "https://api.github.com/users/wumpusman/received_events", "repos_url": "https://api.github.com/users/wumpusman/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wumpusman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wumpusman/subscriptions", "type": "User", "url": "https://api.github.com/users/wumpusman" }
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
https://api.github.com/repos/huggingface/datasets/issues/1830/events
null
https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name}
2021-02-06T21:00:26Z
null
false
null
null
802,790,075
[]
https://api.github.com/repos/huggingface/datasets/issues/1830
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
This could totally relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer and saved it to disk (note I'm only showing snippets but I can share more), and the map function ran much slower: ``` def save_tokenizer(original_tokenizer, text, path="simpledata/tokenizer"): words_unique = set(text.split(" ")) for i in words_unique: original_tokenizer.add_tokens(i) original_tokenizer.save_pretrained(path) tokenizer2 = GPT2Tokenizer.from_pretrained(os.path.join(experiment_path, experiment_name, "tokenizer_squad")) train_set_baby = Dataset.from_dict({"text": [train_set["text"][0][0:50]]}) ``` I then applied the dataset map function on a fairly small set of text: ``` %%time train_set_baby = train_set_baby.map(lambda d: tokenizer2(d["text"]), batched=True) ``` The run time for train_set_baby.map was 6 seconds, and the batch itself was 2.6 seconds **100% 1/1 [00:02<00:00, 2.60s/ba] CPU times: user 5.96 s, sys: 36 ms, total: 5.99 s Wall time: 5.99 s** In comparison, using (even after adding additional tokens): ` tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")` ``` %%time train_set_baby = train_set_baby.map(lambda d: tokenizer(d["text"]), batched=True) ``` the time is **100% 1/1 [00:00<00:00, 34.09ba/s] CPU times: user 68.1 ms, sys: 16 µs, total: 68.1 ms Wall time: 62.9 ms** It seems this might relate to the tokenizer save or load function; however, the issue appears to come up when I apply the loaded tokenizer to the map function. I should also add that playing around with the number of words I add to the tokenizer before saving it to disk and loading it into memory appears to impact the time it takes to run the map function.
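For what it's worth, a hedged sketch (the paths are assumptions, not taken from the report) of two changes that usually close this gap: load the saved tokenizer with the fast Rust-backed class, and add new tokens as a single list rather than one `add_tokens` call per word:

```python
from datasets import Dataset
from transformers import GPT2TokenizerFast

# Assumes the files written by save_pretrained are readable by the fast class.
tokenizer = GPT2TokenizerFast.from_pretrained("simpledata/tokenizer")  # hypothetical path
tokenizer.add_tokens(["newword1", "newword2"])  # add_tokens accepts a list: one call, not a loop

ds = Dataset.from_dict({"text": ["some text to tokenize"]})
ds = ds.map(lambda batch: tokenizer(batch["text"]), batched=True)
```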
2021-02-24T21:56:14Z
https://github.com/huggingface/datasets/issues/1830
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1830/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1829/comments
https://api.github.com/repos/huggingface/datasets/issues/1829/timeline
2021-02-08T13:17:53Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5
closed
[]
false
1,829
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Add Tweet Eval Dataset
https://api.github.com/repos/huggingface/datasets/issues/1829/events
null
https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name}
2021-02-06T12:36:25Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1829.diff", "html_url": "https://github.com/huggingface/datasets/pull/1829", "merged_at": "2021-02-08T13:17:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1829.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1829" }
802,693,600
[]
https://api.github.com/repos/huggingface/datasets/issues/1829
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Closes Draft PR #1407. Notes: 1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels. 2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/mapping.txt). 3. I do not understand @abhishekkrthakur's example generator on #1407. Maybe he was trying to build on code from some other dataset. Requesting @lhoestq to review.
2021-02-08T13:17:54Z
https://github.com/huggingface/datasets/pull/1829
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1828/comments
https://api.github.com/repos/huggingface/datasets/issues/1828/timeline
2021-02-18T14:17:07Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2
closed
[]
true
1,828
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Add CelebA Dataset
https://api.github.com/repos/huggingface/datasets/issues/1828/events
null
https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name}
2021-02-05T20:20:55Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1828.diff", "html_url": "https://github.com/huggingface/datasets/pull/1828", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1828.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1828" }
802,449,234
[]
https://api.github.com/repos/huggingface/datasets/issues/1828
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Trying to add CelebA Dataset. Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`. Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10).
2021-02-18T14:17:07Z
https://github.com/huggingface/datasets/pull/1828
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1827/comments
https://api.github.com/repos/huggingface/datasets/issues/1827/timeline
2021-02-18T13:55:16Z
null
completed
MDU6SXNzdWU4MDIzNTM5NzQ=
closed
[]
null
1,827
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Regarding On-the-fly Data Loading
https://api.github.com/repos/huggingface/datasets/issues/1827/events
null
https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name}
2021-02-05T17:43:48Z
null
false
null
null
802,353,974
[]
https://api.github.com/repos/huggingface/datasets/issues/1827
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Hi, I was wondering if it is possible to load images/texts as batches during the training process, without loading the entire dataset into RAM at any given point. Thanks, Gunjan
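For context, a hedged sketch of how this usually works: a `datasets.Dataset` is backed by an on-disk Arrow table, so indexing reads only the requested rows, and a standard PyTorch `DataLoader` can batch it without materializing everything in RAM (the toy in-memory dataset below just stands in for a large on-disk one):

```python
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"value": list(range(1000))})  # stand-in for a large memory-mapped dataset
ds.set_format("torch", columns=["value"])             # return torch tensors on access
loader = DataLoader(ds, batch_size=32)                # each batch reads only its own rows

for batch in loader:
    pass  # training step would go here
```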
2021-02-18T13:55:16Z
https://github.com/huggingface/datasets/issues/1827
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1826/comments
https://api.github.com/repos/huggingface/datasets/issues/1826/timeline
2021-02-09T17:39:27Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2
closed
[]
false
1,826
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
Print error message with filename when malformed CSV
https://api.github.com/repos/huggingface/datasets/issues/1826/events
null
https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name}
2021-02-05T11:07:59Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1826.diff", "html_url": "https://github.com/huggingface/datasets/pull/1826", "merged_at": "2021-02-09T17:39:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1826.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1826" }
802,074,744
[]
https://api.github.com/repos/huggingface/datasets/issues/1826
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Print an error message specifying the filename when a CSV file is malformed. Close #1821
2021-02-09T17:39:27Z
https://github.com/huggingface/datasets/pull/1826
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1825/comments
https://api.github.com/repos/huggingface/datasets/issues/1825/timeline
2021-03-16T09:44:00Z
null
completed
MDU6SXNzdWU4MDIwNzM5MjU=
closed
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1,825
{ "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "events_url": "https://api.github.com/users/avacaondata/events{/privacy}", "followers_url": "https://api.github.com/users/avacaondata/followers", "following_url": "https://api.github.com/users/avacaondata/following{/other_user}", "gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avacaondata", "id": 35173563, "login": "avacaondata", "node_id": "MDQ6VXNlcjM1MTczNTYz", "organizations_url": "https://api.github.com/users/avacaondata/orgs", "received_events_url": "https://api.github.com/users/avacaondata/received_events", "repos_url": "https://api.github.com/users/avacaondata/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions", "type": "User", "url": "https://api.github.com/users/avacaondata" }
Datasets library not suitable for huge text datasets.
https://api.github.com/repos/huggingface/datasets/issues/1825/events
null
https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name}
2021-02-05T11:06:50Z
null
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
null
802,073,925
[]
https://api.github.com/repos/huggingface/datasets/issues/1825
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Hi, I'm trying to use the datasets library to load a 187GB dataset of pure text, with the intention of building a language model. The problem is that the 187GB grows to several TB when processed by datasets. First of all, I think the pre-tokenizing step (applying the tokenizer with `.map()`) is not really designed for datasets this big, but rather for fine-tuning datasets, as this process alone takes so much time, usually on expensive machines (due to the need for TPUs/GPUs) which are not being used for training. It would probably be more efficient in such cases to tokenize each batch at training time (receive batch - tokenize batch - train with batch), so that the whole time the machine is up it is being used for training. Moreover, the pyarrow objects created from a 187GB dataset are huge: we always receive OOM or "No space left on device" errors when only 10-12% of the dataset has been processed, and that part alone occupies 2.1TB on disk, which is many times the disk usage of the pure text (and this doesn't make sense, as tokenized text should be lighter than pure text). Any suggestions?
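A hypothetical sketch of the receive-tokenize-train loop suggested above, keeping only raw text on disk and tokenizing inside the collate function (the file name and hyperparameters are placeholders):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import GPT2TokenizerFast

raw = load_dataset("text", data_files={"train": "corpus.txt"})["train"]  # placeholder corpus
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def collate(examples):
    # Tokenization happens here, one batch at a time, at training time.
    texts = [ex["text"] for ex in examples]
    return tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

loader = DataLoader(raw, batch_size=8, collate_fn=collate)
```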
2021-03-30T14:04:01Z
https://github.com/huggingface/datasets/issues/1825
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1824/comments
https://api.github.com/repos/huggingface/datasets/issues/1824/timeline
2021-02-08T11:30:33Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3
closed
[]
false
1,824
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Add OSCAR dataset card
https://api.github.com/repos/huggingface/datasets/issues/1824/events
null
https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name}
2021-02-05T10:30:26Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1824.diff", "html_url": "https://github.com/huggingface/datasets/pull/1824", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1824.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1824" }
802,048,281
[]
https://api.github.com/repos/huggingface/datasets/issues/1824
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
I started adding the dataset card for OSCAR! For now it's just basic info for all the different configurations in `Dataset Structure`. In particular, the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB. Since the Data Instances section is very long, the user has to click to expand the info. I was able to generate it thanks to the tools made by @madlag and @yjernite :D Cc @pjox could you help me with the other sections? (Dataset Description, Dataset Creation, Considerations for Using the Data, Additional Information)
2021-05-05T18:24:14Z
https://github.com/huggingface/datasets/pull/1824
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1823/comments
https://api.github.com/repos/huggingface/datasets/issues/1823/timeline
2021-03-01T10:21:39Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx
closed
[]
false
1,823
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Add FewRel Dataset
https://api.github.com/repos/huggingface/datasets/issues/1823/events
null
https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name}
2021-02-05T10:22:03Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1823.diff", "html_url": "https://github.com/huggingface/datasets/pull/1823", "merged_at": "2021-03-01T10:21:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1823.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1823" }
802,042,181
[]
https://api.github.com/repos/huggingface/datasets/issues/1823
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Hi, This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757. I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key as `"relation"` in the dataset. Additionally, for `pubmed_unsupervised`, I kept `"relation":""` in the dictionary. Please recommend better alternatives, if any. Thanks, Gunjan
2021-03-01T11:56:20Z
https://github.com/huggingface/datasets/pull/1823
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1822/comments
https://api.github.com/repos/huggingface/datasets/issues/1822/timeline
2021-02-15T09:57:39Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz
closed
[]
false
1,822
{ "avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4", "events_url": "https://api.github.com/users/avinsit123/events{/privacy}", "followers_url": "https://api.github.com/users/avinsit123/followers", "following_url": "https://api.github.com/users/avinsit123/following{/other_user}", "gists_url": "https://api.github.com/users/avinsit123/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/avinsit123", "id": 33565881, "login": "avinsit123", "node_id": "MDQ6VXNlcjMzNTY1ODgx", "organizations_url": "https://api.github.com/users/avinsit123/orgs", "received_events_url": "https://api.github.com/users/avinsit123/received_events", "repos_url": "https://api.github.com/users/avinsit123/repos", "site_admin": false, "starred_url": "https://api.github.com/users/avinsit123/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avinsit123/subscriptions", "type": "User", "url": "https://api.github.com/users/avinsit123" }
Add Hindi Discourse Analysis Natural Language Inference Dataset
https://api.github.com/repos/huggingface/datasets/issues/1822/events
null
https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name}
2021-02-05T09:30:54Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1822.diff", "html_url": "https://github.com/huggingface/datasets/pull/1822", "merged_at": "2021-02-15T09:57:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/1822.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1822" }
802,003,835
[]
https://api.github.com/repos/huggingface/datasets/issues/1822
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - HomePage : https://github.com/midas-research/hindi-nli-data - Paper : https://www.aclweb.org/anthology/2020.aacl-main.71 - Point of Contact : https://github.com/midas-research/hindi-nli-data ### Dataset Summary - Dataset for Natural Language Inference in the Hindi language. The Hindi Discourse Analysis (HDA) dataset consists of textual-entailment pairs. - Each row of the dataset is made up of 4 columns - Premise, Hypothesis, Label and Topic. - Premise and Hypothesis are written in Hindi, while Entailment_Label is in English. - Entailment_Label is of 2 types - entailed and not-entailed. - Entailed means the hypothesis can be inferred from the premise; not-entailed means it cannot. - The dataset can be used to train models for Natural Language Inference tasks in the Hindi language. ### Supported Tasks and Leaderboards - Natural Language Inference for Hindi ### Languages - The dataset is in Hindi. ## Dataset Structure - Data is structured in TSV format. - The train, test and dev splits are in separate files. ### Data Instances An example of 'train' looks as follows. ``` {'hypothesis': 'यह एक वर्णनात्मक कथन है।', 'label': 1, 'premise': 'जैसे उस का सारा चेहरा अपना हो और आँखें किसी दूसरे की जो चेहरे पर पपोटों के पीछे महसूर कर दी गईं।', 'topic': 1} ``` ### Data Fields - Each row contains 4 columns - premise, hypothesis, label and topic. ### Data Splits - Train : 31892 - Valid : 9460 - Test : 9970 ## Dataset Creation - We employ a recasting technique from Poliak et al. (2018a,b) to convert publicly available Hindi Discourse Analysis classification datasets in Hindi and pose them as TE problems. - In this recasting process, we build template hypotheses for each class in the label taxonomy. - Then, we pair the original annotated sentence with each of the template hypotheses to create TE samples. - For more information on the recasting process, refer to the paper https://www.aclweb.org/anthology/2020.aacl-main.71 ### Source Data The source dataset for the recasting process is the BBC Hindi Headlines Dataset (https://github.com/NirantK/hindi2vec/releases/tag/bbc-hindi-v0.1) #### Initial Data Collection and Normalization - The initial data was collected by members of the MIDAS Lab from Hindi websites. They crowd-sourced the data annotation process, selected two random stories from the corpus, and had three annotators work on them independently to classify each sentence based on the discourse mode.
- Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ - The discourse is further classified into "Argumentative", "Descriptive", "Dialogic", "Informative" and "Narrative" - 5 classes. #### Who are the source language producers? Please refer to this paper for detailed information: https://www.aclweb.org/anthology/2020.lrec-1.149/ ### Annotations #### Annotation process The annotation process has been described in the Dataset Creation section. #### Who are the annotators? Annotation is done automatically by machine through the corresponding recasting process. ### Personal and Sensitive Information No personal or sensitive information is mentioned in the dataset. ## Considerations for Using the Data Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Discussion of Biases No known biases exist in the dataset. Please refer to this paper: https://www.aclweb.org/anthology/2020.aacl-main.71 ### Other Known Limitations No other known limitations. The size of the data may not be enough to train large models. ## Additional Information Please refer to this link: https://github.com/midas-research/hindi-nli-data ### Dataset Curators It is written in the repo https://github.com/midas-research/hindi-nli-data that: - This corpus can be used freely for research purposes. - The paper listed below provides details of the creation and use of the corpus. If you use the corpus, then please cite the paper. - If interested in commercial use of the corpus, send email to midas@iiitd.ac.in. - If you use the corpus in a product or application, then please credit the authors and Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi appropriately. Also, if you send us an email, we will be thrilled to know about how you have used the corpus. - Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi, India disclaims any responsibility for the use of the corpus and does not provide technical support. However, the contact listed above will be happy to respond to queries and clarifications. - Rather than redistributing the corpus, please direct interested parties to this page. - Please feel free to send us an email: - with feedback regarding the corpus. - with information on how you have used the corpus. - if interested in having us analyze your data for natural language inference. - if interested in a collaborative research project. ### Licensing Information Copyright (C) 2019 Multimodal Digital Media Analysis Lab - Indraprastha Institute of Information Technology, New Delhi (MIDAS, IIIT-Delhi). Please contact the authors for any information on the dataset. ### Citation Information ``` @inproceedings{uppal-etal-2020-two, title = "Two-Step Classification using Recasted Data for Low Resource Settings", author = "Uppal, Shagun and Gupta, Vivek and Swaminathan, Avinash and Zhang, Haimin and Mahata, Debanjan and Gosangi, Rakesh and Shah, Rajiv Ratn and Stent, Amanda", booktitle = "Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing", month = dec, year = "2020", address = "Suzhou, China", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.aacl-main.71", pages = "706--719", abstract = "An NLP model{'}s ability to reason should be independent of language. 
Previous works utilize Natural Language Inference (NLI) to understand the reasoning ability of models, mostly focusing on high resource languages like English. To address scarcity of data in low-resource languages such as Hindi, we use data recasting to create NLI datasets for four existing text classification datasets. Through experiments, we show that our recasted dataset is devoid of statistical irregularities and spurious patterns. We further study the consistency in predictions of the textual entailment models and propose a consistency regulariser to remove pairwise-inconsistencies in predictions. We propose a novel two-step classification method which uses textual-entailment predictions for classification task. We further improve the performance by using a joint-objective for classification and textual entailment. We therefore highlight the benefits of data recasting and improvements on classification performance using our approach with supporting experimental results.", } ``` ### Contributions Thanks to [@avinsit123](https://github.com/avinsit123) for adding this dataset.
2021-02-15T09:57:39Z
https://github.com/huggingface/datasets/pull/1822
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1821/comments
https://api.github.com/repos/huggingface/datasets/issues/1821/timeline
2021-02-09T17:39:27Z
null
completed
MDU6SXNzdWU4MDE3NDc2NDc=
closed
[]
null
1,821
{ "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "events_url": "https://api.github.com/users/david-waterworth/events{/privacy}", "followers_url": "https://api.github.com/users/david-waterworth/followers", "following_url": "https://api.github.com/users/david-waterworth/following{/other_user}", "gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/david-waterworth", "id": 5028974, "login": "david-waterworth", "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "organizations_url": "https://api.github.com/users/david-waterworth/orgs", "received_events_url": "https://api.github.com/users/david-waterworth/received_events", "repos_url": "https://api.github.com/users/david-waterworth/repos", "site_admin": false, "starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions", "type": "User", "url": "https://api.github.com/users/david-waterworth" }
Provide better exception message when one of many files results in an exception
https://api.github.com/repos/huggingface/datasets/issues/1821/events
null
https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name}
2021-02-05T00:49:03Z
null
false
null
null
801,747,647
[]
https://api.github.com/repos/huggingface/datasets/issues/1821
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
I find when I process many files, i.e. ``` train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I sometimes encounter an error due to one of the files being malformed (i.e. no data, or a comma in a field that isn't quoted, etc.). For example, this is the tail of an exception which I suspect is due to a stray comma. > File "pandas/_libs/parsers.pyx", line 756, in pandas._libs.parsers.TextReader.read > File "pandas/_libs/parsers.pyx", line 783, in pandas._libs.parsers.TextReader._read_low_memory > File "pandas/_libs/parsers.pyx", line 827, in pandas._libs.parsers.TextReader._read_rows > File "pandas/_libs/parsers.pyx", line 814, in pandas._libs.parsers.TextReader._tokenize_rows > File "pandas/_libs/parsers.pyx", line 1951, in pandas._libs.parsers.raise_parser_error > pandas.errors.ParserError: Error tokenizing data. C error: Expected 2 fields in line 559, saw 3 It would be nice if the exception trace contained the name of the file being processed (I have 250 separate files!)
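Until the library surfaces the filename itself, a hedged workaround sketch is to probe each file individually so the offending one is easy to spot:

```python
import glob

import pandas as pd

# Read each CSV on its own so the failing filename is printed alongside the error.
for path in glob.glob("train*.csv") + glob.glob("validation*.csv"):
    try:
        pd.read_csv(path)
    except Exception as err:
        print(f"{path}: {err}")
```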
2021-02-09T17:39:27Z
https://github.com/huggingface/datasets/issues/1821
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1820/comments
https://api.github.com/repos/huggingface/datasets/issues/1820/timeline
2021-02-05T14:00:00Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1
closed
[]
false
1,820
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Add metrics usage examples and tests
https://api.github.com/repos/huggingface/datasets/issues/1820/events
null
https://api.github.com/repos/huggingface/datasets/issues/1820/labels{/name}
2021-02-04T18:23:50Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1820.diff", "html_url": "https://github.com/huggingface/datasets/pull/1820", "merged_at": "2021-02-05T14:00:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/1820.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1820" }
801,529,936
[]
https://api.github.com/repos/huggingface/datasets/issues/1820
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
All metrics finally have usage examples and proper fast + slow tests :) I added examples of usage for every metric, and I use doctest to make sure they all work as expected. For "slow" metrics such as bert_score or bleurt, which require downloading and running a transformer model, the download + forward pass are only done in the slow test. In the fast test, on the other hand, the download + forward pass are monkey-patched. Metrics that need to be installed from GitHub are not added to setup.py because it prevents uploading the `datasets` package to PyPI. An additional-test-requirements.txt file is used instead. This file also includes `comet` in order to avoid having to resolve its *impossible* dependencies. Also, `comet` is not tested on Windows because one of its dependencies (fairseq) can't be installed in the CI for some reason.
2021-02-05T14:00:01Z
https://github.com/huggingface/datasets/pull/1820
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1820/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1819/comments
https://api.github.com/repos/huggingface/datasets/issues/1819/timeline
2021-02-04T16:52:26Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2
closed
[]
false
1,819
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
Fixed spelling `S3Fileystem` to `S3FileSystem`
https://api.github.com/repos/huggingface/datasets/issues/1819/events
null
https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name}
2021-02-04T16:36:46Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1819.diff", "html_url": "https://github.com/huggingface/datasets/pull/1819", "merged_at": "2021-02-04T16:52:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/1819.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1819" }
801,448,670
[]
https://api.github.com/repos/huggingface/datasets/issues/1819
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
Fixed documentation spelling errors: wrong `S3Fileystem`, right `S3FileSystem`.
2021-02-04T16:52:27Z
https://github.com/huggingface/datasets/pull/1819
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1818/comments
https://api.github.com/repos/huggingface/datasets/issues/1818/timeline
2022-06-01T15:38:42Z
null
completed
MDU6SXNzdWU4MDA5NTg3NzY=
closed
[]
null
1,818
{ "avatar_url": "https://avatars.githubusercontent.com/u/15032072?v=4", "events_url": "https://api.github.com/users/Alxe1/events{/privacy}", "followers_url": "https://api.github.com/users/Alxe1/followers", "following_url": "https://api.github.com/users/Alxe1/following{/other_user}", "gists_url": "https://api.github.com/users/Alxe1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Alxe1", "id": 15032072, "login": "Alxe1", "node_id": "MDQ6VXNlcjE1MDMyMDcy", "organizations_url": "https://api.github.com/users/Alxe1/orgs", "received_events_url": "https://api.github.com/users/Alxe1/received_events", "repos_url": "https://api.github.com/users/Alxe1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Alxe1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Alxe1/subscriptions", "type": "User", "url": "https://api.github.com/users/Alxe1" }
Loading local dataset raises requests.exceptions.ConnectTimeout
https://api.github.com/repos/huggingface/datasets/issues/1818/events
null
https://api.github.com/repos/huggingface/datasets/issues/1818/labels{/name}
2021-02-04T05:55:23Z
null
false
null
null
800,958,776
[]
https://api.github.com/repos/huggingface/datasets/issues/1818
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
Load local dataset: ``` dataset = load_dataset('json', data_files=["../../data/json.json"]) train = dataset["train"] print(train.features) train1 = train.map(lambda x: {"labels": 1}) print(train1[:2]) ``` but it raised requests.exceptions.ConnectTimeout: ``` /Users/littlely/myvirtual/tf2/bin/python3.7 /Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py Traceback (most recent call last): File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 84, in create_connection raise err File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) socket.timeout: timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn conn.connect() File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 309, in connect conn = self._new_conn() File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connection.py", line 167, in _new_conn % (self.host, self.timeout), urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)') During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/connectionpool.py", line 727, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/urllib3/util/retry.py", line 439, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. 
(connect timeout=10)')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/littlely/projects/python_projects/pytorch_learning/nlp/dataset/transformers_datasets.py", line 12, in <module> dataset = load_dataset('json', data_files=["../../data/json.json"]) File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 591, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/load.py", line 263, in prepare_module head_hf_s3(path, filename=name, dataset=dataset, max_retries=download_config.max_retries) File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 232, in head_hf_s3 max_retries=max_retries, File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 523, in http_head max_retries=max_retries, File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 458, in _request_with_retry raise err File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 454, in _request_with_retry response = requests.request(verb.upper(), url, **params) File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/Users/littlely/myvirtual/tf2/lib/python3.7/site-packages/requests/adapters.py", line 504, in send raise ConnectTimeout(e, request=request) requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/json/json.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x1181e9940>, 'Connection to s3.amazonaws.com timed out. (connect timeout=10)')) Process finished with exit code 1 ``` Why does it want to connect to a remote URL when I load local datasets, and how can I fix it?
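For context, the remote call is the library checking S3 for a newer version of the `json` loading script before falling back to its local cache. A hedged workaround sketch, assuming a `datasets` version recent enough to support offline mode:

```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

from datasets import load_dataset

dataset = load_dataset("json", data_files=["../../data/json.json"])
```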
2022-06-01T15:38:42Z
https://github.com/huggingface/datasets/issues/1818
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1818/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1817/comments
https://api.github.com/repos/huggingface/datasets/issues/1817/timeline
2022-10-05T12:42:57Z
null
completed
MDU6SXNzdWU4MDA4NzA2NTI=
closed
[]
null
1,817
{ "avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4", "events_url": "https://api.github.com/users/LuCeHe/events{/privacy}", "followers_url": "https://api.github.com/users/LuCeHe/followers", "following_url": "https://api.github.com/users/LuCeHe/following{/other_user}", "gists_url": "https://api.github.com/users/LuCeHe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LuCeHe", "id": 9610770, "login": "LuCeHe", "node_id": "MDQ6VXNlcjk2MTA3NzA=", "organizations_url": "https://api.github.com/users/LuCeHe/orgs", "received_events_url": "https://api.github.com/users/LuCeHe/received_events", "repos_url": "https://api.github.com/users/LuCeHe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LuCeHe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LuCeHe/subscriptions", "type": "User", "url": "https://api.github.com/users/LuCeHe" }
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500
https://api.github.com/repos/huggingface/datasets/issues/1817/events
null
https://api.github.com/repos/huggingface/datasets/issues/1817/labels{/name}
2021-02-04T02:30:23Z
null
false
null
null
800,870,652
[]
https://api.github.com/repos/huggingface/datasets/issues/1817
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
NONE
I am trying to preprocess any dataset in this package with the GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials, and here you can find the script that is failing right at the end: https://github.com/LuCeHe/GenericTools/blob/master/KerasTools/lm_preprocessing.py In the last iteration of the last dset.map, it gives the error that I copied in the title. Another issue I have: if I leave batch_size set to 1000 in the last .map, I'm afraid it's going to lose most of the text, so I'm considering setting both writer_batch_size and batch_size to 300K, but I'm not sure that's the best way to go. Can you help me? Thanks!
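For context, this ArrowInvalid typically appears when a batched `map` returns a different number of rows than it received while columns of the original length are kept. A hedged sketch of the usual fix, assuming `dataset` already holds tokenized `input_ids` (the chunking function is hypothetical):

```python
def group_texts(batch):
    # Hypothetical re-chunking: concatenate all input_ids, then split into fixed blocks.
    concatenated = sum(batch["input_ids"], [])
    block = 512
    total = (len(concatenated) // block) * block
    return {"input_ids": [concatenated[i : i + block] for i in range(0, total, block)]}

dataset = dataset.map(
    group_texts,
    batched=True,
    remove_columns=dataset.column_names,  # drop old columns so all column lengths stay consistent
)
```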
2022-10-05T12:42:57Z
https://github.com/huggingface/datasets/issues/1817
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1817/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1816/comments
https://api.github.com/repos/huggingface/datasets/issues/1816/timeline
2021-02-15T15:04:33Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx
closed
[]
false
1,816
{ "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/songfeng", "id": 2062185, "login": "songfeng", "node_id": "MDQ6VXNlcjIwNjIxODU=", "organizations_url": "https://api.github.com/users/songfeng/orgs", "received_events_url": "https://api.github.com/users/songfeng/received_events", "repos_url": "https://api.github.com/users/songfeng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "type": "User", "url": "https://api.github.com/users/songfeng" }
Doc2dial rc update to latest version
https://api.github.com/repos/huggingface/datasets/issues/1816/events
null
https://api.github.com/repos/huggingface/datasets/issues/1816/labels{/name}
2021-02-03T20:08:54Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1816.diff", "html_url": "https://github.com/huggingface/datasets/pull/1816", "merged_at": "2021-02-15T15:04:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/1816.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1816" }
800,660,995
[]
https://api.github.com/repos/huggingface/datasets/issues/1816
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
2021-02-15T15:15:24Z
https://github.com/huggingface/datasets/pull/1816
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1816/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1815/comments
https://api.github.com/repos/huggingface/datasets/issues/1815/timeline
2021-03-01T10:36:21Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1
closed
[]
false
1,815
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Add CCAligned Multilingual Dataset
https://api.github.com/repos/huggingface/datasets/issues/1815/events
null
https://api.github.com/repos/huggingface/datasets/issues/1815/labels{/name}
2021-02-03T18:59:52Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1815.diff", "html_url": "https://github.com/huggingface/datasets/pull/1815", "merged_at": "2021-03-01T10:36:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/1815.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1815" }
800,610,017
[]
https://api.github.com/repos/huggingface/datasets/issues/1815
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Hello, I'm trying to add the [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756. This dataset has two types - Document-Pairs and Sentence-Pairs. The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to download one particular language and not all. To provide this feature, `load_dataset`'s `**config_kwargs` should allow arbitrary keyword args, in this case `language_code`. This will be needed before the dataset is downloaded and extracted. I'm expecting the usage to be something like `load_dataset('ccaligned_multilingual','documents',language_code='en_XX-af_ZA')`. Of course, at a later stage we can provide just two-character language codes. This also has an issue where one language has multiple files (`my_MM` and `my_MM_zaw` on the link), but before that the required functionality must be added to `load_dataset`. It would be great if someone could either tell me an alternative way to do this, or point me to where changes need to be made, if any, apart from the `BuilderConfig` definition. Additionally, I believe the tests will also have to be modified if this change is made, since it would not be possible to test for arbitrary keyword arguments. A decent way to go about this would be to provide all the options in a list/dictionary for `language_code` and use that to test the arguments. In essence, this is similar to the pre-trained checkpoint dictionary in `transformers`. That means writing dataset-specific tests, or adding something new to the dataset generation script to make it easier for everyone to add keyword arguments without having to worry about the tests. Thanks, Gunjan Requesting @lhoestq / @yjernite to review.
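A sketch of how the proposed usage could be wired up, under the assumption that `load_dataset` forwards unknown `**config_kwargs` to the builder's config class (the URL pattern, version, and TSV layout below are illustrative placeholders, not the real CC-Aligned site layout):

```python
import csv
import datasets

class CCAlignedConfig(datasets.BuilderConfig):
    """Hypothetical config carrying a free-form language_code kwarg."""

    def __init__(self, *, language_code="en_XX-af_ZA", **kwargs):
        super().__init__(**kwargs)
        self.language_code = language_code

class CCAlignedMultilingual(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIG_CLASS = CCAlignedConfig
    BUILDER_CONFIGS = [
        CCAlignedConfig(name="documents", version=datasets.Version("1.0.0")),
        CCAlignedConfig(name="sentences", version=datasets.Version("1.0.0")),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"source": datasets.Value("string"), "target": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        # The user-supplied language code picks the file to fetch;
        # this URL pattern is illustrative only.
        url = (
            f"http://www.statmt.org/cc-aligned/{self.config.name}/"
            f"{self.config.language_code}.tsv.xz"
        )
        path = dl_manager.download_and_extract(url)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": path}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.reader(f, delimiter="\t")):
                yield idx, {"source": row[0], "target": row[1]}
```

With this in place, `load_dataset('ccaligned_multilingual', 'documents', language_code='en_XX-af_ZA')` would construct a `CCAlignedConfig` carrying the requested pair, since the extra keyword argument is passed through to the config class.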
2021-03-01T12:33:03Z
https://github.com/huggingface/datasets/pull/1815
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1815/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1814/comments
https://api.github.com/repos/huggingface/datasets/issues/1814/timeline
2021-02-04T16:21:48Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1
closed
[]
false
1,814
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Add Freebase QA Dataset
https://api.github.com/repos/huggingface/datasets/issues/1814/events
null
https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name}
2021-02-03T16:57:49Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1814.diff", "html_url": "https://github.com/huggingface/datasets/pull/1814", "merged_at": "2021-02-04T16:21:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/1814.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1814" }
800,516,236
[]
https://api.github.com/repos/huggingface/datasets/issues/1814
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Closes PR #1435. Fixed issues with PR #1809. Requesting @lhoestq to review.
2021-02-04T19:47:51Z
https://github.com/huggingface/datasets/pull/1814
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1813/comments
https://api.github.com/repos/huggingface/datasets/issues/1813/timeline
2021-02-05T10:33:47Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz
closed
[]
false
1,813
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
Support future datasets
https://api.github.com/repos/huggingface/datasets/issues/1813/events
null
https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name}
2021-02-03T15:26:49Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1813.diff", "html_url": "https://github.com/huggingface/datasets/pull/1813", "merged_at": "2021-02-05T10:33:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/1813.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1813" }
800,435,973
[]
https://api.github.com/repos/huggingface/datasets/issues/1813
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
MEMBER
If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version. However, when trying to load a dataset that is only available on master, users currently have to specify `script_version="master"` in `load_dataset` to make it work. In this case we could instead fetch the dataset from master automatically. I added this feature in this PR. I also added a warning when a dataset is not available at the version of the local installation of `datasets` but is loaded from master: ```python >>> load_dataset("silicone", "dyda_da") Couldn't find file locally at silicone/silicone.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/silicone/silicone.py. The file was picked from the master branch on github instead at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/silicone/silicone.py. Downloading and preparing dataset silicone/dyda_da (download: 8.46 MiB, generated: 9.39 MiB, post-processed: Unknown size, total: 17.86 MiB) to /Users/quentinlhoest/.cache/huggingface/datasets/silicone/dyda_da/1.0.0/d41d8c0b73c6df035b1369c45774418f0051163ea689b5502b8bda783adf6342... ... ```
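A short usage sketch of the two behaviours described above (the dataset and config names are just the example from the PR):

```python
from datasets import load_dataset

# Explicit pin, which was already possible before this change:
dset = load_dataset("silicone", "dyda_da", script_version="master")

# With this change, the same call without script_version falls back to the
# master script automatically (emitting the warning shown above) whenever
# the dataset is missing from the installed release:
dset = load_dataset("silicone", "dyda_da")
```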
2021-02-05T10:33:48Z
https://github.com/huggingface/datasets/pull/1813
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1812/comments
https://api.github.com/repos/huggingface/datasets/issues/1812/timeline
2021-02-08T10:39:06Z
null
null
MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy
closed
[]
false
1,812
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Add CIFAR-100 Dataset
https://api.github.com/repos/huggingface/datasets/issues/1812/events
null
https://api.github.com/repos/huggingface/datasets/issues/1812/labels{/name}
2021-02-02T15:22:59Z
null
false
null
{ "diff_url": "https://github.com/huggingface/datasets/pull/1812.diff", "html_url": "https://github.com/huggingface/datasets/pull/1812", "merged_at": "2021-02-08T10:39:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/1812.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1812" }
799,379,178
[]
https://api.github.com/repos/huggingface/datasets/issues/1812
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
Adding CIFAR-100 Dataset.
2021-02-08T11:10:18Z
https://github.com/huggingface/datasets/pull/1812
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1812/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/1811/comments
https://api.github.com/repos/huggingface/datasets/issues/1811/timeline
2021-02-18T14:16:31Z
null
completed
MDU6SXNzdWU3OTkyMTEwNjA=
closed
[]
null
1,811
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Unable to add Multi-label Datasets
https://api.github.com/repos/huggingface/datasets/issues/1811/events
null
https://api.github.com/repos/huggingface/datasets/issues/1811/labels{/name}
2021-02-02T11:50:56Z
null
false
null
null
799,211,060
[]
https://api.github.com/repos/huggingface/datasets/issues/1811
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
I am trying to add the [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in `supervised_keys`, as in `supervised_keys=("img", "fine_label")`, raises no issue. But trying `supervised_keys=("img", "fine_label","coarse_label")` leads to this error: ```python Traceback (most recent call last): File "test_script.py", line 2, in <module> d = load_dataset('./datasets/cifar100') File "~/datasets/src/datasets/load.py", line 668, in load_dataset **config_kwargs, File "~/datasets/src/datasets/builder.py", line 896, in __init__ super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) File "~/datasets/src/datasets/builder.py", line 247, in __init__ info.update(self._info()) File "~/.cache/huggingface/modules/datasets_modules/datasets/cifar100/61d2489b2d4a4abc34201432541b7380984ec714e290817d9a1ee318e4b74e0f/cifar100.py", line 79, in _info citation=_CITATION, File "<string>", line 19, in __init__ File "~/datasets/src/datasets/info.py", line 136, in __post_init__ self.supervised_keys = SupervisedKeysData(*self.supervised_keys) TypeError: __init__() takes from 1 to 3 positional arguments but 4 were given ``` Is there a way I can fix this? Also, what does adding `supervised_keys` do? Is it necessary? How would I specify `supervised_keys` for a multi-input, multi-label dataset? Thanks, Gunjan
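A sketch of why the call fails and the usual workaround, assuming CIFAR-100-style features (the exact feature types below are illustrative). As the traceback shows, `SupervisedKeysData` is a plain (input, output) pair, so `supervised_keys` accepts at most two names; any extra label simply stays available as an ordinary column:

```python
import datasets

features = datasets.Features(
    {
        "img": datasets.Array3D(shape=(32, 32, 3), dtype="uint8"),
        "fine_label": datasets.ClassLabel(num_classes=100),
        "coarse_label": datasets.ClassLabel(num_classes=20),
    }
)

info = datasets.DatasetInfo(
    features=features,
    # Pick one (input, output) pair for supervised_keys, or omit it
    # entirely; both labels remain accessible as columns either way.
    supervised_keys=("img", "fine_label"),
)
```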
2021-02-18T14:16:31Z
https://github.com/huggingface/datasets/issues/1811
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1811/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/1810/comments
https://api.github.com/repos/huggingface/datasets/issues/1810/timeline
null
null
null
MDU6SXNzdWU3OTkxNjg2NTA=
open
[]
null
1,810
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
Add Hateful Memes Dataset
https://api.github.com/repos/huggingface/datasets/issues/1810/events
null
https://api.github.com/repos/huggingface/datasets/issues/1810/labels{/name}
2021-02-02T10:53:59Z
null
false
null
null
799,168,650
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" }, { "color": "bfdadc", "default": false, "description": "Vision datasets", "id": 3608941089, "name": "vision", "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision" } ]
https://api.github.com/repos/huggingface/datasets/issues/1810
[ "", "" ]
https://api.github.com/repos/huggingface/datasets
CONTRIBUTOR
## Add Hateful Memes Dataset - **Name:** Hateful Memes - **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set) - **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf) - **Data:** [This link](https://drivendata-competition-fb-hateful-memes-data.s3.amazonaws.com/XjiOc5ycDBRRNwbhRlgH.zip?AWSAccessKeyId=AKIARVBOBDCY4MWEDJKS&Signature=DaUuGgZWUgDHzEPPbyJ2PhSJ56Q%3D&Expires=1612816874) - **Motivation:** Adding multi-modal datasets to 🤗 datasets. I will be adding this dataset. It requires the user to sign an agreement on DrivenData, so it will be used with a manual download. The issue with this dataset is that the images are of different sizes. The image datasets added so far (CIFAR-10 and MNIST) have a uniform shape throughout. So something like ```python datasets.Array2D(shape=(28, 28), dtype="uint8") ``` won't work for the images. How would I add image features then? I checked `datasets/features.py` but couldn't figure out the appropriate class for this. I'm assuming I would want to avoid re-sizing at all since we want the user to be able to access the original images. Also, since the actual data is around 8.8GB, how would it be possible to load only a subset of it? Thanks, Gunjan
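A sketch of one way to represent variable-size images at the time of this issue, assuming the memes ship as files on disk (the feature names and label names are placeholders; later releases of `datasets` also added a dedicated `Image` feature type for exactly this case):

```python
import datasets

# Variable-size images don't fit a fixed-shape Array2D/Array3D, so store
# the path to the original file (or its raw bytes) and decode lazily,
# leaving the original images untouched for the user.
features = datasets.Features(
    {
        "id": datasets.Value("string"),
        "img_path": datasets.Value("string"),  # path to the original file
        "text": datasets.Value("string"),
        "label": datasets.ClassLabel(names=["not-hateful", "hateful"]),
    }
)
```

For loading only part of the data, split slicing such as `load_dataset("hateful_memes", split="train[:10%]")` (the dataset name here is hypothetical) returns a 10% slice, although the full archive still has to be downloaded and prepared once.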
2021-12-08T12:03:59Z
https://github.com/huggingface/datasets/issues/1810
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1810/reactions" }
false