Dataset schema (column: dtype, value stats):

id: int64 (599M to 2.47B)
url: string (length 58 to 61)
repository_url: string (1 distinct value)
events_url: string (length 65 to 68)
labels: list (length 0 to 4)
active_lock_reason: null
updated_at: string (length 20)
assignees: list (length 0 to 4)
html_url: string (length 46 to 51)
author_association: string (4 distinct values)
state_reason: string (3 distinct values)
draft: bool (2 classes)
milestone: dict
comments: sequence (length 0 to 30)
title: string (length 1 to 290)
reactions: dict
node_id: string (length 18 to 32)
pull_request: dict
created_at: string (length 20)
comments_url: string (length 67 to 70)
body: string (length 0 to 228k)
user: dict
labels_url: string (length 72 to 75)
timeline_url: string (length 67 to 70)
state: string (2 distinct values)
locked: bool (1 class)
number: int64 (1 to 7.11k)
performed_via_github_app: null
closed_at: string (length 20)
assignee: dict
is_pull_request: bool (2 classes)
731,612,430
https://api.github.com/repos/huggingface/datasets/issues/772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/772/events
[]
null
2020-10-29T09:34:44Z
[]
https://github.com/huggingface/datasets/pull/772
MEMBER
null
false
null
[]
Fix metric with cache dir
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/772/reactions" }
MDExOlB1bGxSZXF1ZXN0NTExNjg4ODMx
{ "diff_url": "https://github.com/huggingface/datasets/pull/772.diff", "html_url": "https://github.com/huggingface/datasets/pull/772", "merged_at": "2020-10-29T09:34:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/772.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/772" }
2020-10-28T16:43:13Z
https://api.github.com/repos/huggingface/datasets/issues/772/comments
The cache_dir provided by the user was concatenated twice, which caused FileNotFound errors. The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (they were not using the right parameter). I removed the double concatenation and fixed the tests.

Fix #728
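As a rough illustration of the bug class described here (hypothetical helper names, not the actual `datasets` code), a sketch of the double concatenation and its fix:

```python
import os

def metric_cache_path(cache_dir, metric_name):
    # Buggy pattern: cache_dir is already the full user-provided cache
    # directory, but it gets joined with itself further down the call
    # stack, producing e.g. "my_cache/my_cache/rouge" (FileNotFoundError).
    data_dir = os.path.join(cache_dir, cache_dir)
    return os.path.join(data_dir, metric_name)

def metric_cache_path_fixed(cache_dir, metric_name):
    # Fixed pattern: the user directory appears exactly once in the path.
    return os.path.join(cache_dir, metric_name)

print(metric_cache_path("my_cache", "rouge"))        # my_cache/my_cache/rouge
print(metric_cache_path_fixed("my_cache", "rouge"))  # my_cache/rouge
```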
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/772/timeline
closed
false
772
null
2020-10-29T09:34:43Z
null
true
731,482,213
https://api.github.com/repos/huggingface/datasets/issues/771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/771/events
[]
null
2023-02-13T20:16:39Z
[]
https://github.com/huggingface/datasets/issues/771
CONTRIBUTOR
completed
null
null
[]
Using `Dataset.map` with `n_proc>1` prints multiple progress bars
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/771/reactions" }
MDU6SXNzdWU3MzE0ODIyMTM=
null
2020-10-28T14:13:27Z
https://api.github.com/repos/huggingface/datasets/issues/771/comments
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
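A generic sketch of the desired behavior using plain `multiprocessing` and `tqdm` (not the `datasets` internals): only the first worker renders a bar, the rest run silently.

```python
from multiprocessing import Pool

from tqdm import tqdm

def process_shard(args):
    rank, shard = args
    out = []
    # disable=True for every rank except 0, so a single bar is printed.
    for item in tqdm(shard, disable=(rank != 0), desc="processing"):
        out.append(item * 2)  # stand-in for the real per-example work
    return out

if __name__ == "__main__":
    shards = [list(range(10_000)) for _ in range(4)]
    with Pool(4) as pool:
        results = pool.map(process_shard, list(enumerate(shards)))
```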
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
https://api.github.com/repos/huggingface/datasets/issues/771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/771/timeline
closed
false
771
null
2023-02-13T20:16:39Z
null
false
731,445,222
https://api.github.com/repos/huggingface/datasets/issues/770
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/770/events
[]
null
2020-10-29T09:36:03Z
[]
https://github.com/huggingface/datasets/pull/770
MEMBER
null
false
null
[]
Fix custom builder caching
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/770/reactions" }
MDExOlB1bGxSZXF1ZXN0NTExNTQ5MTg1
{ "diff_url": "https://github.com/huggingface/datasets/pull/770.diff", "html_url": "https://github.com/huggingface/datasets/pull/770", "merged_at": "2020-10-29T09:36:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/770.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/770" }
2020-10-28T13:32:24Z
https://api.github.com/repos/huggingface/datasets/issues/770/comments
The cache directory of a dataset didn't take into account additional parameters that the user could specify, such as `features` or any parameter of the builder configuration kwargs (e.g. `encoding` for the `text` dataset). To fix that, the cache directory name now has a suffix that depends on all of them.

Fix #730
Fix #750
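A sketch of the fix's general idea (hypothetical helper, assuming the implementation hashes the builder config): derive a deterministic suffix from every user-supplied parameter, so different kwargs land in different cache directories.

```python
import hashlib
import json

def config_cache_suffix(**builder_kwargs):
    # Deterministic serialization -> stable hash: the same kwargs always
    # map to the same directory, and any change produces a new one.
    payload = json.dumps(builder_kwargs, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]

print(config_cache_suffix(encoding="latin-1"))  # differs from the line below
print(config_cache_suffix(encoding="utf-8"))
```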
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/770/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/770/timeline
closed
false
770
null
2020-10-29T09:36:01Z
null
true
731,257,104
https://api.github.com/repos/huggingface/datasets/issues/769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/769/events
[]
null
2022-02-22T12:22:52Z
[]
https://github.com/huggingface/datasets/issues/769
NONE
completed
null
null
[]
How to choose the proper download_mode in load_dataset?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/769/reactions" }
MDU6SXNzdWU3MzEyNTcxMDQ=
null
2020-10-28T09:16:19Z
https://api.github.com/repos/huggingface/datasets/issues/769/comments
Hi, I am a beginner with datasets and I am trying to use it to load my csv file, which looks like this:

```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5
```

First I try this command to load it:

```python
dataset = load_dataset('csv', data_files=['sst_test.csv'])
```

It seems to work, but when I try to override the convert_options to convert the 'label' column from int64 to float32, like this:

```python
import pyarrow as pa
from pyarrow import csv

read_options = csv.ReadOptions(block_size=1024*1024)
parse_options = csv.ParseOptions()
convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()})
dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options)
```

the result stays the same:

```shell
Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210)
```

I think this issue is caused by the `download_mode` parameter, which defaults to REUSE_DATASET_IF_EXISTS, because after I delete the cache_dir the types come out right. Is it a bug? How should I choose the proper download_mode to avoid this issue?
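For reference, the usual way out is to force regeneration instead of deleting the cache by hand. A hedged sketch (the exact spelling has varied across releases: datasets 1.x used `GenerateMode.FORCE_REDOWNLOAD`, later versions `DownloadMode.FORCE_REDOWNLOAD` or the equivalent string):

```python
from datasets import load_dataset

# Ignore the cached Arrow files and rebuild the dataset, so that the new
# convert_options actually take effect.
dataset = load_dataset(
    'csv',
    data_files=['sst_test.csv'],
    download_mode='force_redownload',
)
```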
{ "avatar_url": "https://avatars.githubusercontent.com/u/48550398?v=4", "events_url": "https://api.github.com/users/jzq2000/events{/privacy}", "followers_url": "https://api.github.com/users/jzq2000/followers", "following_url": "https://api.github.com/users/jzq2000/following{/other_user}", "gists_url": "https://api.github.com/users/jzq2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jzq2000", "id": 48550398, "login": "jzq2000", "node_id": "MDQ6VXNlcjQ4NTUwMzk4", "organizations_url": "https://api.github.com/users/jzq2000/orgs", "received_events_url": "https://api.github.com/users/jzq2000/received_events", "repos_url": "https://api.github.com/users/jzq2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jzq2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jzq2000/subscriptions", "type": "User", "url": "https://api.github.com/users/jzq2000" }
https://api.github.com/repos/huggingface/datasets/issues/769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/769/timeline
closed
false
769
null
2022-02-22T12:22:52Z
null
false
730,908,060
https://api.github.com/repos/huggingface/datasets/issues/768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/768/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2020-10-28T08:58:13Z
[]
https://github.com/huggingface/datasets/issues/768
CONTRIBUTOR
null
null
null
[]
Add a `lazy_map` method to `Dataset` and `DatasetDict`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/768/reactions" }
MDU6SXNzdWU3MzA5MDgwNjA=
null
2020-10-27T22:33:03Z
https://api.github.com/repos/huggingface/datasets/issues/768/comments
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested. Two use cases: 1. load images on the fly; 2. apply a random function and get different outputs at each epoch (like data augmentation or randomly masking a part of a sentence for BERT-like objectives).
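A minimal sketch of the requested behavior as a plain wrapper class (hypothetical, not a datasets API, though later releases added `set_transform`/`with_transform` for on-access processing):

```python
import random

class LazyMapDataset:
    """Applies `fn` when an item is requested, not ahead of time."""

    def __init__(self, dataset, fn):
        self.dataset = dataset
        self.fn = fn

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, idx):
        # Runs at access time, so random augmentations differ every epoch.
        return self.fn(self.dataset[idx])

data = [{"text": "the quick brown fox"}, {"text": "lazy mapping demo"}]
mask_one = lambda ex: {
    "text": ex["text"].replace(random.choice(ex["text"].split()), "[MASK]", 1)
}
lazy = LazyMapDataset(data, mask_one)
print(lazy[0])  # a different word may be masked on each access
```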
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
https://api.github.com/repos/huggingface/datasets/issues/768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/768/timeline
open
false
768
null
null
null
false
730,771,610
https://api.github.com/repos/huggingface/datasets/issues/767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/767/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2020-11-10T14:05:21Z
[]
https://github.com/huggingface/datasets/issues/767
CONTRIBUTOR
null
null
null
[]
Add option for named splits when using ds.train_test_split
{ "+1": 6, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/767/reactions" }
MDU6SXNzdWU3MzA3NzE2MTA=
null
2020-10-27T19:59:44Z
https://api.github.com/repos/huggingface/datasets/issues/767/comments
### Feature Request 🚀

Can we add a way to name your splits when using the `.train_test_split` function? In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.

### Workaround

This is my hack for dealing with this, for now :slightly_smiling_face:

```python
from datasets import load_dataset

ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
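Building on the workaround above, a sketch that rebuilds the `DatasetDict` with all three splits explicitly named (standard API calls; the split names are my choice):

```python
from datasets import DatasetDict, load_dataset

ds = load_dataset('imdb')
split = ds['train'].train_test_split(test_size=0.1)
ds = DatasetDict({
    'train': split['train'],
    'validation': split['test'],  # rename the generated "test" portion
    'test': ds['test'],           # the real test split stays untouched
})
```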
{ "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nateraw", "id": 32437151, "login": "nateraw", "node_id": "MDQ6VXNlcjMyNDM3MTUx", "organizations_url": "https://api.github.com/users/nateraw/orgs", "received_events_url": "https://api.github.com/users/nateraw/received_events", "repos_url": "https://api.github.com/users/nateraw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "type": "User", "url": "https://api.github.com/users/nateraw" }
https://api.github.com/repos/huggingface/datasets/issues/767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/767/timeline
open
false
767
null
null
null
false
730,669,596
https://api.github.com/repos/huggingface/datasets/issues/766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/766/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2020-12-03T13:37:18Z
[]
https://github.com/huggingface/datasets/issues/766
MEMBER
completed
null
null
[]
[GEM] add DART data-to-text generation dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/766/reactions" }
MDU6SXNzdWU3MzA2Njk1OTY=
null
2020-10-27T17:34:04Z
https://api.github.com/repos/huggingface/datasets/issues/766/comments
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **Paper:** https://arxiv.org/abs/2007.02871v1 - **Data:** https://github.com/Yale-LILY/dart - **Motivation:** the dataset will likely be included in the GEM benchmark Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://api.github.com/repos/huggingface/datasets/issues/766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/766/timeline
closed
false
766
null
2020-12-03T13:37:18Z
null
false
730,668,332
https://api.github.com/repos/huggingface/datasets/issues/765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/765/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2020-10-27T17:34:21Z
[]
https://github.com/huggingface/datasets/issues/765
MEMBER
completed
null
null
[]
[GEM] Add DART data-to-text generation dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/765/reactions" }
MDU6SXNzdWU3MzA2NjgzMzI=
null
2020-10-27T17:32:23Z
https://api.github.com/repos/huggingface/datasets/issues/765/comments
## Adding a Dataset - **Name:** DART - **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set. - **Paper:** https://arxiv.org/abs/2007.02871v1 - **Data:** https://github.com/Yale-LILY/dart - **Motivation:** It will likely be included in the GEM generation evaluation benchmark Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://api.github.com/repos/huggingface/datasets/issues/765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/765/timeline
closed
false
765
null
2020-10-27T17:34:21Z
null
false
730,617,828
https://api.github.com/repos/huggingface/datasets/issues/764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/764/events
[]
null
2020-10-27T17:25:26Z
[]
https://github.com/huggingface/datasets/pull/764
MEMBER
null
false
null
[]
Adding Issue Template for Dataset Requests
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/764/reactions" }
MDExOlB1bGxSZXF1ZXN0NTEwODkyMTk2
{ "diff_url": "https://github.com/huggingface/datasets/pull/764.diff", "html_url": "https://github.com/huggingface/datasets/pull/764", "merged_at": "2020-10-27T17:25:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/764.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/764" }
2020-10-27T16:37:08Z
https://api.github.com/repos/huggingface/datasets/issues/764/comments
adding .github/ISSUE_TEMPLATE/add-dataset.md
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://api.github.com/repos/huggingface/datasets/issues/764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/764/timeline
closed
false
764
null
2020-10-27T17:25:25Z
null
true
730,593,631
https://api.github.com/repos/huggingface/datasets/issues/763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/763/events
[]
null
2020-10-28T17:59:25Z
[]
https://github.com/huggingface/datasets/pull/763
CONTRIBUTOR
null
false
null
[]
Fixed errors in bertscore related to custom baseline
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/763/reactions" }
MDExOlB1bGxSZXF1ZXN0NTEwODcyMDYx
{ "diff_url": "https://github.com/huggingface/datasets/pull/763.diff", "html_url": "https://github.com/huggingface/datasets/pull/763", "merged_at": "2020-10-28T17:59:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/763.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/763" }
2020-10-27T16:08:35Z
https://api.github.com/repos/huggingface/datasets/issues/763/comments
[bertscore version 0.3.6](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added an extra argument `baseline_path` to the BERTScorer class, as well as an extra boolean parameter `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_custom_baseline)`. This PR fixes those signature-matching errors in the bertscore metric implementation.
{ "avatar_url": "https://avatars.githubusercontent.com/u/36761132?v=4", "events_url": "https://api.github.com/users/juanjucm/events{/privacy}", "followers_url": "https://api.github.com/users/juanjucm/followers", "following_url": "https://api.github.com/users/juanjucm/following{/other_user}", "gists_url": "https://api.github.com/users/juanjucm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/juanjucm", "id": 36761132, "login": "juanjucm", "node_id": "MDQ6VXNlcjM2NzYxMTMy", "organizations_url": "https://api.github.com/users/juanjucm/orgs", "received_events_url": "https://api.github.com/users/juanjucm/received_events", "repos_url": "https://api.github.com/users/juanjucm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/juanjucm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/juanjucm/subscriptions", "type": "User", "url": "https://api.github.com/users/juanjucm" }
https://api.github.com/repos/huggingface/datasets/issues/763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/763/timeline
closed
false
763
null
2020-10-28T17:59:25Z
null
true
730,586,972
https://api.github.com/repos/huggingface/datasets/issues/762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/762/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2020-12-03T13:37:44Z
[]
https://github.com/huggingface/datasets/issues/762
MEMBER
completed
null
null
[]
[GEM] Add Czech Restaurant data-to-text generation dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/762/reactions" }
MDU6SXNzdWU3MzA1ODY5NzI=
null
2020-10-27T16:00:47Z
https://api.github.com/repos/huggingface/datasets/issues/762/comments
- Paper: https://www.aclweb.org/anthology/W19-8670.pdf - Data: https://github.com/UFAL-DSG/cs_restaurant_dataset - The dataset will likely be part of the GEM benchmark
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://api.github.com/repos/huggingface/datasets/issues/762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/762/timeline
closed
false
762
null
2020-12-03T13:37:44Z
null
false
729,898,867
https://api.github.com/repos/huggingface/datasets/issues/761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/761/events
[]
null
2022-02-15T10:32:28Z
[]
https://github.com/huggingface/datasets/issues/761
CONTRIBUTOR
completed
null
null
[]
Downloaded datasets are not usable offline
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/761/reactions" }
MDU6SXNzdWU3Mjk4OTg4Njc=
null
2020-10-26T20:54:46Z
https://api.github.com/repos/huggingface/datasets/issues/761/comments
I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the `requests` library trying to reach the online dataset. Is this the intended behavior? (Sorry, I wrote the first version of this issue while still on nlp 0.3.0.)
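As a pointer for readers hitting the same thing today: later releases added an explicit offline mode via an environment variable (per current datasets docs; requires the dataset to already be in the local cache):

```python
import os

# Must be set before importing datasets so the library skips network calls.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

imdb = load_dataset("imdb")  # served entirely from the local cache
```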
{ "avatar_url": "https://avatars.githubusercontent.com/u/25091538?v=4", "events_url": "https://api.github.com/users/ghazi-f/events{/privacy}", "followers_url": "https://api.github.com/users/ghazi-f/followers", "following_url": "https://api.github.com/users/ghazi-f/following{/other_user}", "gists_url": "https://api.github.com/users/ghazi-f/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghazi-f", "id": 25091538, "login": "ghazi-f", "node_id": "MDQ6VXNlcjI1MDkxNTM4", "organizations_url": "https://api.github.com/users/ghazi-f/orgs", "received_events_url": "https://api.github.com/users/ghazi-f/received_events", "repos_url": "https://api.github.com/users/ghazi-f/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghazi-f/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghazi-f/subscriptions", "type": "User", "url": "https://api.github.com/users/ghazi-f" }
https://api.github.com/repos/huggingface/datasets/issues/761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/761/timeline
closed
false
761
null
2022-02-15T10:32:28Z
null
false
729,637,917
https://api.github.com/repos/huggingface/datasets/issues/760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/760/events
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" }, { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
null
2020-12-03T13:38:34Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" } ]
https://github.com/huggingface/datasets/issues/760
MEMBER
completed
null
null
[]
Add meta-data to the HANS dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/760/reactions" }
MDU6SXNzdWU3Mjk2Mzc5MTc=
null
2020-10-26T14:56:53Z
https://api.github.com/repos/huggingface/datasets/issues/760/comments
The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase.
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://api.github.com/repos/huggingface/datasets/issues/760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/760/timeline
closed
false
760
null
2020-12-03T13:38:34Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TevenLeScao", "id": 26709476, "login": "TevenLeScao", "node_id": "MDQ6VXNlcjI2NzA5NDc2", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "type": "User", "url": "https://api.github.com/users/TevenLeScao" }
false
729,046,916
https://api.github.com/repos/huggingface/datasets/issues/759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/759/events
[]
null
2023-09-13T23:56:51Z
[]
https://github.com/huggingface/datasets/issues/759
NONE
completed
null
null
[]
(Load dataset failure) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/759/reactions" }
MDU6SXNzdWU3MjkwNDY5MTY=
null
2020-10-25T15:34:57Z
https://api.github.com/repos/huggingface/datasets/issues/759/comments
Hey, I want to load the cnn-dailymail dataset for fine-tuning. I write the code like this:

```python
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
```

And I got the following errors:

```
Traceback (most recent call last):
  File "test.py", line 7, in <module>
    test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="test")
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 589, in load_dataset
    module_path, hash = prepare_module(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\load.py", line 268, in prepare_module
    local_path = cached_path(file_path, download_config=download_config)
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 300, in cached_path
    output_path = get_from_cache(
  File "C:\Users\666666\AppData\Local\Programs\Python\Python38\lib\site-packages\datasets\utils\file_utils.py", line 475, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
```

How can I fix this?
{ "avatar_url": "https://avatars.githubusercontent.com/u/63541083?v=4", "events_url": "https://api.github.com/users/AI678/events{/privacy}", "followers_url": "https://api.github.com/users/AI678/followers", "following_url": "https://api.github.com/users/AI678/following{/other_user}", "gists_url": "https://api.github.com/users/AI678/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AI678", "id": 63541083, "login": "AI678", "node_id": "MDQ6VXNlcjYzNTQxMDgz", "organizations_url": "https://api.github.com/users/AI678/orgs", "received_events_url": "https://api.github.com/users/AI678/received_events", "repos_url": "https://api.github.com/users/AI678/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AI678/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AI678/subscriptions", "type": "User", "url": "https://api.github.com/users/AI678" }
https://api.github.com/repos/huggingface/datasets/issues/759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/759/timeline
closed
false
759
null
2021-08-04T18:10:09Z
null
false
728,638,559
https://api.github.com/repos/huggingface/datasets/issues/758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/758/events
[]
null
2020-10-28T03:59:46Z
[]
https://github.com/huggingface/datasets/issues/758
NONE
completed
null
null
[]
Process 0 very slow when using num_procs with map to tokenizer
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/758/reactions" }
MDU6SXNzdWU3Mjg2Mzg1NTk=
null
2020-10-24T02:40:20Z
https://api.github.com/repos/huggingface/datasets/issues/758/comments
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">

The code I am using is:

```python
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), num_proc=8)
dataset.set_format(type='torch', columns=['input_ids'])
dataset.save_to_disk(file_path + '.arrow')
```
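One mitigation worth trying (my suggestion, not from this thread) is batching the tokenizer calls; this sketch reuses `tokenizer` and `args` from the snippet above:

```python
# Batched mapping: the tokenizer gets whole lists of texts per call,
# which cuts per-example Python overhead and evens out worker load.
dataset = dataset.map(
    lambda batch: tokenizer(
        batch["text"],
        add_special_tokens=True,
        truncation=True,
        max_length=args.block_size,
    ),
    batched=True,
    num_proc=8,
)
```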
{ "avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4", "events_url": "https://api.github.com/users/ksjae/events{/privacy}", "followers_url": "https://api.github.com/users/ksjae/followers", "following_url": "https://api.github.com/users/ksjae/following{/other_user}", "gists_url": "https://api.github.com/users/ksjae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ksjae", "id": 17930170, "login": "ksjae", "node_id": "MDQ6VXNlcjE3OTMwMTcw", "organizations_url": "https://api.github.com/users/ksjae/orgs", "received_events_url": "https://api.github.com/users/ksjae/received_events", "repos_url": "https://api.github.com/users/ksjae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ksjae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ksjae/subscriptions", "type": "User", "url": "https://api.github.com/users/ksjae" }
https://api.github.com/repos/huggingface/datasets/issues/758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/758/timeline
closed
false
758
null
2020-10-28T03:59:45Z
null
false
728,241,494
https://api.github.com/repos/huggingface/datasets/issues/757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/757/events
[]
null
2020-12-23T14:06:29Z
[]
https://github.com/huggingface/datasets/issues/757
NONE
completed
null
null
[]
CUDA out of memory
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/757/reactions" }
MDU6SXNzdWU3MjgyNDE0OTQ=
null
2020-10-23T13:57:00Z
https://api.github.com/repos/huggingface/datasets/issues/757/comments
With your dataset, CUDA runs out of memory as soon as the trainer begins. However, without changing any other element or parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47059217?v=4", "events_url": "https://api.github.com/users/li1117heex/events{/privacy}", "followers_url": "https://api.github.com/users/li1117heex/followers", "following_url": "https://api.github.com/users/li1117heex/following{/other_user}", "gists_url": "https://api.github.com/users/li1117heex/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/li1117heex", "id": 47059217, "login": "li1117heex", "node_id": "MDQ6VXNlcjQ3MDU5MjE3", "organizations_url": "https://api.github.com/users/li1117heex/orgs", "received_events_url": "https://api.github.com/users/li1117heex/received_events", "repos_url": "https://api.github.com/users/li1117heex/repos", "site_admin": false, "starred_url": "https://api.github.com/users/li1117heex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/li1117heex/subscriptions", "type": "User", "url": "https://api.github.com/users/li1117heex" }
https://api.github.com/repos/huggingface/datasets/issues/757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/757/timeline
closed
false
757
null
2020-12-23T14:06:29Z
null
false
728,211,373
https://api.github.com/repos/huggingface/datasets/issues/756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/756/events
[]
null
2020-10-26T12:55:20Z
[]
https://github.com/huggingface/datasets/pull/756
CONTRIBUTOR
null
false
null
[]
Start community-provided dataset docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/756/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA4OTYwNTc3
{ "diff_url": "https://github.com/huggingface/datasets/pull/756.diff", "html_url": "https://github.com/huggingface/datasets/pull/756", "merged_at": "2020-10-26T12:55:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/756.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/756" }
2020-10-23T13:17:41Z
https://api.github.com/repos/huggingface/datasets/issues/756/comments
Continuation of #736 with clean fork. #### Old description This is what I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. In slack @thomwolf called it a user-namespace dataset, but the docs call it community dataset. I think the first naming is clearer, but I didn't address that here. I didn't add metadata, will try that.
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
https://api.github.com/repos/huggingface/datasets/issues/756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/756/timeline
closed
false
756
null
2020-10-26T12:55:19Z
null
true
728,203,821
https://api.github.com/repos/huggingface/datasets/issues/755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/755/events
[]
null
2020-10-23T13:15:37Z
[]
https://github.com/huggingface/datasets/pull/755
CONTRIBUTOR
null
false
null
[]
Start community-provided dataset docs V2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/755/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA4OTU0NDI2
{ "diff_url": "https://github.com/huggingface/datasets/pull/755.diff", "html_url": "https://github.com/huggingface/datasets/pull/755", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/755.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/755" }
2020-10-23T13:07:30Z
https://api.github.com/repos/huggingface/datasets/issues/755/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
https://api.github.com/repos/huggingface/datasets/issues/755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/755/timeline
closed
false
755
null
2020-10-23T13:15:37Z
null
true
727,863,105
https://api.github.com/repos/huggingface/datasets/issues/754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/754/events
[]
null
2021-01-01T03:11:56Z
[]
https://github.com/huggingface/datasets/pull/754
CONTRIBUTOR
null
false
null
[]
Use full released xsum dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/754/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2
{ "diff_url": "https://github.com/huggingface/datasets/pull/754.diff", "html_url": "https://github.com/huggingface/datasets/pull/754", "merged_at": "2020-10-26T12:56:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/754.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/754" }
2020-10-23T03:29:49Z
https://api.github.com/repos/huggingface/datasets/issues/754/comments
#672

Fix xsum to expand coverage and include IDs. Code based on the parser from an older version of `datasets/xsum/xsum.py`.

@lhoestq
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
https://api.github.com/repos/huggingface/datasets/issues/754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/754/timeline
closed
false
754
null
2020-10-26T12:56:58Z
null
true
727,434,935
https://api.github.com/repos/huggingface/datasets/issues/753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/753/events
[]
null
2020-10-23T08:42:11Z
[]
https://github.com/huggingface/datasets/pull/753
MEMBER
null
false
null
[]
Fix doc links to viewer
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/753/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA4MzI4ODM0
{ "diff_url": "https://github.com/huggingface/datasets/pull/753.diff", "html_url": "https://github.com/huggingface/datasets/pull/753", "merged_at": "2020-10-23T08:42:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/753.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/753" }
2020-10-22T14:20:16Z
https://api.github.com/repos/huggingface/datasets/issues/753/comments
It seems #733 forgot some links in the doc :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/5020707?v=4", "events_url": "https://api.github.com/users/Pierrci/events{/privacy}", "followers_url": "https://api.github.com/users/Pierrci/followers", "following_url": "https://api.github.com/users/Pierrci/following{/other_user}", "gists_url": "https://api.github.com/users/Pierrci/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Pierrci", "id": 5020707, "login": "Pierrci", "node_id": "MDQ6VXNlcjUwMjA3MDc=", "organizations_url": "https://api.github.com/users/Pierrci/orgs", "received_events_url": "https://api.github.com/users/Pierrci/received_events", "repos_url": "https://api.github.com/users/Pierrci/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Pierrci/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pierrci/subscriptions", "type": "User", "url": "https://api.github.com/users/Pierrci" }
https://api.github.com/repos/huggingface/datasets/issues/753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/753/timeline
closed
false
753
null
2020-10-23T08:42:11Z
null
true
726,917,801
https://api.github.com/repos/huggingface/datasets/issues/752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/752/events
[]
null
2020-10-22T16:19:42Z
[]
https://github.com/huggingface/datasets/issues/752
NONE
completed
null
null
[]
Clicking on a metric in the search page points to datasets page giving "Missing dataset" warning
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/752/reactions" }
MDU6SXNzdWU3MjY5MTc4MDE=
null
2020-10-21T22:56:23Z
https://api.github.com/repos/huggingface/datasets/issues/752/comments
Hi! Sorry if this isn't the right place to talk about the website, I just didn't know exactly where to write this. Searching for a metric on https://huggingface.co/metrics gives the right results, but clicking on a metric (e.g. ROUGE) points to https://huggingface.co/datasets/rouge. Clicking on a metric without searching points to the right page. Thanks for all the great work!
{ "avatar_url": "https://avatars.githubusercontent.com/u/24829397?v=4", "events_url": "https://api.github.com/users/ogabrielluiz/events{/privacy}", "followers_url": "https://api.github.com/users/ogabrielluiz/followers", "following_url": "https://api.github.com/users/ogabrielluiz/following{/other_user}", "gists_url": "https://api.github.com/users/ogabrielluiz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ogabrielluiz", "id": 24829397, "login": "ogabrielluiz", "node_id": "MDQ6VXNlcjI0ODI5Mzk3", "organizations_url": "https://api.github.com/users/ogabrielluiz/orgs", "received_events_url": "https://api.github.com/users/ogabrielluiz/received_events", "repos_url": "https://api.github.com/users/ogabrielluiz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ogabrielluiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ogabrielluiz/subscriptions", "type": "User", "url": "https://api.github.com/users/ogabrielluiz" }
https://api.github.com/repos/huggingface/datasets/issues/752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/752/timeline
closed
false
752
null
2020-10-22T16:19:42Z
null
false
726,820,191
https://api.github.com/repos/huggingface/datasets/issues/751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/751/events
[]
null
2020-11-05T01:31:57Z
[]
https://github.com/huggingface/datasets/issues/751
NONE
completed
null
null
[]
Error loading ms_marco v2.1 using load_dataset()
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/751/reactions" }
MDU6SXNzdWU3MjY4MjAxOTE=
null
2020-10-21T19:54:43Z
https://api.github.com/repos/huggingface/datasets/issues/751/comments
Code:

```python
dataset = load_dataset('ms_marco', 'v2.1')
```

Error:

```
---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
      9
     10 # Downloading and loading a dataset
---> 11 dataset = load_dataset('ms_marco', 'v2.1')

10 frames
/usr/lib/python3.6/json/decoder.py in raw_decode(self, s, idx)
    353         """
    354         try:
--> 355             obj, end = self.scan_once(s, idx)
    356         except StopIteration as err:
    357             raise JSONDecodeError("Expecting value", s, err.value) from None

JSONDecodeError: Unterminated string starting at: line 1 column 388988661 (char 388988660)
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/30478979?v=4", "events_url": "https://api.github.com/users/JainSahit/events{/privacy}", "followers_url": "https://api.github.com/users/JainSahit/followers", "following_url": "https://api.github.com/users/JainSahit/following{/other_user}", "gists_url": "https://api.github.com/users/JainSahit/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JainSahit", "id": 30478979, "login": "JainSahit", "node_id": "MDQ6VXNlcjMwNDc4OTc5", "organizations_url": "https://api.github.com/users/JainSahit/orgs", "received_events_url": "https://api.github.com/users/JainSahit/received_events", "repos_url": "https://api.github.com/users/JainSahit/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JainSahit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JainSahit/subscriptions", "type": "User", "url": "https://api.github.com/users/JainSahit" }
https://api.github.com/repos/huggingface/datasets/issues/751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/751/timeline
closed
false
751
null
2020-11-05T01:31:57Z
null
false
726,589,446
https://api.github.com/repos/huggingface/datasets/issues/750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/750/events
[]
null
2020-10-29T09:36:01Z
[]
https://github.com/huggingface/datasets/issues/750
CONTRIBUTOR
completed
null
null
[]
load_dataset doesn't include `features` in its hash
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions" }
MDU6SXNzdWU3MjY1ODk0NDY=
null
2020-10-21T15:16:41Z
https://api.github.com/repos/huggingface/datasets/issues/750/comments
It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features for an already downloaded dataset, those are ignored.

Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI, so I'd like to do something along the lines of:

```python
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names=['entailment', 'contradiction', 'neutral'])  # new label order
dataset = load_dataset("glue", "mnli", features=features)
```
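As an aside (later-version API, not part of the original report), datasets eventually grew a helper targeted at exactly this label-reordering case; a sketch assuming a recent release:

```python
from datasets import load_dataset

mnli = load_dataset("glue", "mnli", split="train")

# Re-map stored integer labels to match a model's expected label order.
label2id = {"entailment": 0, "contradiction": 1, "neutral": 2}
mnli = mnli.align_labels_with_mapping(label2id, "label")
```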
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/750/timeline
closed
false
750
null
2020-10-29T09:36:01Z
null
false
726,366,062
https://api.github.com/repos/huggingface/datasets/issues/749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/749/events
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
null
2022-09-30T11:35:30Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" } ]
https://github.com/huggingface/datasets/issues/749
CONTRIBUTOR
completed
null
null
[]
[XGLUE] Adding new dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions" }
MDU6SXNzdWU3MjYzNjYwNjI=
null
2020-10-21T10:51:36Z
https://api.github.com/repos/huggingface/datasets/issues/749/comments
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf). I'm planning on adding the dataset to the library myself in a couple of weeks. Also tagging @JetRunner @qiweizhen in case I need some guidance.
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/749/timeline
closed
false
749
null
2021-01-06T10:02:55Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
false
726,196,589
https://api.github.com/repos/huggingface/datasets/issues/748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/748/events
[]
null
2020-10-21T08:52:42Z
[]
https://github.com/huggingface/datasets/pull/748
CONTRIBUTOR
null
false
null
[]
New version of CompGuessWhat?! with refined annotations
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/748/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA3MzAyNjE3
{ "diff_url": "https://github.com/huggingface/datasets/pull/748.diff", "html_url": "https://github.com/huggingface/datasets/pull/748", "merged_at": "2020-10-21T08:46:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/748.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/748" }
2020-10-21T06:55:41Z
https://api.github.com/repos/huggingface/datasets/issues/748/comments
This pull request introduces a few fixes to the annotations for VisualGenome in the CompGuessWhat?! original split.
{ "avatar_url": "https://avatars.githubusercontent.com/u/1479733?v=4", "events_url": "https://api.github.com/users/aleSuglia/events{/privacy}", "followers_url": "https://api.github.com/users/aleSuglia/followers", "following_url": "https://api.github.com/users/aleSuglia/following{/other_user}", "gists_url": "https://api.github.com/users/aleSuglia/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aleSuglia", "id": 1479733, "login": "aleSuglia", "node_id": "MDQ6VXNlcjE0Nzk3MzM=", "organizations_url": "https://api.github.com/users/aleSuglia/orgs", "received_events_url": "https://api.github.com/users/aleSuglia/received_events", "repos_url": "https://api.github.com/users/aleSuglia/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aleSuglia/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aleSuglia/subscriptions", "type": "User", "url": "https://api.github.com/users/aleSuglia" }
https://api.github.com/repos/huggingface/datasets/issues/748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/748/timeline
closed
false
748
null
2020-10-21T08:46:19Z
null
true
725,884,704
https://api.github.com/repos/huggingface/datasets/issues/747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/747/events
[]
null
2020-10-21T08:35:15Z
[]
https://github.com/huggingface/datasets/pull/747
CONTRIBUTOR
null
false
null
[]
Add Quail question answering dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/747/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA3MDQ3MDE4
{ "diff_url": "https://github.com/huggingface/datasets/pull/747.diff", "html_url": "https://github.com/huggingface/datasets/pull/747", "merged_at": "2020-10-21T08:35:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/747.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/747" }
2020-10-20T19:33:14Z
https://api.github.com/repos/huggingface/datasets/issues/747/comments
QuAIL is a multi-domain RC dataset featuring news, blogs, fiction and user stories. Each domain is represented by 200 texts, which gives us a 4-way data split. The texts are 300-350 word excerpts from CC-licensed texts that were hand-picked so as to make sense to human readers without larger context. Domain diversity mitigates the issue of possible overlap between training and test data of large pre-trained models, which the current SOTA systems are based on. For instance, BERT is trained on Wikipedia + BookCorpus, and was tested on Wikipedia-based SQuAD (Devlin, Chang, Lee, & Toutanova, 2019). https://text-machine-lab.github.io/blog/2020/quail/ @annargrs
{ "avatar_url": "https://avatars.githubusercontent.com/u/3595526?v=4", "events_url": "https://api.github.com/users/sai-prasanna/events{/privacy}", "followers_url": "https://api.github.com/users/sai-prasanna/followers", "following_url": "https://api.github.com/users/sai-prasanna/following{/other_user}", "gists_url": "https://api.github.com/users/sai-prasanna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sai-prasanna", "id": 3595526, "login": "sai-prasanna", "node_id": "MDQ6VXNlcjM1OTU1MjY=", "organizations_url": "https://api.github.com/users/sai-prasanna/orgs", "received_events_url": "https://api.github.com/users/sai-prasanna/received_events", "repos_url": "https://api.github.com/users/sai-prasanna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sai-prasanna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sai-prasanna/subscriptions", "type": "User", "url": "https://api.github.com/users/sai-prasanna" }
https://api.github.com/repos/huggingface/datasets/issues/747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/747/timeline
closed
false
747
null
2020-10-21T08:35:15Z
null
true
725,627,235
https://api.github.com/repos/huggingface/datasets/issues/746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/746/events
[]
null
2021-03-23T06:19:38Z
[]
https://github.com/huggingface/datasets/pull/746
CONTRIBUTOR
null
false
null
[]
dataset(ngt): add ngt dataset initial loading script
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/746/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA2ODMzNDMw
{ "diff_url": "https://github.com/huggingface/datasets/pull/746.diff", "html_url": "https://github.com/huggingface/datasets/pull/746", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/746.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/746" }
2020-10-20T14:04:58Z
https://api.github.com/repos/huggingface/datasets/issues/746/comments
Currently this only makes the paths to the annotation ELAN (eaf) files and videos available. This is the first way to download this dataset that is not manual, file-by-file. Only the necessary files are downloaded: the annotation files are very small (20MB for all of them), but the video files are large (100GB in total), saved in `mpg` format. I do not intend to actually store these as uncompressed arrays of frames, because that would be huge. Future updates may add pose estimation files for all videos, making it easier to work with this data.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
https://api.github.com/repos/huggingface/datasets/issues/746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/746/timeline
closed
false
746
null
2021-03-23T06:19:38Z
null
true
725,589,352
https://api.github.com/repos/huggingface/datasets/issues/745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/745/events
[]
null
2021-04-22T14:47:31Z
[]
https://github.com/huggingface/datasets/pull/745
MEMBER
null
false
null
[]
Fix emotion description
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/745/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA2ODAxMTI0
{ "diff_url": "https://github.com/huggingface/datasets/pull/745.diff", "html_url": "https://github.com/huggingface/datasets/pull/745", "merged_at": "2020-10-21T08:38:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/745.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/745" }
2020-10-20T13:28:39Z
https://api.github.com/repos/huggingface/datasets/issues/745/comments
Fixes the description of the emotion dataset to reflect the class names observed in the data, not the ones described in the paper. I also took the liberty to make use of `ClassLabel` for the emotion labels.
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
https://api.github.com/repos/huggingface/datasets/issues/745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/745/timeline
closed
false
745
null
2020-10-21T08:38:27Z
null
true
724,918,448
https://api.github.com/repos/huggingface/datasets/issues/744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/744/events
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
null
2020-10-26T16:36:17Z
[]
https://github.com/huggingface/datasets/issues/744
NONE
completed
null
null
[]
Dataset Explorer Doesn't Work for squad_es and squad_it
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions" }
MDU6SXNzdWU3MjQ5MTg0NDg=
null
2020-10-19T19:34:12Z
https://api.github.com/repos/huggingface/datasets/issues/744/comments
https://huggingface.co/nlp/viewer/?dataset=squad_es https://huggingface.co/nlp/viewer/?dataset=squad_it Both pages show "OSError: [Errno 28] No space left on device".
{ "avatar_url": "https://avatars.githubusercontent.com/u/22607038?v=4", "events_url": "https://api.github.com/users/gaotongxiao/events{/privacy}", "followers_url": "https://api.github.com/users/gaotongxiao/followers", "following_url": "https://api.github.com/users/gaotongxiao/following{/other_user}", "gists_url": "https://api.github.com/users/gaotongxiao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gaotongxiao", "id": 22607038, "login": "gaotongxiao", "node_id": "MDQ6VXNlcjIyNjA3MDM4", "organizations_url": "https://api.github.com/users/gaotongxiao/orgs", "received_events_url": "https://api.github.com/users/gaotongxiao/received_events", "repos_url": "https://api.github.com/users/gaotongxiao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gaotongxiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaotongxiao/subscriptions", "type": "User", "url": "https://api.github.com/users/gaotongxiao" }
https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/744/timeline
closed
false
744
null
2020-10-26T16:36:17Z
null
false
724,703,980
https://api.github.com/repos/huggingface/datasets/issues/743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/743/events
[]
null
2022-11-28T16:59:36Z
[]
https://github.com/huggingface/datasets/issues/743
CONTRIBUTOR
null
null
null
[]
load_dataset for CSV files not working
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/743/reactions" }
MDU6SXNzdWU3MjQ3MDM5ODA=
null
2020-10-19T14:53:51Z
https://api.github.com/repos/huggingface/datasets/issues/743/comments
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets. ```python from datasets import load_dataset dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master") ``` Displayed error: ``` ... ArrowInvalid: CSV parse error: Expected 2 columns, got 1 ``` I should mention that when I've tried to read data from `https://github.com/lhoestq/transformers/tree/custom-dataset-in-rag-retriever/examples/rag/test_data/my_knowledge_dataset.csv` it worked without a problem. I've read that there might be some problems with the `\r` character, so I've removed them from the custom dataset, but the problem still remains. I've added a colab reproducing the bug, but unfortunately I cannot provide the dataset. https://colab.research.google.com/drive/1Qzu7sC-frZVeniiWOwzoCe_UHZsrlxu8?usp=sharing Is there any workaround for this? Thank you
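As a possible workaround — a minimal sketch, assuming the file parses with pandas — the CSV can be read with `quoting=csv.QUOTE_NONE` so stray quote characters are treated as literal text, then converted with `Dataset.from_pandas`:

```python
import csv

import pandas as pd
from datasets import Dataset

# Treat quote characters as plain text instead of field delimiters
df = pd.read_csv(
    "./sample_data.csv",  # path from the report above
    sep="\t",
    names=["title", "text"],
    quoting=csv.QUOTE_NONE,
)
dataset = Dataset.from_pandas(df)
```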
{ "avatar_url": "https://avatars.githubusercontent.com/u/2815308?v=4", "events_url": "https://api.github.com/users/iliemihai/events{/privacy}", "followers_url": "https://api.github.com/users/iliemihai/followers", "following_url": "https://api.github.com/users/iliemihai/following{/other_user}", "gists_url": "https://api.github.com/users/iliemihai/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliemihai", "id": 2815308, "login": "iliemihai", "node_id": "MDQ6VXNlcjI4MTUzMDg=", "organizations_url": "https://api.github.com/users/iliemihai/orgs", "received_events_url": "https://api.github.com/users/iliemihai/received_events", "repos_url": "https://api.github.com/users/iliemihai/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliemihai/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliemihai/subscriptions", "type": "User", "url": "https://api.github.com/users/iliemihai" }
https://api.github.com/repos/huggingface/datasets/issues/743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/743/timeline
open
false
743
null
null
null
false
724,509,974
https://api.github.com/repos/huggingface/datasets/issues/742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/742/events
[]
null
2020-10-22T16:19:49Z
[]
https://github.com/huggingface/datasets/pull/742
CONTRIBUTOR
null
false
null
[]
Add OCNLI, a new CLUE dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/742/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA1ODgzNjI3
{ "diff_url": "https://github.com/huggingface/datasets/pull/742.diff", "html_url": "https://github.com/huggingface/datasets/pull/742", "merged_at": "2020-10-22T16:19:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/742.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/742" }
2020-10-19T11:06:33Z
https://api.github.com/repos/huggingface/datasets/issues/742/comments
OCNLI stands for Original Chinese Natural Language Inference. It is a corpus for Chinese Natural Language Inference, collected by closely following the procedures of MNLI, but with enhanced strategies aiming for more challenging inference pairs. We want to emphasize that we did not use human or machine translation in creating the dataset, so our Chinese texts are original and not translated.
{ "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "events_url": "https://api.github.com/users/JetRunner/events{/privacy}", "followers_url": "https://api.github.com/users/JetRunner/followers", "following_url": "https://api.github.com/users/JetRunner/following{/other_user}", "gists_url": "https://api.github.com/users/JetRunner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JetRunner", "id": 22514219, "login": "JetRunner", "node_id": "MDQ6VXNlcjIyNTE0MjE5", "organizations_url": "https://api.github.com/users/JetRunner/orgs", "received_events_url": "https://api.github.com/users/JetRunner/received_events", "repos_url": "https://api.github.com/users/JetRunner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JetRunner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JetRunner/subscriptions", "type": "User", "url": "https://api.github.com/users/JetRunner" }
https://api.github.com/repos/huggingface/datasets/issues/742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/742/timeline
closed
false
742
null
2020-10-22T16:19:48Z
null
true
723,924,275
https://api.github.com/repos/huggingface/datasets/issues/741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/741/events
[]
null
2022-02-15T17:03:10Z
[]
https://github.com/huggingface/datasets/issues/741
CONTRIBUTOR
completed
null
null
[]
Creating dataset consumes too much memory
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/741/reactions" }
MDU6SXNzdWU3MjM5MjQyNzU=
null
2020-10-18T06:07:06Z
https://api.github.com/repos/huggingface/datasets/issues/741/comments
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue. Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400): ```python def _generate_examples(self, base_path, split): """ Yields examples. """ filepath = os.path.join(base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv") images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split) with open(filepath, "r", encoding="utf-8") as f: data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE) for row in data: frames_path = os.path.join(images_path, row["video"])[:-7] np_frames = [] for frame_name in os.listdir(frames_path): frame_path = os.path.join(frames_path, frame_name) im = Image.open(frame_path) np_frames.append(np.asarray(im)) im.close() yield row["name"], {"video": np_frames} ``` The dataset creation process goes out of memory on a machine with 500GB RAM. I was under the impression that the "generator" here is exactly for that, to avoid memory constraints. However, even if you want the entire dataset in memory, it would be in the worst case `260x210x3 x 400 max length x 7000 samples` in bytes (uint8) = 458.64 gigabytes. So I'm not sure why it's taking more than 500GB. And the dataset creation fails after 170 examples on a machine with 120GB RAM, and after 672 examples on a machine with 500GB RAM. --- ## Info that might help: Iterating over examples is extremely slow. ![image](https://user-images.githubusercontent.com/5757359/96359590-3c666780-111d-11eb-9347-1f833ad982a9.png) If I perform this iteration in my own custom loop (without saving to file), it runs at 8-9 examples/sec. And you can see that at this point it is using 94% of the memory: ![image](https://user-images.githubusercontent.com/5757359/96359606-7afc2200-111d-11eb-8c11-0afbdba1a6a3.png) And it is only using one CPU core, which is probably why it's so slow: ![image](https://user-images.githubusercontent.com/5757359/96359630-a3841c00-111d-11eb-9ba0-7fd3cdf51d26.png)
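One way to keep memory bounded during generation — a sketch, not necessarily the library's recommended pattern — is to yield the frame paths instead of the decoded `uint8` arrays and decode the frames lazily at training time:

```python
import csv
import os

def _generate_examples(self, base_path, split):
    """Yields examples holding frame paths only; frames are decoded lazily later."""
    filepath = os.path.join(
        base_path, "annotations", "manual", "PHOENIX-2014-T." + split + ".corpus.csv"
    )
    images_path = os.path.join(base_path, "features", "fullFrame-210x260px", split)
    with open(filepath, "r", encoding="utf-8") as f:
        data = csv.DictReader(f, delimiter="|", quoting=csv.QUOTE_NONE)
        for row in data:
            frames_path = os.path.join(images_path, row["video"])[:-7]
            # A few KB of path strings per example instead of ~65MB of pixels
            frame_paths = sorted(
                os.path.join(frames_path, name) for name in os.listdir(frames_path)
            )
            yield row["name"], {"video": frame_paths}
```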
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
https://api.github.com/repos/huggingface/datasets/issues/741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/741/timeline
closed
false
741
null
2022-02-15T17:03:10Z
null
false
723,047,958
https://api.github.com/repos/huggingface/datasets/issues/740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/740/events
[]
null
2020-10-19T08:54:37Z
[]
https://github.com/huggingface/datasets/pull/740
MEMBER
null
false
null
[]
Fix TREC urls
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/740/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA0NzAyNTc0
{ "diff_url": "https://github.com/huggingface/datasets/pull/740.diff", "html_url": "https://github.com/huggingface/datasets/pull/740", "merged_at": "2020-10-19T08:54:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/740.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/740" }
2020-10-16T09:11:28Z
https://api.github.com/repos/huggingface/datasets/issues/740/comments
The old TREC urls are now redirections. I updated the urls to the new ones, since we don't support redirections for downloads. Fix #737
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/740/timeline
closed
false
740
null
2020-10-19T08:54:36Z
null
true
723,044,066
https://api.github.com/repos/huggingface/datasets/issues/739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/739/events
[]
null
2020-11-26T14:02:50Z
[]
https://github.com/huggingface/datasets/pull/739
MEMBER
null
false
null
[]
Add wiki dpr multiset embeddings
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3
{ "diff_url": "https://github.com/huggingface/datasets/pull/739.diff", "html_url": "https://github.com/huggingface/datasets/pull/739", "merged_at": "2020-11-26T14:02:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/739.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/739" }
2020-10-16T09:05:49Z
https://api.github.com/repos/huggingface/datasets/issues/739/comments
There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset. Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset. In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"`
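Usage would look roughly like this — a sketch; the config kwarg is forwarded by `load_dataset` as described above:

```python
from datasets import load_dataset

# Embeddings from the encoder trained on the multiset/hybrid data
wiki_multiset = load_dataset("wiki_dpr", embeddings_name="multiset")

# Embeddings from the encoder trained on Natural Questions (previously the only option)
wiki_nq = load_dataset("wiki_dpr", embeddings_name="nq")
```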
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/739/timeline
closed
false
739
null
2020-11-26T14:02:49Z
null
true
723,033,923
https://api.github.com/repos/huggingface/datasets/issues/738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/738/events
[]
null
2021-01-21T16:07:15Z
[]
https://github.com/huggingface/datasets/pull/738
CONTRIBUTOR
null
false
null
[]
Replace seqeval code with original classification_report for simplicity
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/738/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4
{ "diff_url": "https://github.com/huggingface/datasets/pull/738.diff", "html_url": "https://github.com/huggingface/datasets/pull/738", "merged_at": "2020-10-19T10:31:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/738.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/738" }
2020-10-16T08:51:45Z
https://api.github.com/repos/huggingface/datasets/issues/738/comments
Recently, the original seqeval was updated to return per-type scores and overall scores as a dictionary. This PR replaces the current code with the original function (`classification_report`) to simplify it. Also, the original code has been updated to fix #352. - Related issue: https://github.com/chakki-works/seqeval/pull/38 ```python from datasets import load_metric metric = load_metric("seqeval") y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']] metric.compute(predictions=y_pred, references=y_true) # Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8} ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/6737785?v=4", "events_url": "https://api.github.com/users/Hironsan/events{/privacy}", "followers_url": "https://api.github.com/users/Hironsan/followers", "following_url": "https://api.github.com/users/Hironsan/following{/other_user}", "gists_url": "https://api.github.com/users/Hironsan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Hironsan", "id": 6737785, "login": "Hironsan", "node_id": "MDQ6VXNlcjY3Mzc3ODU=", "organizations_url": "https://api.github.com/users/Hironsan/orgs", "received_events_url": "https://api.github.com/users/Hironsan/received_events", "repos_url": "https://api.github.com/users/Hironsan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Hironsan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hironsan/subscriptions", "type": "User", "url": "https://api.github.com/users/Hironsan" }
https://api.github.com/repos/huggingface/datasets/issues/738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/738/timeline
closed
false
738
null
2020-10-19T10:31:12Z
null
true
722,463,923
https://api.github.com/repos/huggingface/datasets/issues/737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/737/events
[]
null
2020-10-19T08:54:36Z
[]
https://github.com/huggingface/datasets/issues/737
NONE
completed
null
null
[]
Trec Dataset Connection Error
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/737/reactions" }
MDU6SXNzdWU3MjI0NjM5MjM=
null
2020-10-15T15:57:53Z
https://api.github.com/repos/huggingface/datasets/issues/737/comments
**Datasets Version:** 1.1.2 **Python Version:** 3.6/3.7 **Code:** ```python from datasets import load_dataset load_dataset("trec") ``` **Expected behavior:** Download Trec dataset and load Dataset object **Current Behavior:** Get a connection error saying it couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label (but the link doesn't seem broken) <details> <summary>Error Logs</summary> Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /root/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-8-66bf1242096e> in <module>() ----> 1 load_dataset("trec") 10 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label </details>
{ "avatar_url": "https://avatars.githubusercontent.com/u/10554495?v=4", "events_url": "https://api.github.com/users/aychang95/events{/privacy}", "followers_url": "https://api.github.com/users/aychang95/followers", "following_url": "https://api.github.com/users/aychang95/following{/other_user}", "gists_url": "https://api.github.com/users/aychang95/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aychang95", "id": 10554495, "login": "aychang95", "node_id": "MDQ6VXNlcjEwNTU0NDk1", "organizations_url": "https://api.github.com/users/aychang95/orgs", "received_events_url": "https://api.github.com/users/aychang95/received_events", "repos_url": "https://api.github.com/users/aychang95/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aychang95/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aychang95/subscriptions", "type": "User", "url": "https://api.github.com/users/aychang95" }
https://api.github.com/repos/huggingface/datasets/issues/737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/737/timeline
closed
false
737
null
2020-10-19T08:54:36Z
null
false
722,348,191
https://api.github.com/repos/huggingface/datasets/issues/736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/736/events
[]
null
2020-10-23T13:15:28Z
[]
https://github.com/huggingface/datasets/pull/736
CONTRIBUTOR
null
false
null
[]
Start community-provided dataset docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/736/reactions" }
MDExOlB1bGxSZXF1ZXN0NTA0MTE0MjMy
{ "diff_url": "https://github.com/huggingface/datasets/pull/736.diff", "html_url": "https://github.com/huggingface/datasets/pull/736", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/736.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/736" }
2020-10-15T13:41:39Z
https://api.github.com/repos/huggingface/datasets/issues/736/comments
This is one I did to get the pseudo-labels updated. Not sure if it generalizes, but I figured I would write it down. It was pretty easy because all I had to do was make properly formatted directories and change URLs. + In slack @thomwolf called it a `user-namespace` dataset, but the docs call it `community dataset`. I think the first naming is clearer, but I didn't address that here. + I didn't add metadata, will try that.
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
https://api.github.com/repos/huggingface/datasets/issues/736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/736/timeline
closed
false
736
null
2020-10-23T13:15:28Z
null
true
722,225,270
https://api.github.com/repos/huggingface/datasets/issues/735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/735/events
[]
null
2020-10-30T13:23:52Z
[]
https://github.com/huggingface/datasets/issues/735
CONTRIBUTOR
completed
null
null
[]
Throw error when an unexpected key is used in data_files
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/735/reactions" }
MDU6SXNzdWU3MjIyMjUyNzA=
null
2020-10-15T10:55:27Z
https://api.github.com/repos/huggingface/datasets/issues/735/comments
I have found that only "train", "validation" and "test" are valid keys in the `data_files` argument. When you use any other ones, those attached files are silently ignored - leading to unexpected behaviour for the users. So the following, unintuitively, returns only one key (namely `train`). ```python datasets = load_dataset("text", data_files={"train": train_f, "valid": valid_f}) print(datasets.keys()) # dict_keys(['train']) ``` whereas using `validation` instead, does return the expected result: ```python datasets = load_dataset("text", data_files={"train": train_f, "validation": valid_f}) print(datasets.keys()) # dict_keys(['train', 'validation']) ``` I would like to see more freedom in which keys one can use, but if that is not possible at least an error should be thrown when using an unexpected key.
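Until arbitrary keys are either supported or rejected loudly, a possible workaround — a minimal sketch reusing `train_f`/`valid_f` from above — is to load each file as its own `train` split and assemble the dict manually:

```python
from datasets import load_dataset

# train_f / valid_f as defined above
splits = {
    "train": load_dataset("text", data_files={"train": train_f}, split="train"),
    "valid": load_dataset("text", data_files={"train": valid_f}, split="train"),
}
print(splits.keys())  # dict_keys(['train', 'valid'])
```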
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
https://api.github.com/repos/huggingface/datasets/issues/735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/735/timeline
closed
false
735
null
2020-10-30T13:23:52Z
null
false
721,767,848
https://api.github.com/repos/huggingface/datasets/issues/734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/734/events
[]
null
2020-10-15T09:27:43Z
[]
https://github.com/huggingface/datasets/pull/734
CONTRIBUTOR
null
false
null
[]
Fix GLUE metric description
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions" }
MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz
{ "diff_url": "https://github.com/huggingface/datasets/pull/734.diff", "html_url": "https://github.com/huggingface/datasets/pull/734", "merged_at": "2020-10-15T09:27:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/734.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/734" }
2020-10-14T20:44:14Z
https://api.github.com/repos/huggingface/datasets/issues/734/comments
Small typo: the description says translation instead of prediction.
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/734/timeline
closed
false
734
null
2020-10-15T09:27:42Z
null
true
721,366,744
https://api.github.com/repos/huggingface/datasets/issues/733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/733/events
[]
null
2020-10-14T14:07:31Z
[]
https://github.com/huggingface/datasets/pull/733
CONTRIBUTOR
null
false
null
[]
Update link to dataset viewer
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/733/reactions" }
MDExOlB1bGxSZXF1ZXN0NTAzMjk2NDQw
{ "diff_url": "https://github.com/huggingface/datasets/pull/733.diff", "html_url": "https://github.com/huggingface/datasets/pull/733", "merged_at": "2020-10-14T14:07:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/733.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/733" }
2020-10-14T11:13:23Z
https://api.github.com/repos/huggingface/datasets/issues/733/comments
Change 404 error links in quick tour to working ones
{ "avatar_url": "https://avatars.githubusercontent.com/u/12969168?v=4", "events_url": "https://api.github.com/users/negedng/events{/privacy}", "followers_url": "https://api.github.com/users/negedng/followers", "following_url": "https://api.github.com/users/negedng/following{/other_user}", "gists_url": "https://api.github.com/users/negedng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/negedng", "id": 12969168, "login": "negedng", "node_id": "MDQ6VXNlcjEyOTY5MTY4", "organizations_url": "https://api.github.com/users/negedng/orgs", "received_events_url": "https://api.github.com/users/negedng/received_events", "repos_url": "https://api.github.com/users/negedng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/negedng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/negedng/subscriptions", "type": "User", "url": "https://api.github.com/users/negedng" }
https://api.github.com/repos/huggingface/datasets/issues/733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/733/timeline
closed
false
733
null
2020-10-14T14:07:31Z
null
true
721,359,448
https://api.github.com/repos/huggingface/datasets/issues/732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/732/events
[]
null
2021-03-23T06:19:43Z
[]
https://github.com/huggingface/datasets/pull/732
CONTRIBUTOR
null
false
null
[]
dataset(wlasl): initial loading script
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/732/reactions" }
MDExOlB1bGxSZXF1ZXN0NTAzMjkwMjEy
{ "diff_url": "https://github.com/huggingface/datasets/pull/732.diff", "html_url": "https://github.com/huggingface/datasets/pull/732", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/732.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/732" }
2020-10-14T11:01:42Z
https://api.github.com/repos/huggingface/datasets/issues/732/comments
takes like 9-10 hours to download all of the videos for the dataset, but it does finish :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
https://api.github.com/repos/huggingface/datasets/issues/732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/732/timeline
closed
false
732
null
2021-03-23T06:19:43Z
null
true
721,142,985
https://api.github.com/repos/huggingface/datasets/issues/731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/731/events
[]
null
2020-10-28T15:27:06Z
[]
https://github.com/huggingface/datasets/pull/731
CONTRIBUTOR
null
false
null
[]
dataset(aslg_pc12): initial loading script
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/731/reactions" }
MDExOlB1bGxSZXF1ZXN0NTAzMTExNzc4
{ "diff_url": "https://github.com/huggingface/datasets/pull/731.diff", "html_url": "https://github.com/huggingface/datasets/pull/731", "merged_at": "2020-10-28T15:27:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/731.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/731" }
2020-10-14T05:14:37Z
https://api.github.com/repos/huggingface/datasets/issues/731/comments
This contains the only currently public part of this corpus. The rest of the corpus has not yet been made public, but this sample is still being used by researchers.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
https://api.github.com/repos/huggingface/datasets/issues/731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/731/timeline
closed
false
731
null
2020-10-28T15:27:06Z
null
true
721,073,812
https://api.github.com/repos/huggingface/datasets/issues/730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/730/events
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
null
2022-11-22T01:45:54Z
[]
https://github.com/huggingface/datasets/issues/730
NONE
completed
null
null
[]
Possible caching bug
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions" }
MDU6SXNzdWU3MjEwNzM4MTI=
null
2020-10-14T02:02:34Z
https://api.github.com/repos/huggingface/datasets/issues/730/comments
The following code with `test1.txt` containing just "🤗🤗🤗": ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) ``` produces this output: ``` Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'} ``` Just changing the order (and deleting the temp files): ``` dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8") print(dataset[0]) dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1") print(dataset[0]) ``` produces this: ``` Using custom data configuration default Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155... Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data. {'text': '🤗🤗🤗'} Using custom data configuration default Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155) {'text': '🤗🤗🤗'} ``` Is it intended that the cache path does not depend on the config entries? tested with datasets==1.1.2 and python==3.8.5
{ "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "events_url": "https://api.github.com/users/ArneBinder/events{/privacy}", "followers_url": "https://api.github.com/users/ArneBinder/followers", "following_url": "https://api.github.com/users/ArneBinder/following{/other_user}", "gists_url": "https://api.github.com/users/ArneBinder/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArneBinder", "id": 3375489, "login": "ArneBinder", "node_id": "MDQ6VXNlcjMzNzU0ODk=", "organizations_url": "https://api.github.com/users/ArneBinder/orgs", "received_events_url": "https://api.github.com/users/ArneBinder/received_events", "repos_url": "https://api.github.com/users/ArneBinder/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArneBinder/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArneBinder/subscriptions", "type": "User", "url": "https://api.github.com/users/ArneBinder" }
https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/730/timeline
closed
false
730
null
2020-10-29T09:36:01Z
null
false
719,558,876
https://api.github.com/repos/huggingface/datasets/issues/729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/729/events
[]
null
2020-10-29T15:18:24Z
[]
https://github.com/huggingface/datasets/issues/729
CONTRIBUTOR
completed
null
null
[]
Better error message when one forgets to call `add_batch` before `compute`
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/729/reactions" }
MDU6SXNzdWU3MTk1NTg4NzY=
null
2020-10-12T17:59:22Z
https://api.github.com/repos/huggingface/datasets/issues/729/comments
When using metrics, if for some reason a user forgets to call `add_batch` to a metric before `compute` (with no arguments), the error message is a bit cryptic and could probably be made clearer. ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): pass # User forgets to call `add_batch` result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-13-267729d187fa> in <module> 3 pass 4 # metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 5 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 343 elif self.process_id == 0: 344 # Let's acquire a lock on each node files to be sure they are finished writing --> 345 file_paths, filelocks = self._get_all_cache_files() 346 347 # Read the predictions and references ~/git/datasets/src/datasets/metric.py in _get_all_cache_files(self) 280 filelocks = [] 281 for process_id, file_path in enumerate(file_paths): --> 282 filelock = FileLock(file_path + ".lock") 283 try: 284 filelock.acquire(timeout=self.timeout) TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' ```
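A sketch of the kind of guard that would replace the cryptic `TypeError` with an explicit message. This is an assumed simplification of the metric's state, not the real `datasets.Metric` internals.

```python
# Simplified stand-in for a Metric: cache_file_name stays None until
# add_batch() has been called, mirroring the failure mode described above.
class MetricSketch:
    def __init__(self):
        self.cache_file_name = None

    def add_batch(self, predictions, references):
        self.cache_file_name = "default_experiment-1-0.arrow"

    def compute(self):
        # The proposed guard: fail early with an actionable message instead of
        # letting `None + ".lock"` raise a TypeError deep in _get_all_cache_files.
        if self.cache_file_name is None:
            raise ValueError(
                "No predictions were added to this metric: "
                "call add() or add_batch() before compute()."
            )

try:
    MetricSketch().compute()
except ValueError as e:
    print(e)
```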
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
https://api.github.com/repos/huggingface/datasets/issues/729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/729/timeline
closed
false
729
null
2020-10-29T15:18:24Z
null
false
719,555,780
https://api.github.com/repos/huggingface/datasets/issues/728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/728/events
[]
null
2020-10-29T09:34:42Z
[]
https://github.com/huggingface/datasets/issues/728
CONTRIBUTOR
completed
null
null
[]
Passing `cache_dir` to a metric does not work
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions" }
MDU6SXNzdWU3MTk1NTU3ODA=
null
2020-10-12T17:55:14Z
https://api.github.com/repos/huggingface/datasets/issues/728/comments
When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError: ## Reproducer ```python import datasets import torch from datasets import Metric class GatherMetric(Metric): def _info(self): return datasets.MetricInfo( description="description", citation="citation", inputs_description="kwargs", features=datasets.Features({ 'predictions': datasets.Value('int64'), 'references': datasets.Value('int64'), }), codebase_urls=[], reference_urls=[], format='numpy' ) def _compute(self, predictions, references): return {"predictions": predictions, "labels": references} metric = GatherMetric(cache_dir="test-metric") inputs = torch.randint(0, 2, (1024,)) targets = torch.randint(0, 2, (1024,)) batch_size = 8 for i in range(0, 1024, batch_size): metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) result = metric.compute() ``` ## Stack trace: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) ~/git/datasets/src/datasets/metric.py in _finalize(self) 349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features)) --> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths])) 351 except FileNotFoundError: ~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions) 227 # Prepend path to filename --> 228 pa_table = self._read_files(files) 229 files = copy.deepcopy(files) ~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files) 166 for f_dict in files: --> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict) 168 pa_tables.append(pa_table) ~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take) 291 ) --> 292 mmap = pa.memory_map(filename) 293 f = pa.ipc.open_stream(mmap) ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: ValueError Traceback (most recent call last) <ipython-input-17-e42d43cc981f> in <module> 2 for i in range(0, 1024, batch_size): 3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size]) ----> 4 result = metric.compute() ~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs) 380 if predictions is not None: 381 self.add_batch(predictions=predictions, references=references) --> 382 self._finalize() 383 384 self.cache_file_name = None ~/git/datasets/src/datasets/metric.py in _finalize(self) 351 except FileNotFoundError: 352 raise ValueError( --> 353 "Error in finalize: another metric instance is already using the local cache file. " 354 "Please specify an experiment_id to avoid colision between distributed metric instances." 355 ) ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances. ``` The code works when we remove the `cache_dir=...` from the metric.
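The doubled prefix in the failing path above can be reproduced in isolation; here is a small sketch of the bug (joining a directory onto a path that already contains it) next to the obvious fix:

```python
import os

cache_dir = "test-metric"
# data_dir already includes the user-supplied cache_dir
data_dir = os.path.join(cache_dir, "gather_metric", "default")

# Buggy: the prefix is concatenated a second time, producing the path from the traceback
buggy = os.path.join(data_dir, data_dir, "default_experiment-1-0.arrow")
print(buggy)  # test-metric/gather_metric/default/test-metric/gather_metric/default/...

# Fixed: build the file path from data_dir exactly once
fixed = os.path.join(data_dir, "default_experiment-1-0.arrow")
print(fixed)  # test-metric/gather_metric/default/default_experiment-1-0.arrow
```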
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/728/timeline
closed
false
728
null
2020-10-29T09:34:42Z
null
false
719,386,366
https://api.github.com/repos/huggingface/datasets/issues/727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/727/events
[]
null
2020-10-12T13:36:05Z
[]
https://github.com/huggingface/datasets/issues/727
MEMBER
null
null
null
[]
Parallel downloads progress bar flickers
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions" }
MDU6SXNzdWU3MTkzODYzNjY=
null
2020-10-12T13:36:05Z
https://api.github.com/repos/huggingface/datasets/issues/727/comments
When there are parallel downloads using the download manager, the tqdm progress bars flicker since they are all on the same line. To fix that we could simply specify `position=i`, for i = 0 to n where n is the number of files to download, when instantiating each tqdm progress bar. Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that shows the current download.
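A minimal sketch of the first suggestion, with made-up file names and sizes: pinning each worker's bar to its own row with `position` keeps the bars from overwriting each other.

```python
from concurrent.futures import ThreadPoolExecutor
import time

from tqdm import tqdm

files = [("file_a.bin", 30), ("file_b.bin", 50), ("file_c.bin", 40)]

def download(args):
    position, (name, size) = args
    # position=i pins this bar to terminal row i, so parallel bars don't clash
    bar = tqdm(total=size, desc=name, position=position, leave=True)
    for _ in range(size):
        time.sleep(0.01)  # stand-in for downloading one chunk
        bar.update(1)
    bar.close()

with ThreadPoolExecutor(max_workers=len(files)) as pool:
    list(pool.map(download, enumerate(files)))
```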
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/727/timeline
open
false
727
null
null
null
false
719,313,754
https://api.github.com/repos/huggingface/datasets/issues/726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/726/events
[]
null
2022-02-17T17:53:54Z
[]
https://github.com/huggingface/datasets/issues/726
NONE
completed
null
null
[]
"Checksums didn't match for dataset source files" error while loading openwebtext dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 2, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/726/reactions" }
MDU6SXNzdWU3MTkzMTM3NTQ=
null
2020-10-12T11:45:10Z
https://api.github.com/repos/huggingface/datasets/issues/726/comments
Hi, I have encountered this problem while loading the openwebtext dataset: ``` >>> dataset = load_dataset('openwebtext') Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/openwebtext/plain_text/1.0.0/5c636399c7155da97c982d0d70ecdce30fbca66a4eb4fc768ad91f8331edac02... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 536, in _download_and_prepare self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" File "/home/admin/workspace/anaconda3/envs/torch1.6-py3.7/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://zenodo.org/record/3834942/files/openwebtext.tar.xz'] ``` I think this problem is caused by a change in the released dataset. Or should I download the dataset manually? Sorry for releasing the unfinished issue by mistake.
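If the hosted archive really did change after the checksums were recorded, one workaround in this era of `datasets` is to skip the verification step; the flag below appears in the tracebacks quoted in this thread, but treat it as version-specific since later releases renamed it.

```python
from datasets import load_dataset

# Skip the recorded checksum/size verification (only if you trust the new archive).
# Alternatively, deleting the cached folder under ~/.cache/huggingface/datasets
# forces a completely fresh download.
dataset = load_dataset("openwebtext", ignore_verifications=True)
```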
{ "avatar_url": "https://avatars.githubusercontent.com/u/16469472?v=4", "events_url": "https://api.github.com/users/SparkJiao/events{/privacy}", "followers_url": "https://api.github.com/users/SparkJiao/followers", "following_url": "https://api.github.com/users/SparkJiao/following{/other_user}", "gists_url": "https://api.github.com/users/SparkJiao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SparkJiao", "id": 16469472, "login": "SparkJiao", "node_id": "MDQ6VXNlcjE2NDY5NDcy", "organizations_url": "https://api.github.com/users/SparkJiao/orgs", "received_events_url": "https://api.github.com/users/SparkJiao/received_events", "repos_url": "https://api.github.com/users/SparkJiao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SparkJiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SparkJiao/subscriptions", "type": "User", "url": "https://api.github.com/users/SparkJiao" }
https://api.github.com/repos/huggingface/datasets/issues/726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/726/timeline
closed
false
726
null
2022-02-15T10:38:57Z
null
false
718,985,641
https://api.github.com/repos/huggingface/datasets/issues/725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/725/events
[]
null
2020-10-23T16:24:35Z
[]
https://github.com/huggingface/datasets/pull/725
CONTRIBUTOR
null
false
null
[]
pretty print dataset objects
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/725/reactions" }
MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1
{ "diff_url": "https://github.com/huggingface/datasets/pull/725.diff", "html_url": "https://github.com/huggingface/datasets/pull/725", "merged_at": "2020-10-23T09:00:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/725.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/725" }
2020-10-12T02:03:46Z
https://api.github.com/repos/huggingface/datasets/issues/725/comments
Currently, if I do: ``` from datasets import load_dataset load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/") ``` I get: ``` DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test': Dataset(features: {'text': Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)}, num_rows: 5577)}) ``` This is not very readable. Can we either have a better `__repr__` or a custom method to nicely pprint the dataset object? Here is my very simple attempt. With this PR, it produces: ``` DatasetDict({ train: Dataset({ features: ['text', 'headline', 'title'], num_rows: 157252 }) validation: Dataset({ features: ['text', 'headline', 'title'], num_rows: 5599 }) test: Dataset({ features: ['text', 'headline', 'title'], num_rows: 5577 }) }) ``` I omitted the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too. Note that this PR also fixes an inconsistency in the output: on master, the enclosing `{}` is missing for Dataset but present for `DatasetDict` (or perhaps that was by design). I'm not attached to this format at all; I just want something more readable. One approach could be to serialize with `json.dumps` or something similar; it'd make the indentation simpler. Thank you.
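A rough standalone sketch of the indentation logic behind the output above, using plain data instead of real Dataset objects; this is illustrative only, not the merged implementation.

```python
def dataset_repr(features, num_rows, indent=0):
    # Render one Dataset block, indented so it nests inside a DatasetDict
    inner = " " * (indent + 4)
    closing = " " * indent
    return (
        "Dataset({\n"
        f"{inner}features: {features},\n"
        f"{inner}num_rows: {num_rows}\n"
        f"{closing}}})"
    )

def dataset_dict_repr(splits):
    entries = [
        f"    {name}: {dataset_repr(features, num_rows, indent=4)}"
        for name, (features, num_rows) in splits.items()
    ]
    return "DatasetDict({\n" + "\n".join(entries) + "\n})"

splits = {
    "train": (["text", "headline", "title"], 157252),
    "validation": (["text", "headline", "title"], 5599),
    "test": (["text", "headline", "title"], 5577),
}
print(dataset_dict_repr(splits))
```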
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://api.github.com/repos/huggingface/datasets/issues/725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/725/timeline
closed
false
725
null
2020-10-23T09:00:46Z
null
true
718,947,700
https://api.github.com/repos/huggingface/datasets/issues/724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/724/events
[]
null
2020-10-14T17:00:12Z
[]
https://github.com/huggingface/datasets/issues/724
CONTRIBUTOR
completed
null
null
[]
need to redirect /nlp to /datasets and remove outdated info
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions" }
MDU6SXNzdWU3MTg5NDc3MDA=
null
2020-10-11T23:12:12Z
https://api.github.com/repos/huggingface/datasets/issues/724/comments
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all should probably redirect to: https://huggingface.co/datasets/wikihow also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had the links marked up, the new one is just a jumble of text in one chunk and no markup for links (i.e. not clickable).
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/724/timeline
closed
false
724
null
2020-10-14T17:00:12Z
null
false
718,926,723
https://api.github.com/repos/huggingface/datasets/issues/723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/723/events
[]
null
2021-08-03T05:11:51Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" } ]
https://github.com/huggingface/datasets/issues/723
CONTRIBUTOR
completed
null
null
[]
Adding pseudo-labels to datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/723/reactions" }
MDU6SXNzdWU3MTg5MjY3MjM=
null
2020-10-11T21:05:45Z
https://api.github.com/repos/huggingface/datasets/issues/723/comments
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo. Since pseudo-labels are just a large model's generations on an existing dataset, what is the right way to structure this contribution? I read https://huggingface.co/docs/datasets/add_dataset.html, but it doesn't really cover this type of contribution. I could, for example, make a new directory, `xsum_bart_pseudolabels`, for each set of pseudo-labels, or add some sort of parametrization to `xsum.py`: https://github.com/huggingface/datasets/blob/5f4c6e830f603830117877b8990a0e65a2386aa6/datasets/xsum/xsum.py What do you think @lhoestq ?
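One way the parametrization option could look, sketched as a hypothetical `BuilderConfig`; the class name, config names, and label values here are illustrative assumptions, not an actual script in the repo.

```python
import datasets

class XSumConfig(datasets.BuilderConfig):
    """Hypothetical config that selects between original targets and pseudo-labels."""

    def __init__(self, labels="original", **kwargs):
        super().__init__(**kwargs)
        self.labels = labels  # e.g. "original" or "bart_pseudolabels"

# The builder would then expose one config per label source:
BUILDER_CONFIGS = [
    XSumConfig(name="default", labels="original"),
    XSumConfig(name="bart_pseudolabels", labels="bart_pseudolabels"),
]
```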
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/723/timeline
closed
false
723
null
2021-08-03T05:11:51Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "events_url": "https://api.github.com/users/sshleifer/events{/privacy}", "followers_url": "https://api.github.com/users/sshleifer/followers", "following_url": "https://api.github.com/users/sshleifer/following{/other_user}", "gists_url": "https://api.github.com/users/sshleifer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sshleifer", "id": 6045025, "login": "sshleifer", "node_id": "MDQ6VXNlcjYwNDUwMjU=", "organizations_url": "https://api.github.com/users/sshleifer/orgs", "received_events_url": "https://api.github.com/users/sshleifer/received_events", "repos_url": "https://api.github.com/users/sshleifer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sshleifer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sshleifer/subscriptions", "type": "User", "url": "https://api.github.com/users/sshleifer" }
false
718,689,117
https://api.github.com/repos/huggingface/datasets/issues/722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/722/events
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
null
2022-09-30T14:53:37Z
[]
https://github.com/huggingface/datasets/pull/722
CONTRIBUTOR
null
false
null
[]
datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/722/reactions" }
MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw
{ "diff_url": "https://github.com/huggingface/datasets/pull/722.diff", "html_url": "https://github.com/huggingface/datasets/pull/722", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/722.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/722" }
2020-10-10T19:44:08Z
https://api.github.com/repos/huggingface/datasets/issues/722/comments
This is the first sign language dataset in this repo as far as I know. It follows an old issue I opened: https://github.com/huggingface/datasets/issues/302. I added the dataset's official README file, but I see it's not very standard, so it can be removed.
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
https://api.github.com/repos/huggingface/datasets/issues/722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/722/timeline
closed
false
722
null
2022-09-30T14:53:37Z
null
true
718,647,147
https://api.github.com/repos/huggingface/datasets/issues/721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/721/events
[]
null
2022-02-15T10:44:44Z
[]
https://github.com/huggingface/datasets/issues/721
CONTRIBUTOR
completed
null
null
[]
feat(dl_manager): add support for ftp downloads
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/721/reactions" }
MDU6SXNzdWU3MTg2NDcxNDc=
null
2020-10-10T15:50:20Z
https://api.github.com/repos/huggingface/datasets/issues/721/comments
I am working on a new dataset (#302) and encountered a problem downloading it. ```python # This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/ _URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz" dl_manager.download_and_extract(_URL) ``` I get an error: > ValueError: unable to parse ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz as a URL or as a local path I checked, and indeed you don't consider `ftp` a remote file: https://github.com/huggingface/datasets/blob/4c2af707a6955cf4b45f83ac67990395327c5725/src/datasets/utils/file_utils.py#L188 Adding `ftp` to that list does not immediately solve the issue, so some extra work is probably needed.
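For reference, Python's standard library can already stream `ftp://` URLs, so one possible building block for that extra work might look like the sketch below. This is an assumption about a viable approach, not the eventual `dl_manager` fix.

```python
import urllib.request

URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"

def ftp_download(url: str, output_path: str, chunk_size: int = 1024 * 1024) -> None:
    # urllib handles ftp:// out of the box; stream the response to disk in chunks
    with urllib.request.urlopen(url) as response, open(output_path, "wb") as f:
        while True:
            chunk = response.read(chunk_size)
            if not chunk:
                break
            f.write(chunk)

# ftp_download(URL, "phoenix-2014-T.v3.tar.gz")  # the archive is huge, so not run here
```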
{ "avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4", "events_url": "https://api.github.com/users/AmitMY/events{/privacy}", "followers_url": "https://api.github.com/users/AmitMY/followers", "following_url": "https://api.github.com/users/AmitMY/following{/other_user}", "gists_url": "https://api.github.com/users/AmitMY/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AmitMY", "id": 5757359, "login": "AmitMY", "node_id": "MDQ6VXNlcjU3NTczNTk=", "organizations_url": "https://api.github.com/users/AmitMY/orgs", "received_events_url": "https://api.github.com/users/AmitMY/received_events", "repos_url": "https://api.github.com/users/AmitMY/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AmitMY/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AmitMY/subscriptions", "type": "User", "url": "https://api.github.com/users/AmitMY" }
https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/721/timeline
closed
false
721
null
2022-02-15T10:44:43Z
null
false
716,581,266
https://api.github.com/repos/huggingface/datasets/issues/720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/720/events
[]
null
2020-12-23T14:04:31Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/720
NONE
completed
null
null
[]
OSError: Cannot find data file when not using the dummy dataset in RAG
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/720/reactions" }
MDU6SXNzdWU3MTY1ODEyNjY=
null
2020-10-07T14:27:13Z
https://api.github.com/repos/huggingface/datasets/issues/720/comments
## Environment info transformers version: 3.3.1 Platform: Linux-4.19 Python version: 3.7.7 PyTorch version (GPU?): 1.6.0 Tensorflow version (GPU?): No Using GPU in script?: Yes Using distributed or parallel set-up in script?: No ## To reproduce Steps to reproduce the behaviour: ``` import os os.environ['HF_DATASETS_CACHE'] = '/workspace/notebooks/POCs/cache' from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq") retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) ``` Please note that I'm using the whole dataset: **use_dummy_dataset=False** After around 4 hours (downloading and some other things) this is returned: ``` Downloading and preparing dataset wiki_dpr/psgs_w100.nq.exact (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /workspace/notebooks/POCs/cache/wiki_dpr/psgs_w100.nq.exact/0.0.0/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2... --------------------------------------------------------------------------- UnpicklingError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 459 try: --> 460 return pickle.load(fid, **pickle_kwargs) 461 except Exception: UnpicklingError: pickle data was truncated During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 552 # Prepare split will record examples associated to the split --> 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 840 for key, record in utils.tqdm( --> 841 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 842 ): /opt/conda/lib/python3.7/site-packages/tqdm/notebook.py in __iter__(self, *args, **kwargs) 217 try: --> 218 for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): 219 # return super(tqdm...) will not catch exception /opt/conda/lib/python3.7/site-packages/tqdm/std.py in __iter__(self) 1128 try: -> 1129 for obj in iterable: 1130 yield obj ~/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/14b973bf2a456087ff69c0fd34526684eed22e48e0dfce4338f9a22b965ce7c2/wiki_dpr.py in _generate_examples(self, data_file, vectors_files) 131 break --> 132 vecs = np.load(open(vectors_files.pop(0), "rb"), allow_pickle=True) 133 vec_idx = 0 /opt/conda/lib/python3.7/site-packages/numpy/lib/npyio.py in load(file, mmap_mode, allow_pickle, fix_imports, encoding) 462 raise IOError( --> 463 "Failed to interpret file %s as a pickle" % repr(file)) 464 finally: OSError: Failed to interpret file <_io.BufferedReader name='/workspace/notebooks/POCs/cache/downloads/f34d5f091294259b4ca90e813631e69a6ded660d71b6cbedf89ddba50df94448'> as a pickle During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-10-f28df370ac47> in <module> 1 # ln -s /workspace/notebooks/POCs/cache /root/.cache/huggingface/datasets ----> 2 retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=False) /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 307 generator_tokenizer = rag_tokenizer.generator 308 return cls( --> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) 311 /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 298 self.config = config 299 if self._init_retrieval: --> 300 self.init_retrieval() 301 302 @classmethod /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_retrieval(self) 324 325 logger.info("initializing retrieval") --> 326 self.index.init_index() 327 328 def postprocess_docs(self, docs, input_strings, prefix, n_docs, return_tensors=None): /opt/conda/lib/python3.7/site-packages/transformers/retrieval_rag.py in init_index(self) 238 split=self.dataset_split, 239 index_name=self.index_name, --> 240 dummy=self.use_dummy_dataset, 241 ) 242 self.dataset.set_format("numpy", columns=["embeddings"], output_all_columns=True) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 474 if not downloaded_from_gcs: 475 self._download_and_prepare( --> 476 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 477 ) 478 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 553 self._prepare_split(split_generator, **prepare_split_kwargs) 554 except OSError: --> 555 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) 556 557 if verify_infos: OSError: Cannot find data file. ``` Thanks
{ "avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4", "events_url": "https://api.github.com/users/josemlopez/events{/privacy}", "followers_url": "https://api.github.com/users/josemlopez/followers", "following_url": "https://api.github.com/users/josemlopez/following{/other_user}", "gists_url": "https://api.github.com/users/josemlopez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/josemlopez", "id": 4112135, "login": "josemlopez", "node_id": "MDQ6VXNlcjQxMTIxMzU=", "organizations_url": "https://api.github.com/users/josemlopez/orgs", "received_events_url": "https://api.github.com/users/josemlopez/received_events", "repos_url": "https://api.github.com/users/josemlopez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/josemlopez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/josemlopez/subscriptions", "type": "User", "url": "https://api.github.com/users/josemlopez" }
https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/720/timeline
closed
false
720
null
2020-12-23T14:04:31Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
716,492,263
https://api.github.com/repos/huggingface/datasets/issues/719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/719/events
[]
null
2020-10-07T13:38:08Z
[]
https://github.com/huggingface/datasets/pull/719
MEMBER
null
false
null
[]
Fix train_test_split output format
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/719/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk5MjE5Mjg2
{ "diff_url": "https://github.com/huggingface/datasets/pull/719.diff", "html_url": "https://github.com/huggingface/datasets/pull/719", "merged_at": "2020-10-07T13:38:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/719.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/719" }
2020-10-07T12:39:01Z
https://api.github.com/repos/huggingface/datasets/issues/719/comments
There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split. This was due to `column_names` being handled as a List[str] instead of a Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split). This should fix @timothyjlaurent's issue in #620 and fix #676. I added tests for `transmit_format` so that it doesn't happen again.
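The shape mismatch described above comes down to normalizing two possible types. Here is a small illustrative helper (hypothetical, not the code in the PR) showing the distinction:

```python
from typing import Dict, List, Union

def columns_per_split(
    column_names: Union[List[str], Dict[str, List[str]]]
) -> Dict[str, List[str]]:
    # A DatasetDict transform reports one list of column names per split
    if isinstance(column_names, dict):
        return column_names
    # A single Dataset reports a flat list; wrap it into the same shape
    return {"all": column_names}

print(columns_per_split(["text", "label"]))
print(columns_per_split({"train": ["text", "label"], "test": ["text", "label"]}))
```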
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/719/timeline
closed
false
719
null
2020-10-07T13:38:06Z
null
true
715,694,709
https://api.github.com/repos/huggingface/datasets/issues/718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/718/events
[]
null
2020-10-06T13:49:24Z
[]
https://github.com/huggingface/datasets/pull/718
MEMBER
null
false
null
[]
Don't use tqdm 4.50.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/718/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw
{ "diff_url": "https://github.com/huggingface/datasets/pull/718.diff", "html_url": "https://github.com/huggingface/datasets/pull/718", "merged_at": "2020-10-06T13:49:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/718.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/718" }
2020-10-06T13:45:53Z
https://api.github.com/repos/huggingface/datasets/issues/718/comments
tqdm 4.50.0 introduced permission errors on windows; see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details. For now I just added `<4.50.0` in the setup.py. Hopefully we can find out what's wrong with this version soon.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/718/timeline
closed
false
718
null
2020-10-06T13:49:22Z
null
true
714,959,268
https://api.github.com/repos/huggingface/datasets/issues/717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/717/events
[]
null
2020-10-06T06:31:43Z
[]
https://github.com/huggingface/datasets/pull/717
CONTRIBUTOR
null
false
null
[]
Fixes #712 Error in the Overview.ipynb notebook
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/717/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2
{ "diff_url": "https://github.com/huggingface/datasets/pull/717.diff", "html_url": "https://github.com/huggingface/datasets/pull/717", "merged_at": "2020-10-05T16:25:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/717.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/717" }
2020-10-05T15:50:41Z
https://api.github.com/repos/huggingface/datasets/issues/717/comments
Fixes #712 (error in the Overview.ipynb notebook) by adding the `with_details=True` parameter to the `list_datasets` call in cell 3 of the **overview** notebook.
{ "avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4", "events_url": "https://api.github.com/users/subhrm/events{/privacy}", "followers_url": "https://api.github.com/users/subhrm/followers", "following_url": "https://api.github.com/users/subhrm/following{/other_user}", "gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/subhrm", "id": 850012, "login": "subhrm", "node_id": "MDQ6VXNlcjg1MDAxMg==", "organizations_url": "https://api.github.com/users/subhrm/orgs", "received_events_url": "https://api.github.com/users/subhrm/received_events", "repos_url": "https://api.github.com/users/subhrm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subhrm/subscriptions", "type": "User", "url": "https://api.github.com/users/subhrm" }
https://api.github.com/repos/huggingface/datasets/issues/717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/717/timeline
closed
false
717
null
2020-10-05T16:25:41Z
null
true
714,952,888
https://api.github.com/repos/huggingface/datasets/issues/716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/716/events
[]
null
2020-10-05T15:46:38Z
[]
https://github.com/huggingface/datasets/pull/716
CONTRIBUTOR
null
false
null
[]
Fixes #712 Attribute error in cell 3 of the overview notebook
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/716/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw
{ "diff_url": "https://github.com/huggingface/datasets/pull/716.diff", "html_url": "https://github.com/huggingface/datasets/pull/716", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/716.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/716" }
2020-10-05T15:42:09Z
https://api.github.com/repos/huggingface/datasets/issues/716/comments
Fixes the AttributeError in cell 3 of the overview notebook.
{ "avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4", "events_url": "https://api.github.com/users/subhrm/events{/privacy}", "followers_url": "https://api.github.com/users/subhrm/followers", "following_url": "https://api.github.com/users/subhrm/following{/other_user}", "gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/subhrm", "id": 850012, "login": "subhrm", "node_id": "MDQ6VXNlcjg1MDAxMg==", "organizations_url": "https://api.github.com/users/subhrm/orgs", "received_events_url": "https://api.github.com/users/subhrm/received_events", "repos_url": "https://api.github.com/users/subhrm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subhrm/subscriptions", "type": "User", "url": "https://api.github.com/users/subhrm" }
https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/716/timeline
closed
false
716
null
2020-10-05T15:46:32Z
null
true
714,690,192
https://api.github.com/repos/huggingface/datasets/issues/715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/715/events
[]
null
2020-10-05T13:13:18Z
[]
https://github.com/huggingface/datasets/pull/715
MEMBER
null
false
null
[]
Use python read for text dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 3, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/715/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2
{ "diff_url": "https://github.com/huggingface/datasets/pull/715.diff", "html_url": "https://github.com/huggingface/datasets/pull/715", "merged_at": "2020-10-05T13:13:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/715.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/715" }
2020-10-05T09:47:55Z
https://api.github.com/repos/huggingface/datasets/issues/715/comments
As mentioned in #622, the pandas reader used for the text dataset doesn't work properly when there are \r characters in the text file. Instead, I switched to pure python using `open` and `read`. From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader.
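A rough sketch of the pure-python approach (not necessarily identical to the merged reader): text-mode `open` applies universal-newline translation, so `\r` and `\r\n` are normalized to `\n` before the split.

```python
def read_text_lines(path: str, encoding: str = "utf-8") -> list:
    # Universal newlines: \r and \r\n become \n, so a single split covers all cases
    with open(path, encoding=encoding) as f:
        return f.read().split("\n")

# Example usage:
# for line in read_text_lines("my_corpus.txt"):
#     process(line)
```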
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/715/timeline
closed
false
715
null
2020-10-05T13:13:17Z
null
true
714,487,881
https://api.github.com/repos/huggingface/datasets/issues/714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/714/events
[]
null
2020-10-12T11:49:21Z
[]
https://github.com/huggingface/datasets/pull/714
NONE
null
false
null
[]
Add the official dependabot implementation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/714/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx
{ "diff_url": "https://github.com/huggingface/datasets/pull/714.diff", "html_url": "https://github.com/huggingface/datasets/pull/714", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/714.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/714" }
2020-10-05T03:49:45Z
https://api.github.com/repos/huggingface/datasets/issues/714/comments
This will keep dependencies up to date. It requires a PR label `dependencies` to be created in order to function correctly.
{ "avatar_url": "https://avatars.githubusercontent.com/u/12804673?v=4", "events_url": "https://api.github.com/users/ALazyMeme/events{/privacy}", "followers_url": "https://api.github.com/users/ALazyMeme/followers", "following_url": "https://api.github.com/users/ALazyMeme/following{/other_user}", "gists_url": "https://api.github.com/users/ALazyMeme/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ALazyMeme", "id": 12804673, "login": "ALazyMeme", "node_id": "MDQ6VXNlcjEyODA0Njcz", "organizations_url": "https://api.github.com/users/ALazyMeme/orgs", "received_events_url": "https://api.github.com/users/ALazyMeme/received_events", "repos_url": "https://api.github.com/users/ALazyMeme/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ALazyMeme/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ALazyMeme/subscriptions", "type": "User", "url": "https://api.github.com/users/ALazyMeme" }
https://api.github.com/repos/huggingface/datasets/issues/714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/714/timeline
closed
false
714
null
2020-10-12T11:49:21Z
null
true
714,475,732
https://api.github.com/repos/huggingface/datasets/issues/713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/713/events
[]
null
2020-10-09T05:58:25Z
[]
https://github.com/huggingface/datasets/pull/713
NONE
null
false
null
[]
Fix reading text files with carriage return symbols
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/713/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy
{ "diff_url": "https://github.com/huggingface/datasets/pull/713.diff", "html_url": "https://github.com/huggingface/datasets/pull/713", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/713.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/713" }
2020-10-05T03:07:03Z
https://api.github.com/repos/huggingface/datasets/issues/713/comments
The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`). It fails with the following error message: ``` ... File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read File "pandas/_libs/parsers.pyx", line 874, in pandas._libs.parsers.TextReader._read_low_memory File "pandas/_libs/parsers.pyx", line 918, in pandas._libs.parsers.TextReader._read_rows File "pandas/_libs/parsers.pyx", line 905, in pandas._libs.parsers.TextReader._tokenize_rows File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error pandas.errors.ParserError: Error tokenizing data. C error: Buffer overflow caught - possible malformed input file. ``` ___ I figured out that pandas uses those symbols as line terminators, which eventually causes the error. Explicitly specifying the `lineterminator` fixes the issue and everything works fine. Please consider this PR, as this seems to be a common issue.
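The effect of the proposed fix can be shown on a small in-memory buffer. This is an illustrative call, not the loader's exact invocation; the separator below is just a character assumed never to occur in the sample.

```python
import io

import pandas as pd

raw = "a line with a stray \r carriage return\nsecond line\n"

df = pd.read_csv(
    io.StringIO(raw),
    names=["text"],
    header=None,
    sep="\t",             # never occurs in the data, so each line is one field
    lineterminator="\n",  # the fix: only \n ends a row, \r stays inside the text
)
print(df)  # two rows, with the \r preserved inside the first one
```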
{ "avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4", "events_url": "https://api.github.com/users/mozharovsky/events{/privacy}", "followers_url": "https://api.github.com/users/mozharovsky/followers", "following_url": "https://api.github.com/users/mozharovsky/following{/other_user}", "gists_url": "https://api.github.com/users/mozharovsky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mozharovsky", "id": 6762769, "login": "mozharovsky", "node_id": "MDQ6VXNlcjY3NjI3Njk=", "organizations_url": "https://api.github.com/users/mozharovsky/orgs", "received_events_url": "https://api.github.com/users/mozharovsky/received_events", "repos_url": "https://api.github.com/users/mozharovsky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mozharovsky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mozharovsky/subscriptions", "type": "User", "url": "https://api.github.com/users/mozharovsky" }
https://api.github.com/repos/huggingface/datasets/issues/713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/713/timeline
closed
false
713
null
2020-10-05T13:49:29Z
null
true
714,242,316
https://api.github.com/repos/huggingface/datasets/issues/712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/712/events
[]
null
2020-10-05T16:25:40Z
[]
https://github.com/huggingface/datasets/issues/712
CONTRIBUTOR
completed
null
null
[]
Error in the notebooks/Overview.ipynb notebook
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/712/reactions" }
MDU6SXNzdWU3MTQyNDIzMTY=
null
2020-10-04T05:58:31Z
https://api.github.com/repos/huggingface/datasets/issues/712/comments
Hi, I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in Google Colab. I used the [link](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in Colab. ```python # You can access various attributes of the datasets before downloading them squad_dataset = list_datasets()[datasets.index('squad')] pprint(squad_dataset.__dict__) # It's a simple python dataclass ``` Error message ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-8dc805c4949c> in <module>() 2 squad_dataset = list_datasets()[datasets.index('squad')] 3 ----> 4 pprint(squad_dataset.__dict__) # It's a simple python dataclass AttributeError: 'str' object has no attribute '__dict__' ``` The object `squad_dataset` is a `str`, not a `dataclass`.
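A hedged workaround sketch: since `list_datasets()` now returns plain strings, dataset metadata can be inspected through the dataset builder instead. Availability of `load_dataset_builder` depends on the installed `datasets` version:

```python
from datasets import list_datasets, load_dataset_builder

datasets_list = list_datasets()
print("squad" in datasets_list)  # entries are plain string names now

# Inspect metadata without downloading the data itself.
builder = load_dataset_builder("squad")
print(builder.info.description)
print(builder.info.features)
```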
{ "avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4", "events_url": "https://api.github.com/users/subhrm/events{/privacy}", "followers_url": "https://api.github.com/users/subhrm/followers", "following_url": "https://api.github.com/users/subhrm/following{/other_user}", "gists_url": "https://api.github.com/users/subhrm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/subhrm", "id": 850012, "login": "subhrm", "node_id": "MDQ6VXNlcjg1MDAxMg==", "organizations_url": "https://api.github.com/users/subhrm/orgs", "received_events_url": "https://api.github.com/users/subhrm/received_events", "repos_url": "https://api.github.com/users/subhrm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/subhrm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subhrm/subscriptions", "type": "User", "url": "https://api.github.com/users/subhrm" }
https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/712/timeline
closed
false
712
null
2020-10-05T16:25:40Z
null
false
714,236,408
https://api.github.com/repos/huggingface/datasets/issues/711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/711/events
[]
null
2020-10-05T16:26:51Z
[]
https://github.com/huggingface/datasets/pull/711
CONTRIBUTOR
null
false
null
[]
New Update bertscore.py
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/711/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk3Mzc3NzU3
{ "diff_url": "https://github.com/huggingface/datasets/pull/711.diff", "html_url": "https://github.com/huggingface/datasets/pull/711", "merged_at": "2020-10-05T16:26:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/711.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/711" }
2020-10-04T05:13:09Z
https://api.github.com/repos/huggingface/datasets/issues/711/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/51692618?v=4", "events_url": "https://api.github.com/users/PassionateLooker/events{/privacy}", "followers_url": "https://api.github.com/users/PassionateLooker/followers", "following_url": "https://api.github.com/users/PassionateLooker/following{/other_user}", "gists_url": "https://api.github.com/users/PassionateLooker/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PassionateLooker", "id": 51692618, "login": "PassionateLooker", "node_id": "MDQ6VXNlcjUxNjkyNjE4", "organizations_url": "https://api.github.com/users/PassionateLooker/orgs", "received_events_url": "https://api.github.com/users/PassionateLooker/received_events", "repos_url": "https://api.github.com/users/PassionateLooker/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PassionateLooker/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PassionateLooker/subscriptions", "type": "User", "url": "https://api.github.com/users/PassionateLooker" }
https://api.github.com/repos/huggingface/datasets/issues/711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/711/timeline
closed
false
711
null
2020-10-05T16:26:51Z
null
true
714,186,999
https://api.github.com/repos/huggingface/datasets/issues/710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/710/events
[]
null
2020-10-17T09:52:45Z
[]
https://github.com/huggingface/datasets/pull/710
CONTRIBUTOR
null
false
null
[]
fix README typos/ consistency
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/710/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0
{ "diff_url": "https://github.com/huggingface/datasets/pull/710.diff", "html_url": "https://github.com/huggingface/datasets/pull/710", "merged_at": "2020-10-17T09:52:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/710.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/710" }
2020-10-03T22:20:56Z
https://api.github.com/repos/huggingface/datasets/issues/710/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/7703961?v=4", "events_url": "https://api.github.com/users/discdiver/events{/privacy}", "followers_url": "https://api.github.com/users/discdiver/followers", "following_url": "https://api.github.com/users/discdiver/following{/other_user}", "gists_url": "https://api.github.com/users/discdiver/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/discdiver", "id": 7703961, "login": "discdiver", "node_id": "MDQ6VXNlcjc3MDM5NjE=", "organizations_url": "https://api.github.com/users/discdiver/orgs", "received_events_url": "https://api.github.com/users/discdiver/received_events", "repos_url": "https://api.github.com/users/discdiver/repos", "site_admin": false, "starred_url": "https://api.github.com/users/discdiver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/discdiver/subscriptions", "type": "User", "url": "https://api.github.com/users/discdiver" }
https://api.github.com/repos/huggingface/datasets/issues/710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/710/timeline
closed
false
710
null
2020-10-17T09:52:45Z
null
true
714,067,902
https://api.github.com/repos/huggingface/datasets/issues/709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/709/events
[]
null
2022-10-04T17:19:37Z
[]
https://github.com/huggingface/datasets/issues/709
NONE
completed
null
null
[]
How to use similarity settings other than "BM25" in an Elasticsearch index?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/709/reactions" }
MDU6SXNzdWU3MTQwNjc5MDI=
null
2020-10-03T11:18:49Z
https://api.github.com/repos/huggingface/datasets/issues/709/comments
**QUESTION: How should we use similarity algorithms supported by Elasticsearch other than "BM25"?** **ES reference:** https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html **HF doc reference:** https://huggingface.co/docs/datasets/faiss_and_ea.html **Context:** ======== I used the latest Elasticsearch server, version 7.9.2. When I set DFR, one of the other similarity algorithms supported by Elasticsearch, in the mapping, I get an error. For example, I first tried DFR directly in the mappings as below: `"mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "DFR"}}},` and I get the following error: RequestError: RequestError(400, 'mapper_parsing_exception', 'Unknown Similarity type [DFR] for field [text]') As another option, I tried to declare a "my_similarity" entry within the settings and then assign "my_similarity" inside the mappings, as below: `es_config = { "settings": { "number_of_shards": 1, "similarity": { "my_similarity": { "type": "DFR", "basic_model": "g", "after_effect": "l", "normalization": "h2", "normalization.h2.c": "3.0" } }, "analysis": {"analyzer": {"stop_standard": {"type": "standard", "stopwords": "_english_"}}}, }, "mappings": {"properties": {"text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}}}, }` For this, I got the following error: RequestError: RequestError(400, 'illegal_argument_exception', 'unknown setting [index.similarity] please check that any required plugins are installed, or check the breaking changes documentation for removed settings')
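For reference, a sketch of how a custom DFR similarity is usually declared in Elasticsearch 7.x: the similarity is *defined* under the index settings (note the `"index"` wrapper) and only *referenced* by name in the mapping. Index and field names are hypothetical, and passing the config through `es_index_config` follows the `datasets` Elasticsearch integration:

```python
es_index_config = {
    "settings": {
        "index": {
            # Define the custom similarity here, under settings.index.similarity...
            "similarity": {
                "my_similarity": {
                    "type": "DFR",
                    "basic_model": "g",
                    "after_effect": "l",
                    "normalization": "h2",
                    "normalization.h2.c": "3.0",
                }
            }
        },
        "analysis": {
            "analyzer": {
                "stop_standard": {"type": "standard", "stopwords": "_english_"}
            }
        },
    },
    # ...and only reference it by name in the field mapping.
    "mappings": {
        "properties": {
            "text": {"type": "text", "analyzer": "standard", "similarity": "my_similarity"}
        }
    },
}

# Hypothetical usage with a loaded dataset and a running ES instance:
# dataset.add_elasticsearch_index("text", host="localhost", port="9200",
#                                 es_index_name="my_index",
#                                 es_index_config=es_index_config)
```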
{ "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "events_url": "https://api.github.com/users/nsankar/events{/privacy}", "followers_url": "https://api.github.com/users/nsankar/followers", "following_url": "https://api.github.com/users/nsankar/following{/other_user}", "gists_url": "https://api.github.com/users/nsankar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsankar", "id": 431890, "login": "nsankar", "node_id": "MDQ6VXNlcjQzMTg5MA==", "organizations_url": "https://api.github.com/users/nsankar/orgs", "received_events_url": "https://api.github.com/users/nsankar/received_events", "repos_url": "https://api.github.com/users/nsankar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsankar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsankar/subscriptions", "type": "User", "url": "https://api.github.com/users/nsankar" }
https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/709/timeline
closed
false
709
null
2022-10-04T17:19:37Z
null
false
714,020,953
https://api.github.com/repos/huggingface/datasets/issues/708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/708/events
[]
null
2021-02-12T14:13:28Z
[]
https://github.com/huggingface/datasets/issues/708
NONE
completed
null
null
[]
Datasets performance slow? - 6.4x slower than an in-memory dataset
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/708/reactions" }
MDU6SXNzdWU3MTQwMjA5NTM=
null
2020-10-03T06:44:07Z
https://api.github.com/repos/huggingface/datasets/issues/708/comments
I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset. Now, this is expected, I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower it was. For example, in the `yelp_polarity` dataset (560000 datapoints, or 17500 batches of 32), it was taking me 3:31 just to process the data and get it on the GPU (no model involved). Whereas the equivalent in-memory dataset would finish in just 0:33. Is this expected? Given that one of the goals of this project is also to accelerate dataset processing, this seems a bit slower than I would expect. I understand the advantages of being able to work on datasets that exceed memory, and that's very exciting to me, but thought I'd open this issue to discuss. For reference, I'm running an AMD Ryzen Threadripper 1900X 8-Core Processor CPU, with 128 GB of RAM and an NVMe SSD (Samsung 960 EVO). I'm running with an RTX Titan 24GB GPU. I can see with `iotop` that the dataset gets quickly loaded into the system read buffers, and thus doesn't incur any additional IO reads. Thus in theory, all the data *should* be in RAM, but in my benchmark code below it's still 6.4 times slower. What am I doing wrong? And is there a way to force the datasets to completely load into memory instead of being memory mapped in cases where you want maximum performance? At 3:31 for 17500 batches, that's 12ms per batch. Does this 12ms just become insignificant as a proportion of forward and backward passes in practice, and thus not worth worrying about? In any case, here's my code `benchmark.py`. If you run it with an argument of `memory` it will copy the data into memory before executing the same test. ``` py import sys from datasets import load_dataset from transformers import DataCollatorWithPadding, BertTokenizerFast from torch.utils.data import DataLoader from tqdm import tqdm if __name__ == '__main__': tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased') collate_fn = DataCollatorWithPadding(tokenizer, padding=True) ds = load_dataset('yelp_polarity') def do_tokenize(x): return tokenizer(x['text'], truncation=True) ds = ds.map(do_tokenize, batched=True) ds.set_format('torch', ['input_ids', 'token_type_ids', 'attention_mask']) if len(sys.argv) == 2 and sys.argv[1] == 'memory': # copy to memory - probably a faster way to do this - but demonstrates the point # approximately 530 batches per second - 17500 batches in 0:33 print('using memory') _ds = [data for data in tqdm(ds['train'])] else: # approximately 83 batches per second - 17500 batches in 3:31 print('using datasets') _ds = ds['train'] dl = DataLoader(_ds, shuffle=True, collate_fn=collate_fn, batch_size=32, num_workers=4) for data in tqdm(dl): for k, v in data.items(): data[k] = v.to('cuda') ``` For reference, my conda environment is [here](https://gist.github.com/05b6101518ff70ed42a858b302a0405d) Once again, I'm very excited about this library, and how easy it is to load datasets, and to do so without worrying about system memory constraints. Thanks for all your great work.
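A hedged sketch of the in-memory workaround: newer `datasets` releases expose a `keep_in_memory` flag that copies the Arrow table into RAM instead of memory-mapping it (flag availability depends on the installed version):

```python
from datasets import load_dataset

# Load the full dataset into RAM rather than memory-mapping the Arrow file.
ds = load_dataset("yelp_polarity", keep_in_memory=True)

# Per-transform variants also accept the flag in recent versions, e.g.:
# ds = ds.map(do_tokenize, batched=True, keep_in_memory=True)
```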
{ "avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4", "events_url": "https://api.github.com/users/eugeneware/events{/privacy}", "followers_url": "https://api.github.com/users/eugeneware/followers", "following_url": "https://api.github.com/users/eugeneware/following{/other_user}", "gists_url": "https://api.github.com/users/eugeneware/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eugeneware", "id": 38154, "login": "eugeneware", "node_id": "MDQ6VXNlcjM4MTU0", "organizations_url": "https://api.github.com/users/eugeneware/orgs", "received_events_url": "https://api.github.com/users/eugeneware/received_events", "repos_url": "https://api.github.com/users/eugeneware/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eugeneware/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eugeneware/subscriptions", "type": "User", "url": "https://api.github.com/users/eugeneware" }
https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/708/timeline
closed
false
708
null
2021-02-12T14:13:28Z
null
false
713,954,666
https://api.github.com/repos/huggingface/datasets/issues/707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/707/events
[]
null
2020-12-04T08:22:39Z
[]
https://github.com/huggingface/datasets/issues/707
NONE
completed
null
null
[]
Requirements should specify pyarrow<1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/707/reactions" }
MDU6SXNzdWU3MTM5NTQ2NjY=
null
2020-10-02T23:39:39Z
https://api.github.com/repos/huggingface/datasets/issues/707/comments
I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error, ``` module 'pyarrow' has no attribute 'PyExtensionType' ``` I traced it back to datasets having installed PyArrow 1.0.1, but there's no version pin in the setup file. https://github.com/huggingface/datasets/blob/e86a2a8f869b91654e782c9133d810bb82783200/setup.py#L68 Downgrading by installing `pip install "pyarrow<1"` resolved the issue.
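A sketch of the kind of pin being requested in `setup.py`; the exact bounds and neighboring requirements here are illustrative, not the library's actual list:

```python
# setup.py (excerpt): an upper bound keeps the incompatible pyarrow 1.x out
# until the library explicitly supports it.
REQUIRED_PKGS = [
    "numpy",
    "pyarrow>=0.17.1,<1.0.0",  # illustrative pin; pick bounds matching tested versions
    "dill",
    "requests>=2.19.0",
]
```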
{ "avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4", "events_url": "https://api.github.com/users/mathcass/events{/privacy}", "followers_url": "https://api.github.com/users/mathcass/followers", "following_url": "https://api.github.com/users/mathcass/following{/other_user}", "gists_url": "https://api.github.com/users/mathcass/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mathcass", "id": 918541, "login": "mathcass", "node_id": "MDQ6VXNlcjkxODU0MQ==", "organizations_url": "https://api.github.com/users/mathcass/orgs", "received_events_url": "https://api.github.com/users/mathcass/received_events", "repos_url": "https://api.github.com/users/mathcass/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mathcass/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mathcass/subscriptions", "type": "User", "url": "https://api.github.com/users/mathcass" }
https://api.github.com/repos/huggingface/datasets/issues/707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/707/timeline
closed
false
707
null
2020-10-04T20:50:28Z
null
false
713,721,959
https://api.github.com/repos/huggingface/datasets/issues/706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/706/events
[]
null
2020-10-05T08:15:00Z
[]
https://github.com/huggingface/datasets/pull/706
MEMBER
null
false
null
[]
Fix config creation for data files with NamedSplit
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/706/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0
{ "diff_url": "https://github.com/huggingface/datasets/pull/706.diff", "html_url": "https://github.com/huggingface/datasets/pull/706", "merged_at": "2020-10-05T08:14:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/706.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/706" }
2020-10-02T15:46:49Z
https://api.github.com/repos/huggingface/datasets/issues/706/comments
During config creation, we need to iterate through the data files of all the splits to compute a hash. To make sure the hash is unique given a certain combination of files/splits, we sort the split names. However, `NamedSplit` objects can't be passed to `sorted` and currently raise an error; we need to sort their string names instead. Fix #705
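A minimal sketch of the sorting fix described above; `NamedSplit` doesn't define `<`, so sorting by the splits' string representation keeps the hash input stable (file paths are hypothetical):

```python
from datasets import Split

data_files = {Split.TRAIN: "train.csv", Split.VALIDATION: "dev.csv"}

# sorted(data_files.keys()) raises:
#   TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
# Sorting by str(...) works because str(Split.TRAIN) == "train":
for key in sorted(data_files.keys(), key=str):
    print(str(key), data_files[key])
```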
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/706/timeline
closed
false
706
null
2020-10-05T08:14:59Z
null
true
713,709,100
https://api.github.com/repos/huggingface/datasets/issues/705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/705/events
[]
null
2020-10-05T08:14:59Z
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
https://github.com/huggingface/datasets/issues/705
NONE
completed
null
null
[]
TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit'
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/705/reactions" }
MDU6SXNzdWU3MTM3MDkxMDA=
null
2020-10-02T15:27:55Z
https://api.github.com/repos/huggingface/datasets/issues/705/comments
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 (installed from master) - `datasets` version: 1.0.2 (installed as a dependency from transformers) - Platform: Linux-4.15.0-118-generic-x86_64-with-debian-stretch-sid - Python version: 3.7.9 I'm testing my own text classification dataset using [this example](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) from transformers. The dataset is split into train / dev / test, and in csv format, containing just a text and a label columns, using comma as sep. Here's a sample: ``` text,label "Registra-se a presença do acadêmico <name> . <REL_SEP> Ao me deparar com a descrição de dois autores no polo ativo da ação junto ao PJe , margem esquerda foi informado pela procuradora do reclamante que se trata de uma reclamação trabalhista individual . <REL_SEP> Diante disso , face a ausência injustificada do autor <name> , determina-se o ARQUIVAMENTO do presente processo , com relação a este , nos termos do [[ art . 844 da CLT ]] . <REL_SEP> CUSTAS AUTOR - DISPENSADO <REL_SEP> Custas pelo autor no importe de R $326,82 , calculadas sobre R $16.341,03 , dispensadas na forma da lei , em virtude da concessão dos benefícios da Justiça Gratuita , ora deferida . <REL_SEP> Cientes os presentes . <REL_SEP> Audiência encerrada às 8h42min . <REL_SEP> <name> <REL_SEP> Juíza do Trabalho <REL_SEP> Ata redigida por << <name> >> , Secretário de Audiência .",NO_RELATION ``` However, @Santosh-Gupta reported in #7351 that he had the exact same problem using the ChemProt dataset. His colab notebook is referenced in the following section. ## To reproduce Steps to reproduce the behavior: 1. Created a new conda environment using conda env -n transformers python=3.7 2. Cloned transformers master, `cd` into it and installed using pip install --editable . -r examples/requirements.txt 3. Installed tensorflow with `pip install tensorflow` 3. Ran `run_tf_text_classification.py` with the following parameters: ``` --train_file <DATASET_PATH>/train.csv \ --dev_file <DATASET_PATH>/dev.csv \ --test_file <DATASET_PATH>/test.csv \ --label_column_id 1 \ --model_name_or_path neuralmind/bert-base-portuguese-cased \ --output_dir <OUTPUT_PATH> \ --num_train_epochs 4 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --do_train \ --do_eval \ --do_predict \ --logging_steps 1000 \ --evaluate_during_training \ --save_steps 1000 \ --overwrite_output_dir \ --overwrite_cache ``` I have also copied [@Santosh-Gupta 's colab notebook](https://colab.research.google.com/drive/11APei6GjphCZbH5wD9yVlfGvpIkh8pwr?usp=sharing) as a reference. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Here is the stack trace: ``` 2020-10-02 07:33:41.622011: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 /media/discoD/repositorios/transformers_pedro/src/transformers/training_args.py:333: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, 2020-10-02 07:33:43.471648: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1 2020-10-02 07:33:43.471791: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.472664: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.472684: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.472765: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.472809: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.472848: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.474209: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.474276: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.561219: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.561397: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.562345: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.563219: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:43.563595: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2020-10-02 07:33:43.570091: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3591830000 Hz 2020-10-02 07:33:43.570494: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x560842432400 initialized for platform Host (this does not guarantee that XLA will be used). 
Devices: 2020-10-02 07:33:43.570511: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version 2020-10-02 07:33:43.570702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.571599: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: pciBusID: 0000:01:00.0 name: GeForce GTX 1070 computeCapability: 6.1 coreClock: 1.7085GHz coreCount: 15 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 238.66GiB/s 2020-10-02 07:33:43.571633: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 2020-10-02 07:33:43.571645: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10 2020-10-02 07:33:43.571654: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10 2020-10-02 07:33:43.571664: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10 2020-10-02 07:33:43.571691: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10 2020-10-02 07:33:43.571704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10 2020-10-02 07:33:43.571718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7 2020-10-02 07:33:43.571770: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.572641: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:43.573475: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0 2020-10-02 07:33:47.139227: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix: 2020-10-02 07:33:47.139265: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 2020-10-02 07:33:47.139272: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N 2020-10-02 07:33:47.140323: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.141248: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142085: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2020-10-02 07:33:47.142854: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 5371 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1) 2020-10-02 07:33:47.146317: I tensorflow/compiler/xla/service/service.cc:168] XLA 
service 0x5608b95dc5c0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices: 2020-10-02 07:33:47.146336: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1070, Compute Capability 6.1 10/02/2020 07:33:47 - INFO - __main__ - n_replicas: 1, distributed training: False, 16-bits training: False 10/02/2020 07:33:47 - INFO - __main__ - Training/evaluation parameters TFTrainingArguments(output_dir='/media/discoD/models/datalawyer/pedidos/transformers_tf', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=True, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=4.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Oct02_07-33-43_user-XPS-8700', logging_first_step=False, logging_steps=1000, save_steps=1000, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=1000, dataloader_num_workers=0, past_index=-1, run_name='/media/discoD/models/datalawyer/pedidos/transformers_tf', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=False, tpu_name=None, xla=False) 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 acquired on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock 10/02/2020 07:33:53 - INFO - filelock - Lock 140407857405776 released on /home/user/.cache/huggingface/datasets/e0f1e9ed46db1e2429189f06b479cbd4075c0976104c1aacf8f77d9a53d2ad87.03756fef6da334f50a7ff73608e21b5018229944ca250416ce7352e25d84a552.py.lock Using custom data configuration default Traceback (most recent call last): File "run_tf_text_classification.py", line 283, in <module> main() File "run_tf_text_classification.py", line 222, in main max_seq_length=data_args.max_seq_length, File "run_tf_text_classification.py", line 43, in get_tfds ds = datasets.load_dataset("csv", data_files=files) File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/load.py", line 604, in load_dataset **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 158, in __init__ **config_kwargs, File "/media/discoD/anaconda3/envs/transformers/lib/python3.7/site-packages/datasets/builder.py", line 269, in _create_builder_config for key in sorted(data_files.keys()): TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' ``` ## Expected behavior Should be able to run the text-classification example as described in [https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow](https://github.com/huggingface/transformers/tree/master/examples/text-classification#run-generic-text-classification-script-in-tensorflow) Originally opened this issue at transformers' repository: [https://github.com/huggingface/transformers/issues/7535](https://github.com/huggingface/transformers/issues/7535). 
@jplu instructed me to open here, since according to [this](https://github.com/huggingface/transformers/issues/7535#issuecomment-702778885) evidence, the problem is from datasets. Thanks!
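Until the fix landed, a common hedged workaround was to key `data_files` with plain strings rather than `NamedSplit` objects, since string keys sort fine during config creation; file paths below are hypothetical:

```python
import datasets

# String keys avoid the '<' comparison between NamedSplit instances.
ds = datasets.load_dataset(
    "csv",
    data_files={
        "train": "train.csv",
        "validation": "dev.csv",
        "test": "test.csv",
    },
)
```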
{ "avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4", "events_url": "https://api.github.com/users/pvcastro/events{/privacy}", "followers_url": "https://api.github.com/users/pvcastro/followers", "following_url": "https://api.github.com/users/pvcastro/following{/other_user}", "gists_url": "https://api.github.com/users/pvcastro/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pvcastro", "id": 12713359, "login": "pvcastro", "node_id": "MDQ6VXNlcjEyNzEzMzU5", "organizations_url": "https://api.github.com/users/pvcastro/orgs", "received_events_url": "https://api.github.com/users/pvcastro/received_events", "repos_url": "https://api.github.com/users/pvcastro/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pvcastro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pvcastro/subscriptions", "type": "User", "url": "https://api.github.com/users/pvcastro" }
https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/705/timeline
closed
false
705
null
2020-10-05T08:14:59Z
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
false
713,572,556
https://api.github.com/repos/huggingface/datasets/issues/704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/704/events
[]
null
2020-10-02T12:12:02Z
[]
https://github.com/huggingface/datasets/pull/704
MEMBER
null
false
null
[]
Fix remote tests for new datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/704/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2ODY2NTQ0
{ "diff_url": "https://github.com/huggingface/datasets/pull/704.diff", "html_url": "https://github.com/huggingface/datasets/pull/704", "merged_at": "2020-10-02T12:12:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/704.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/704" }
2020-10-02T12:08:04Z
https://api.github.com/repos/huggingface/datasets/issues/704/comments
When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet). To fix that, I reverted to using the HF API, which fetches the available datasets from S3, which is synced with the master branch.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/704/timeline
closed
false
704
null
2020-10-02T12:12:01Z
null
true
713,559,718
https://api.github.com/repos/huggingface/datasets/issues/703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/703/events
[]
null
2020-10-02T12:54:41Z
[]
https://github.com/huggingface/datasets/pull/703
CONTRIBUTOR
null
false
null
[]
Add hotpot QA
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/703/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2ODU1OTQ5
{ "diff_url": "https://github.com/huggingface/datasets/pull/703.diff", "html_url": "https://github.com/huggingface/datasets/pull/703", "merged_at": "2020-10-02T12:54:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/703.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/703" }
2020-10-02T11:44:28Z
https://api.github.com/repos/huggingface/datasets/issues/703/comments
Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
https://api.github.com/repos/huggingface/datasets/issues/703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/703/timeline
closed
false
703
null
2020-10-02T12:54:41Z
null
true
713,499,628
https://api.github.com/repos/huggingface/datasets/issues/702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/702/events
[]
null
2020-10-02T10:11:04Z
[]
https://github.com/huggingface/datasets/pull/702
MEMBER
null
false
null
[]
Complete rouge kwargs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/702/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2ODA3Mjg4
{ "diff_url": "https://github.com/huggingface/datasets/pull/702.diff", "html_url": "https://github.com/huggingface/datasets/pull/702", "merged_at": "2020-10-02T10:11:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/702.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/702" }
2020-10-02T09:59:01Z
https://api.github.com/repos/huggingface/datasets/issues/702/comments
In #701 we noticed that some kwargs were missing for rouge
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/702/timeline
closed
false
702
null
2020-10-02T10:11:03Z
null
true
713,485,757
https://api.github.com/repos/huggingface/datasets/issues/701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/701/events
[]
null
2020-10-02T09:55:14Z
[]
https://github.com/huggingface/datasets/pull/701
MEMBER
null
false
null
[]
Add rouge 2 and rouge Lsum to rouge metric outputs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/701/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2Nzk2MTQ1
{ "diff_url": "https://github.com/huggingface/datasets/pull/701.diff", "html_url": "https://github.com/huggingface/datasets/pull/701", "merged_at": "2020-10-02T09:52:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/701.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/701" }
2020-10-02T09:35:46Z
https://api.github.com/repos/huggingface/datasets/issues/701/comments
Continuation of #700 Rouge 2 and Rouge Lsum were missing from Rouge's outputs. Rouge Lsum is also useful for evaluating Rouge L on texts whose sentences are separated by `\n` Fix #617
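A hedged usage sketch: `rougeLsum` treats `\n` as a sentence boundary, so multi-sentence texts are joined with newlines before scoring. Metric loading and the aggregated-score layout follow the `datasets` 1.x API:

```python
from datasets import load_metric

rouge = load_metric("rouge")

# One prediction/reference pair; sentences separated by "\n" for rougeLsum.
predictions = ["the cat sat on the mat .\nit was happy ."]
references = ["the cat sat on the mat .\nthe cat was happy ."]

scores = rouge.compute(
    predictions=predictions,
    references=references,
    rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
)
# With the default aggregator, each entry is an AggregateScore(low, mid, high).
print(scores["rougeLsum"].mid.fmeasure)
```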
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/701/timeline
closed
false
701
null
2020-10-02T09:52:18Z
null
true
713,450,295
https://api.github.com/repos/huggingface/datasets/issues/700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/700/events
[]
null
2020-10-02T11:08:49Z
[]
https://github.com/huggingface/datasets/pull/700
NONE
null
false
null
[]
Add rouge-2 in rouge_types for metric calculation
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/700/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2NzY3MTMz
{ "diff_url": "https://github.com/huggingface/datasets/pull/700.diff", "html_url": "https://github.com/huggingface/datasets/pull/700", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/700.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/700" }
2020-10-02T08:36:45Z
https://api.github.com/repos/huggingface/datasets/issues/700/comments
The description of the ROUGE metric says, ``` _KWARGS_DESCRIPTION = """ Calculates average rouge scores for a list of hypotheses and references Args: predictions: list of predictions to score. Each predictions should be a string with tokens separated by spaces. references: list of reference for each prediction. Each reference should be a string with tokens separated by spaces. Returns: rouge1: rouge_1 f1, rouge2: rouge_2 f1, rougeL: rouge_l f1, rougeLsum: rouge_l precision """ ``` but the `rouge_types` argument defaults to `rouge_types = ["rouge1", "rougeL"]`; this PR adds `rouge2` to the default list so that it reflects the description card.
{ "avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4", "events_url": "https://api.github.com/users/Shashi456/events{/privacy}", "followers_url": "https://api.github.com/users/Shashi456/followers", "following_url": "https://api.github.com/users/Shashi456/following{/other_user}", "gists_url": "https://api.github.com/users/Shashi456/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Shashi456", "id": 18056781, "login": "Shashi456", "node_id": "MDQ6VXNlcjE4MDU2Nzgx", "organizations_url": "https://api.github.com/users/Shashi456/orgs", "received_events_url": "https://api.github.com/users/Shashi456/received_events", "repos_url": "https://api.github.com/users/Shashi456/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Shashi456/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Shashi456/subscriptions", "type": "User", "url": "https://api.github.com/users/Shashi456" }
https://api.github.com/repos/huggingface/datasets/issues/700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/700/timeline
closed
false
700
null
2020-10-02T09:59:05Z
null
true
713,395,642
https://api.github.com/repos/huggingface/datasets/issues/699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/699/events
[]
null
2020-10-03T17:45:52Z
[]
https://github.com/huggingface/datasets/issues/699
NONE
completed
null
null
[]
XNLI dataset is not loading
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/699/reactions" }
MDU6SXNzdWU3MTMzOTU2NDI=
null
2020-10-02T06:53:16Z
https://api.github.com/repos/huggingface/datasets/issues/699/comments
`dataset = datasets.load_dataset(path='xnli')` shows the error below: ``` /opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 39 logger.info("All the checksums matched successfully" + for_verification_name) 40 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip'] ``` I think the URL has changed to "https://cims.nyu.edu/~sbowman/xnli/XNLI-MT-1.0.zip"
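A hedged stopgap while the URL fix propagated: force a fresh download so the cache keyed to the dead URL isn't reused, and skip the stale checksum verification. Both flags follow the `datasets` 1.x API:

```python
from datasets import load_dataset

dataset = load_dataset(
    "xnli",
    ignore_verifications=True,         # skip the checksum recorded for the old URL
    download_mode="force_redownload",  # don't reuse a cache built from the old mirror
)
```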
{ "avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4", "events_url": "https://api.github.com/users/imadarsh1001/events{/privacy}", "followers_url": "https://api.github.com/users/imadarsh1001/followers", "following_url": "https://api.github.com/users/imadarsh1001/following{/other_user}", "gists_url": "https://api.github.com/users/imadarsh1001/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/imadarsh1001", "id": 14936525, "login": "imadarsh1001", "node_id": "MDQ6VXNlcjE0OTM2NTI1", "organizations_url": "https://api.github.com/users/imadarsh1001/orgs", "received_events_url": "https://api.github.com/users/imadarsh1001/received_events", "repos_url": "https://api.github.com/users/imadarsh1001/repos", "site_admin": false, "starred_url": "https://api.github.com/users/imadarsh1001/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/imadarsh1001/subscriptions", "type": "User", "url": "https://api.github.com/users/imadarsh1001" }
https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/699/timeline
closed
false
699
null
2020-10-03T17:43:37Z
null
false
712,979,029
https://api.github.com/repos/huggingface/datasets/issues/697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/697/events
[]
null
2020-10-01T16:12:00Z
[]
https://github.com/huggingface/datasets/pull/697
NONE
null
false
null
[]
Update README.md
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/697/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2MzczNDU5
{ "diff_url": "https://github.com/huggingface/datasets/pull/697.diff", "html_url": "https://github.com/huggingface/datasets/pull/697", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/697.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/697" }
2020-10-01T16:02:42Z
https://api.github.com/repos/huggingface/datasets/issues/697/comments
Hey, I was just telling my subscribers to check out your repositories. Thank you!
{ "avatar_url": "https://avatars.githubusercontent.com/u/71011306?v=4", "events_url": "https://api.github.com/users/bishug/events{/privacy}", "followers_url": "https://api.github.com/users/bishug/followers", "following_url": "https://api.github.com/users/bishug/following{/other_user}", "gists_url": "https://api.github.com/users/bishug/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bishug", "id": 71011306, "login": "bishug", "node_id": "MDQ6VXNlcjcxMDExMzA2", "organizations_url": "https://api.github.com/users/bishug/orgs", "received_events_url": "https://api.github.com/users/bishug/received_events", "repos_url": "https://api.github.com/users/bishug/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bishug/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bishug/subscriptions", "type": "User", "url": "https://api.github.com/users/bishug" }
https://api.github.com/repos/huggingface/datasets/issues/697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/697/timeline
closed
false
697
null
2020-10-01T16:12:00Z
null
true
712,942,977
https://api.github.com/repos/huggingface/datasets/issues/696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/696/events
[]
null
2020-10-02T07:48:19Z
[]
https://github.com/huggingface/datasets/pull/696
MEMBER
null
false
null
[]
Elasticsearch index docs
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/696/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy
{ "diff_url": "https://github.com/huggingface/datasets/pull/696.diff", "html_url": "https://github.com/huggingface/datasets/pull/696", "merged_at": "2020-10-02T07:48:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/696.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/696" }
2020-10-01T15:18:58Z
https://api.github.com/repos/huggingface/datasets/issues/696/comments
I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock Elasticsearch. I think this is good for now, but at some point it would be cool to have an end-to-end test with a real ES running.
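A hedged end-to-end sketch of the documented flow, including the new `load_elasticsearch_index`; the host, port, and index names are placeholders, and a running Elasticsearch instance is assumed:

```python
from datasets import load_dataset

squad = load_dataset("squad", split="validation")

# Build the index once over the "context" column.
squad.add_elasticsearch_index(
    "context", host="localhost", port="9200", es_index_name="hf_squad_context"
)

# Later (e.g., in another session), reattach the already-built index
# instead of rebuilding it.
squad.load_elasticsearch_index(
    "context", host="localhost", port="9200", es_index_name="hf_squad_context"
)

scores, examples = squad.get_nearest_examples("context", "machine learning", k=5)
print(examples["title"])
```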
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/696/timeline
closed
false
696
null
2020-10-02T07:48:18Z
null
true
712,843,949
https://api.github.com/repos/huggingface/datasets/issues/695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/695/events
[]
null
2020-10-01T14:01:15Z
[]
https://github.com/huggingface/datasets/pull/695
MEMBER
null
false
null
[]
Update XNLI download link
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/695/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2MjU5NTM0
{ "diff_url": "https://github.com/huggingface/datasets/pull/695.diff", "html_url": "https://github.com/huggingface/datasets/pull/695", "merged_at": "2020-10-01T14:01:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/695.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/695" }
2020-10-01T13:27:22Z
https://api.github.com/repos/huggingface/datasets/issues/695/comments
The old link isn't working anymore. I updated it with the new official link. Fix #690
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/695/timeline
closed
false
695
null
2020-10-01T14:01:14Z
null
true
712,827,751
https://api.github.com/repos/huggingface/datasets/issues/694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/694/events
[]
null
2020-10-02T07:47:28Z
[]
https://github.com/huggingface/datasets/pull/694
MEMBER
null
false
null
[]
Use GitHub instead of aws in remote dataset tests
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/694/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2MjQ1NzU0
{ "diff_url": "https://github.com/huggingface/datasets/pull/694.diff", "html_url": "https://github.com/huggingface/datasets/pull/694", "merged_at": "2020-10-02T07:47:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/694.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/694" }
2020-10-01T13:07:50Z
https://api.github.com/repos/huggingface/datasets/issues/694/comments
Recently we switched from AWS S3 to GitHub to download dataset scripts. However, in the tests the dummy data were still downloaded from S3, so I changed that to download them from GitHub instead, in the MockDownloadManager. Moreover, I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the entire dataset), so I replaced them with dummy data containing only a few examples.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/694/timeline
closed
false
694
null
2020-10-02T07:47:27Z
null
true
712,822,200
https://api.github.com/repos/huggingface/datasets/issues/693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/693/events
[]
null
2023-09-24T09:48:23Z
[]
https://github.com/huggingface/datasets/pull/693
NONE
null
false
null
[]
Rachel ker add dataset/mlsum
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/693/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2MjQxMjUw
{ "diff_url": "https://github.com/huggingface/datasets/pull/693.diff", "html_url": "https://github.com/huggingface/datasets/pull/693", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/693" }
2020-10-01T13:01:10Z
https://api.github.com/repos/huggingface/datasets/issues/693/comments
.
{ "avatar_url": "https://avatars.githubusercontent.com/u/32742136?v=4", "events_url": "https://api.github.com/users/pdhg/events{/privacy}", "followers_url": "https://api.github.com/users/pdhg/followers", "following_url": "https://api.github.com/users/pdhg/following{/other_user}", "gists_url": "https://api.github.com/users/pdhg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pdhg", "id": 32742136, "login": "pdhg", "node_id": "MDQ6VXNlcjMyNzQyMTM2", "organizations_url": "https://api.github.com/users/pdhg/orgs", "received_events_url": "https://api.github.com/users/pdhg/received_events", "repos_url": "https://api.github.com/users/pdhg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pdhg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdhg/subscriptions", "type": "User", "url": "https://api.github.com/users/pdhg" }
https://api.github.com/repos/huggingface/datasets/issues/693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/693/timeline
closed
false
693
null
2020-10-01T17:01:13Z
null
true
712,818,968
https://api.github.com/repos/huggingface/datasets/issues/692
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/692/events
[]
null
2020-10-02T11:01:59Z
[]
https://github.com/huggingface/datasets/pull/692
NONE
null
false
null
[]
Update README.md
{ "+1": 0, "-1": 4, "confused": 2, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 6, "url": "https://api.github.com/repos/huggingface/datasets/issues/692/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk2MjM4NzIw
{ "diff_url": "https://github.com/huggingface/datasets/pull/692.diff", "html_url": "https://github.com/huggingface/datasets/pull/692", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/692.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/692" }
2020-10-01T12:57:22Z
https://api.github.com/repos/huggingface/datasets/issues/692/comments
{ "avatar_url": "https://avatars.githubusercontent.com/u/62796466?v=4", "events_url": "https://api.github.com/users/mayank1897/events{/privacy}", "followers_url": "https://api.github.com/users/mayank1897/followers", "following_url": "https://api.github.com/users/mayank1897/following{/other_user}", "gists_url": "https://api.github.com/users/mayank1897/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mayank1897", "id": 62796466, "login": "mayank1897", "node_id": "MDQ6VXNlcjYyNzk2NDY2", "organizations_url": "https://api.github.com/users/mayank1897/orgs", "received_events_url": "https://api.github.com/users/mayank1897/received_events", "repos_url": "https://api.github.com/users/mayank1897/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mayank1897/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mayank1897/subscriptions", "type": "User", "url": "https://api.github.com/users/mayank1897" }
https://api.github.com/repos/huggingface/datasets/issues/692/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/692/timeline
closed
false
692
null
2020-10-02T11:01:59Z
null
true
712,389,499
https://api.github.com/repos/huggingface/datasets/issues/691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/691/events
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
null
2022-02-15T10:46:50Z
[]
https://github.com/huggingface/datasets/issues/691
NONE
completed
null
null
[]
Add UI filter to filter datasets based on task
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/691/reactions" }
MDU6SXNzdWU3MTIzODk0OTk=
null
2020-10-01T00:56:18Z
https://api.github.com/repos/huggingface/datasets/issues/691/comments
This is great work, so a huge shoutout to the contributors and Hugging Face. The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either place we could have a filter that selects whether a dataset is good for the following tasks (non-exhaustive list): - Classification - Multi label - Multi class - Q&A - Summarization - Translation I believe this feature might have some value for folks trying to find datasets for a particular task and then testing their model capabilities. Thank you :)
{ "avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4", "events_url": "https://api.github.com/users/praateekmahajan/events{/privacy}", "followers_url": "https://api.github.com/users/praateekmahajan/followers", "following_url": "https://api.github.com/users/praateekmahajan/following{/other_user}", "gists_url": "https://api.github.com/users/praateekmahajan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/praateekmahajan", "id": 7589415, "login": "praateekmahajan", "node_id": "MDQ6VXNlcjc1ODk0MTU=", "organizations_url": "https://api.github.com/users/praateekmahajan/orgs", "received_events_url": "https://api.github.com/users/praateekmahajan/received_events", "repos_url": "https://api.github.com/users/praateekmahajan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/praateekmahajan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/praateekmahajan/subscriptions", "type": "User", "url": "https://api.github.com/users/praateekmahajan" }
https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/691/timeline
closed
false
691
null
2022-02-15T10:46:50Z
null
false
712,150,321
https://api.github.com/repos/huggingface/datasets/issues/690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/690/events
[]
null
2020-10-01T17:15:08Z
[]
https://github.com/huggingface/datasets/issues/690
NONE
completed
null
null
[]
XNLI dataset: NonMatchingChecksumError
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/690/reactions" }
MDU6SXNzdWU3MTIxNTAzMjE=
null
2020-09-30T17:50:03Z
https://api.github.com/repos/huggingface/datasets/issues/690/comments
Hi, I tried to download the "xnli" dataset in Colab using `xnli = load_dataset(path='xnli')` but got a 'NonMatchingChecksumError' error `NonMatchingChecksumError Traceback (most recent call last) <ipython-input-27-a87bedc82eeb> in <module>() ----> 1 xnli = load_dataset(path='xnli') 3 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']` The same code worked well several days ago in Colab but has stopped working now. Thanks!
{ "avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4", "events_url": "https://api.github.com/users/xiey1/events{/privacy}", "followers_url": "https://api.github.com/users/xiey1/followers", "following_url": "https://api.github.com/users/xiey1/following{/other_user}", "gists_url": "https://api.github.com/users/xiey1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/xiey1", "id": 13307358, "login": "xiey1", "node_id": "MDQ6VXNlcjEzMzA3MzU4", "organizations_url": "https://api.github.com/users/xiey1/orgs", "received_events_url": "https://api.github.com/users/xiey1/received_events", "repos_url": "https://api.github.com/users/xiey1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/xiey1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/xiey1/subscriptions", "type": "User", "url": "https://api.github.com/users/xiey1" }
https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/690/timeline
closed
false
690
null
2020-10-01T14:01:14Z
null
false
712,095,262
https://api.github.com/repos/huggingface/datasets/issues/689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/689/events
[]
null
2020-09-30T16:45:32Z
[]
https://github.com/huggingface/datasets/pull/689
MEMBER
null
false
null
[]
Switch to pandas reader for text dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/689/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk1NjMzNjMy
{ "diff_url": "https://github.com/huggingface/datasets/pull/689.diff", "html_url": "https://github.com/huggingface/datasets/pull/689", "merged_at": "2020-09-30T16:45:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/689.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/689" }
2020-09-30T16:28:12Z
https://api.github.com/repos/huggingface/datasets/issues/689/comments
Following the discussion in #622 , it appears that there's no appropriate way to use the pyarrow csv reader to read text files because of the separator. In this PR I switched to pandas to read the file. Moreover, pandas allows reading the file by chunks, which means that you can build the arrow dataset from a text file that is bigger than RAM (we used to have to shard text files as mentioned in https://github.com/huggingface/datasets/issues/610#issuecomment-691672919) From a test that I did locally on a 1GB text file, the pyarrow reader used to run in 150ms while the new one takes 650ms (multithreading off for pyarrow). This is probably due to chunking, since I observe the same speed difference by calling `read()` and calling `read(chunksize)` + `readline()` to read the text file.
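A rough sketch of the chunked-reading idea, assuming the bell character `\a` is used as a separator that never occurs in the text (the helper name and chunk size are illustrative, not the PR's actual code):

```python
import csv

import pandas as pd
import pyarrow as pa

def iter_text_batches(path, chunksize=100_000):
    # Stream a plain-text file as pyarrow tables, one chunk at a time.
    reader = pd.read_csv(
        path,
        sep="\a",                # single-character separator absent from the text
        header=None,
        names=["text"],
        dtype=str,
        quoting=csv.QUOTE_NONE,  # treat quote characters as plain text
        chunksize=chunksize,     # keeps memory bounded for files bigger than RAM
    )
    for chunk in reader:
        yield pa.Table.from_pandas(chunk)
```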
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/689/timeline
closed
false
689
null
2020-09-30T16:45:31Z
null
true
711,804,828
https://api.github.com/repos/huggingface/datasets/issues/688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/688/events
[]
null
2020-10-01T08:45:46Z
[]
https://github.com/huggingface/datasets/pull/688
MEMBER
null
false
null
[]
Disable tokenizers parallelism in multiprocessed map
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/688/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk1MzkwMTc1
{ "diff_url": "https://github.com/huggingface/datasets/pull/688.diff", "html_url": "https://github.com/huggingface/datasets/pull/688", "merged_at": "2020-10-01T08:45:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/688.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/688" }
2020-09-30T09:53:34Z
https://api.github.com/repos/huggingface/datasets/issues/688/comments
It was reported in #620 that using multiprocessing with a tokenizer shows this message: ``` The current process just got forked. Disabling parallelism to avoid deadlocks... To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false) ``` This message is shown when TOKENIZERS_PARALLELISM is unset. Moreover, if it is set to `true`, then the program just hangs. To hide the message (if TOKENIZERS_PARALLELISM is unset) and avoid hanging (if TOKENIZERS_PARALLELISM is `true`), I set TOKENIZERS_PARALLELISM to `false` when forking the process. After forking, it is set back to its original value. Also I added a warning if TOKENIZERS_PARALLELISM was `true` and is set to `false`: ``` Setting TOKENIZERS_PARALLELISM=false for forked processes. ``` cc @n1t0
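A hedged sketch of the save/set/restore pattern described above (the context-manager shape is illustrative; it is not the library's actual code):

```python
import os
import warnings
from contextlib import contextmanager

@contextmanager
def tokenizers_parallelism_disabled():
    # Remember the current value so it can be restored after the fork.
    previous = os.environ.get("TOKENIZERS_PARALLELISM")
    if previous == "true":
        warnings.warn("Setting TOKENIZERS_PARALLELISM=false for forked processes.")
    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    try:
        yield
    finally:
        if previous is None:
            os.environ.pop("TOKENIZERS_PARALLELISM", None)
        else:
            os.environ["TOKENIZERS_PARALLELISM"] = previous
```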
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/688/timeline
closed
false
688
null
2020-10-01T08:45:45Z
null
true
711,664,810
https://api.github.com/repos/huggingface/datasets/issues/687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/687/events
[]
null
2020-09-30T09:53:03Z
[]
https://github.com/huggingface/datasets/issues/687
NONE
completed
null
null
[]
`ArrowInvalid` occurs while running `Dataset.map()` function
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/687/reactions" }
MDU6SXNzdWU3MTE2NjQ4MTA=
null
2020-09-30T06:16:50Z
https://api.github.com/repos/huggingface/datasets/issues/687/comments
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code: ```python # train_ds = Dataset(features: { # 'title': Value(dtype='string', id=None), # 'score': Value(dtype='float64', id=None) # }, num_rows: 99999) # suggested in #665 class PicklableTokenizer(BertJapaneseTokenizer): def __getstate__(self): state = dict(self.__dict__) state['do_lower_case'] = self.word_tokenizer.do_lower_case state['never_split'] = self.word_tokenizer.never_split del state['word_tokenizer'] return state def __setstate(self): do_lower_case = state.pop('do_lower_case') never_split = state.pop('never_split') self.__dict__ = state self.word_tokenizer = MecabTokenizer( do_lower_case=do_lower_case, never_split=never_split ) t = PicklableTokenizer.from_pretrained('bert-base-japanese-whole-word-masking') encoded = train_ds.map( lambda examples: {'tokens': t.encode(examples['title'], max_length=1000)}, batched=True, batch_size=1000 ) ``` Error Message: ``` 99% 99/100 [00:22<00:00, 39.07ba/s] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <timed exec> in <module> /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1242 fn_kwargs=fn_kwargs, 1243 new_fingerprint=new_fingerprint, -> 1244 update_data=update_data, 1245 ) 1246 else: /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 151 "output_all_columns": self._output_all_columns, 152 } --> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 154 if new_format["columns"] is not None: 155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names)) /usr/local/lib/python3.6/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 161 # Call actual function 162 --> 163 out = func(self, *args, **kwargs) 164 165 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.6/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data) 1496 if update_data: 1497 batch = cast_to_python_objects(batch) -> 1498 writer.write_batch(batch) 1499 if update_data: 1500 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file /usr/local/lib/python3.6/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 271 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type) 272 typed_sequence_examples[col] = typed_sequence --> 273 pa_table = pa.Table.from_pydict(typed_sequence_examples) 274 self.write_table(pa_table) 275 /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays() /usr/local/lib/python3.6/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate() /usr/local/lib/python3.6/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Column 4 named tokens expected length 999 but got length 1000 ```
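One plausible cause, offered as an assumption rather than a confirmed diagnosis: with `batched=True`, the mapped function must return one output per input row, but `t.encode(examples['title'], ...)` encodes the whole batch as a single sequence of up to 1000 ids, so the `tokens` column length no longer matches the shorter final batch of 999 rows. Encoding each title separately keeps the lengths aligned:

```python
# Hypothetical fix: return one token list per input title.
encoded = train_ds.map(
    lambda examples: {
        "tokens": [t.encode(title, max_length=1000) for title in examples["title"]]
    },
    batched=True,
    batch_size=1000,
)
```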
{ "avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4", "events_url": "https://api.github.com/users/peinan/events{/privacy}", "followers_url": "https://api.github.com/users/peinan/followers", "following_url": "https://api.github.com/users/peinan/following{/other_user}", "gists_url": "https://api.github.com/users/peinan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/peinan", "id": 5601012, "login": "peinan", "node_id": "MDQ6VXNlcjU2MDEwMTI=", "organizations_url": "https://api.github.com/users/peinan/orgs", "received_events_url": "https://api.github.com/users/peinan/received_events", "repos_url": "https://api.github.com/users/peinan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/peinan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/peinan/subscriptions", "type": "User", "url": "https://api.github.com/users/peinan" }
https://api.github.com/repos/huggingface/datasets/issues/687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/687/timeline
closed
false
687
null
2020-09-30T09:53:03Z
null
false
711,385,739
https://api.github.com/repos/huggingface/datasets/issues/686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/686/events
[]
null
2021-01-08T18:29:26Z
[]
https://github.com/huggingface/datasets/issues/686
CONTRIBUTOR
completed
null
null
[]
Dataset browser url is still https://huggingface.co/nlp/viewer/
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/686/reactions" }
MDU6SXNzdWU3MTEzODU3Mzk=
null
2020-09-29T19:21:52Z
https://api.github.com/repos/huggingface/datasets/issues/686/comments
Might be worth updating to https://huggingface.co/datasets/viewer/
{ "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "events_url": "https://api.github.com/users/jarednielsen/events{/privacy}", "followers_url": "https://api.github.com/users/jarednielsen/followers", "following_url": "https://api.github.com/users/jarednielsen/following{/other_user}", "gists_url": "https://api.github.com/users/jarednielsen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jarednielsen", "id": 4564897, "login": "jarednielsen", "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "organizations_url": "https://api.github.com/users/jarednielsen/orgs", "received_events_url": "https://api.github.com/users/jarednielsen/received_events", "repos_url": "https://api.github.com/users/jarednielsen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jarednielsen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jarednielsen/subscriptions", "type": "User", "url": "https://api.github.com/users/jarednielsen" }
https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/686/timeline
closed
false
686
null
2021-01-08T18:29:26Z
null
false
711,182,185
https://api.github.com/repos/huggingface/datasets/issues/685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/685/events
[]
null
2020-09-30T08:39:56Z
[]
https://github.com/huggingface/datasets/pull/685
MEMBER
null
false
null
[]
Add features parameter to CSV
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/685/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz
{ "diff_url": "https://github.com/huggingface/datasets/pull/685.diff", "html_url": "https://github.com/huggingface/datasets/pull/685", "merged_at": "2020-09-30T08:39:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/685.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/685" }
2020-09-29T14:43:36Z
https://api.github.com/repos/huggingface/datasets/issues/685/comments
Add support for the `features` parameter when loading a csv dataset: ```python from datasets import load_dataset, Features features = Features({...}) csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features) ``` I added tests to make sure that it is also compatible with the caching system. Fix #623
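To make the snippet above concrete, here is a hedged sketch with filled-in column definitions (the column names and types are assumptions, not taken from the PR):

```python
from datasets import load_dataset, Features, Value, ClassLabel

features = Features(
    {
        "text": Value("string"),
        "label": ClassLabel(names=["negative", "positive"]),
    }
)
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```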
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/685/timeline
closed
false
685
null
2020-09-30T08:39:54Z
null
true
711,080,947
https://api.github.com/repos/huggingface/datasets/issues/684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/684/events
[]
null
2020-09-29T15:56:46Z
[]
https://github.com/huggingface/datasets/pull/684
MEMBER
null
false
null
[]
Fix column order issue in cast
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/684/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk0ODA2NjE1
{ "diff_url": "https://github.com/huggingface/datasets/pull/684.diff", "html_url": "https://github.com/huggingface/datasets/pull/684", "merged_at": "2020-09-29T15:56:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/684.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/684" }
2020-09-29T12:49:13Z
https://api.github.com/repos/huggingface/datasets/issues/684/comments
Previously, the order of the columns in the features passed to `cast_` mattered. Even when the features passed to `cast_` had the same order as the dataset features, it could still fail because the schema that was built was always in alphabetical order. This issue was reported by @lewtun in #623. To fix that, I made the schema follow the order of the arrow table columns. I also added the possibility to give features that are not ordered the same way as the dataset features.
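A small sketch of what this enables (the column names and types are illustrative; `cast_` was the in-place cast method at the time):

```python
from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"b": [1, 2], "a": ["x", "y"]})
# The Features dict no longer has to list columns in the dataset's order:
ds.cast_(Features({"a": Value("string"), "b": Value("float64")}))
print(ds.features)
```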
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/684/timeline
closed
false
684
null
2020-09-29T15:56:45Z
null
true
710,942,704
https://api.github.com/repos/huggingface/datasets/issues/683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/683/events
[]
null
2021-05-05T18:24:31Z
[]
https://github.com/huggingface/datasets/pull/683
MEMBER
null
false
null
[]
Fix wrong delimiter in text dataset
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/683/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk0NzAwNzY1
{ "diff_url": "https://github.com/huggingface/datasets/pull/683.diff", "html_url": "https://github.com/huggingface/datasets/pull/683", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/683.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/683" }
2020-09-29T09:43:24Z
https://api.github.com/repos/huggingface/datasets/issues/683/comments
The delimiter is set to the bell character as it is almost never used in text files. However, in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`. I replaced `\b` with `\a`. Hopefully it fixes the issues mentioned by some users in #622
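A quick sanity check of the two escape sequences discussed above:

```python
# "\a" is BEL (ASCII 7), "\b" is backspace (ASCII 8).
assert "\a" == chr(7)
assert "\b" == chr(8)
```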
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/683/timeline
closed
false
683
null
2020-09-29T09:44:06Z
null
true
710,325,399
https://api.github.com/repos/huggingface/datasets/issues/682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/682/events
[]
null
2020-09-28T17:30:13Z
[]
https://github.com/huggingface/datasets/pull/682
MEMBER
null
false
null
[]
Update navbar chapter titles color
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/682/reactions" }
MDExOlB1bGxSZXF1ZXN0NDk0MTkzMzEw
{ "diff_url": "https://github.com/huggingface/datasets/pull/682.diff", "html_url": "https://github.com/huggingface/datasets/pull/682", "merged_at": "2020-09-28T17:30:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/682.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/682" }
2020-09-28T14:35:17Z
https://api.github.com/repos/huggingface/datasets/issues/682/comments
Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423 It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections. See the changes [here](https://691-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html)
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/682/timeline
closed
false
682
null
2020-09-28T17:30:12Z
null
true
710,075,721
https://api.github.com/repos/huggingface/datasets/issues/681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/681/events
[]
null
2020-09-28T10:26:13Z
[]
https://github.com/huggingface/datasets/pull/681
CONTRIBUTOR
null
false
null
[]
Adding missing @property (+2 small flake8 fixes).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/681/reactions" }
MDExOlB1bGxSZXF1ZXN0NDkzOTkwMjEz
{ "diff_url": "https://github.com/huggingface/datasets/pull/681.diff", "html_url": "https://github.com/huggingface/datasets/pull/681", "merged_at": "2020-09-28T10:26:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/681.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/681" }
2020-09-28T08:53:53Z
https://api.github.com/repos/huggingface/datasets/issues/681/comments
Fixes #678
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
https://api.github.com/repos/huggingface/datasets/issues/681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/681/timeline
closed
false
681
null
2020-09-28T10:26:09Z
null
true
710,066,138
https://api.github.com/repos/huggingface/datasets/issues/680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/680/events
[]
null
2020-09-29T15:54:47Z
[]
https://github.com/huggingface/datasets/pull/680
CONTRIBUTOR
null
false
null
[]
Fix bug related to boolean in GAP dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/680/reactions" }
MDExOlB1bGxSZXF1ZXN0NDkzOTgyMjY4
{ "diff_url": "https://github.com/huggingface/datasets/pull/680.diff", "html_url": "https://github.com/huggingface/datasets/pull/680", "merged_at": "2020-09-29T15:54:47Z", "patch_url": "https://github.com/huggingface/datasets/pull/680.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/680" }
2020-09-28T08:39:39Z
https://api.github.com/repos/huggingface/datasets/issues/680/comments
### Why I did The values in `row["A-coref"]` and `row["B-coref"]` are `'TRUE'` or `'FALSE'`. Their type is `string`, and `bool('FALSE')` evaluates to `True` in Python because any non-empty string is truthy, so both values were being transformed into `True`. I fixed this problem. ### What I did I changed `bool(row["A-coref"])` and `bool(row["B-coref"])` to `row["A-coref"] == "TRUE"` and `row["B-coref"] == "TRUE"`. Thank you!
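A tiny demonstration of the pitfall and the fix (illustrative values only):

```python
# Any non-empty string is truthy, so bool() cannot parse "FALSE":
assert bool("FALSE") is True
assert bool("") is False

# Comparing against the literal gives the intended boolean:
assert ("FALSE" == "TRUE") is False
assert ("TRUE" == "TRUE") is True
```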
{ "avatar_url": "https://avatars.githubusercontent.com/u/14996977?v=4", "events_url": "https://api.github.com/users/otakumesi/events{/privacy}", "followers_url": "https://api.github.com/users/otakumesi/followers", "following_url": "https://api.github.com/users/otakumesi/following{/other_user}", "gists_url": "https://api.github.com/users/otakumesi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/otakumesi", "id": 14996977, "login": "otakumesi", "node_id": "MDQ6VXNlcjE0OTk2OTc3", "organizations_url": "https://api.github.com/users/otakumesi/orgs", "received_events_url": "https://api.github.com/users/otakumesi/received_events", "repos_url": "https://api.github.com/users/otakumesi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/otakumesi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/otakumesi/subscriptions", "type": "User", "url": "https://api.github.com/users/otakumesi" }
https://api.github.com/repos/huggingface/datasets/issues/680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/680/timeline
closed
false
680
null
2020-09-29T15:54:47Z
null
true
710,065,838
https://api.github.com/repos/huggingface/datasets/issues/679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/679/events
[]
null
2020-09-28T14:42:20Z
[]
https://github.com/huggingface/datasets/pull/679
MEMBER
null
false
null
[]
Fix negative ids when slicing with an array
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/679/reactions" }
MDExOlB1bGxSZXF1ZXN0NDkzOTgyMDMx
{ "diff_url": "https://github.com/huggingface/datasets/pull/679.diff", "html_url": "https://github.com/huggingface/datasets/pull/679", "merged_at": "2020-09-28T14:42:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/679.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/679" }
2020-09-28T08:39:08Z
https://api.github.com/repos/huggingface/datasets/issues/679/comments
```python from datasets import Dataset d = Dataset.from_dict({"a": range(10)}) print(d[[0, -1]]) # OverflowError ``` raises an error because of the negative id. This PR fixes that. Fix #668
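For context, a minimal sketch of the usual way negative indices are normalized (not necessarily the exact code in this PR):

```python
def normalize_index(i: int, length: int) -> int:
    # Map Python-style negative indices to non-negative positions.
    return i + length if i < 0 else i

assert normalize_index(-1, 10) == 9
assert normalize_index(0, 10) == 0
```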
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/679/timeline
closed
false
679
null
2020-09-28T14:42:19Z
null
true
710,060,497
https://api.github.com/repos/huggingface/datasets/issues/678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/678/events
[]
null
2020-09-28T10:26:09Z
[]
https://github.com/huggingface/datasets/issues/678
CONTRIBUTOR
completed
null
null
[]
The download instructions for c4 datasets are not contained in the error message
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/678/reactions" }
MDU6SXNzdWU3MTAwNjA0OTc=
null
2020-09-28T08:30:54Z
https://api.github.com/repos/huggingface/datasets/issues/678/comments
The manual download instructions are not clear ```The dataset c4 with config en requires manual data. Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff8c5969760>>. Manual data can be loaded with `datasets.load_dataset(c4, data_dir='<path/to/manual/data>') ``` Either `@property` could be added to C4.manual_download_instructions (or make it a real property), or the manual_download_instructions method needs to be called, I think. Let me know if you want a PR for this, but I'm not sure which possible fix is the correct one.
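A hedged sketch of the pitfall being described (the class and message are simplified placeholders, not the real builder code):

```python
class C4:
    # Without @property, interpolating the attribute into the error
    # message prints "<bound method ...>" instead of the text below.
    @property
    def manual_download_instructions(self):
        return "Please download C4 manually and pass data_dir=<path>."

builder = C4()
print(f"Please follow the manual download instructions: {builder.manual_download_instructions}")
```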
{ "avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4", "events_url": "https://api.github.com/users/Narsil/events{/privacy}", "followers_url": "https://api.github.com/users/Narsil/followers", "following_url": "https://api.github.com/users/Narsil/following{/other_user}", "gists_url": "https://api.github.com/users/Narsil/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Narsil", "id": 204321, "login": "Narsil", "node_id": "MDQ6VXNlcjIwNDMyMQ==", "organizations_url": "https://api.github.com/users/Narsil/orgs", "received_events_url": "https://api.github.com/users/Narsil/received_events", "repos_url": "https://api.github.com/users/Narsil/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Narsil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Narsil/subscriptions", "type": "User", "url": "https://api.github.com/users/Narsil" }
https://api.github.com/repos/huggingface/datasets/issues/678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/678/timeline
closed
false
678
null
2020-09-28T10:26:09Z
null
false
710,055,239
https://api.github.com/repos/huggingface/datasets/issues/677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/677/events
[]
null
2020-09-28T14:42:43Z
[]
https://github.com/huggingface/datasets/pull/677
MEMBER
null
false
null
[]
Move cache dir root creation in builder's init
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/677/reactions" }
MDExOlB1bGxSZXF1ZXN0NDkzOTczNDE3
{ "diff_url": "https://github.com/huggingface/datasets/pull/677.diff", "html_url": "https://github.com/huggingface/datasets/pull/677", "merged_at": "2020-09-28T14:42:42Z", "patch_url": "https://github.com/huggingface/datasets/pull/677.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/677" }
2020-09-28T08:22:46Z
https://api.github.com/repos/huggingface/datasets/issues/677/comments
We use lock files in the builder initialization, but sometimes the cache directory where they're supposed to live was not created. To fix that, I moved the creation of the builder's cache dir root into the builder's init. Fix #671
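An illustrative sketch of the pattern (the paths and lock file name are assumptions for illustration):

```python
import os
from filelock import FileLock

cache_root = os.path.expanduser("~/.cache/huggingface/datasets")
# Create the cache root first, so taking a lock inside it cannot
# fail with FileNotFoundError.
os.makedirs(cache_root, exist_ok=True)
with FileLock(os.path.join(cache_root, "builder.lock")):
    ...  # safe to set up the builder's cache directories here
```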
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://api.github.com/repos/huggingface/datasets/issues/677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/677/timeline
closed
false
677
null
2020-09-28T14:42:42Z
null
true
710,014,319
https://api.github.com/repos/huggingface/datasets/issues/676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/676/events
[]
null
2020-10-07T13:46:33Z
[]
https://github.com/huggingface/datasets/issues/676
NONE
completed
null
null
[]
train_test_split returns empty dataset item
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/676/reactions" }
MDU6SXNzdWU3MTAwMTQzMTk=
null
2020-09-28T07:19:33Z
https://api.github.com/repos/huggingface/datasets/issues/676/comments
I tried to split my dataset with `train_test_split`, but after that the items in the `train` and `test` `Dataset`s are empty. The code: ``` yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp') print(yelp_data[0]) yelp_data = yelp_data.train_test_split(test_size=0.1) print(yelp_data) print(yelp_data['test']) print(yelp_data['test'][0]) ``` The output: ``` {'stars': 2.0, 'text': 'xxxx'} Loading cached split indices for dataset at /home/ssd4/huanglianzhe/test_yelp/cache-f9b22d8b9d5a7346.arrow and /home/ssd4/huanglianzhe/test_yelp/cache-4aa26fa4005059d1.arrow DatasetDict({'train': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 7219009), 'test': Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113)}) Dataset(features: {'stars': Value(dtype='float64', id=None), 'text': Value(dtype='string', id=None)}, num_rows: 802113) {} # yelp_data['test'][0] is empty ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4", "events_url": "https://api.github.com/users/mojave-pku/events{/privacy}", "followers_url": "https://api.github.com/users/mojave-pku/followers", "following_url": "https://api.github.com/users/mojave-pku/following{/other_user}", "gists_url": "https://api.github.com/users/mojave-pku/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mojave-pku", "id": 26648528, "login": "mojave-pku", "node_id": "MDQ6VXNlcjI2NjQ4NTI4", "organizations_url": "https://api.github.com/users/mojave-pku/orgs", "received_events_url": "https://api.github.com/users/mojave-pku/received_events", "repos_url": "https://api.github.com/users/mojave-pku/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mojave-pku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mojave-pku/subscriptions", "type": "User", "url": "https://api.github.com/users/mojave-pku" }
https://api.github.com/repos/huggingface/datasets/issues/676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/676/timeline
closed
false
676
null
2020-10-07T13:38:06Z
null
false
709,818,725
https://api.github.com/repos/huggingface/datasets/issues/675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/675/events
[]
null
2020-10-20T09:08:49Z
[]
https://github.com/huggingface/datasets/issues/675
CONTRIBUTOR
completed
null
null
[]
Add custom dataset to NLP?
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/675/reactions" }
MDU6SXNzdWU3MDk4MTg3MjU=
null
2020-09-27T21:22:50Z
https://api.github.com/repos/huggingface/datasets/issues/675/comments
Is it possible to add a custom dataset, such as a .csv file, to the NLP library? Thanks.
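One way this is typically done, as a hedged sketch (the file name is a placeholder):

```python
from datasets import load_dataset

# Load a local CSV file as a dataset; data_files can also be a list
# of files or a dict mapping split names to files.
dataset = load_dataset("csv", data_files="my_file.csv")
```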
{ "avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4", "events_url": "https://api.github.com/users/timpal0l/events{/privacy}", "followers_url": "https://api.github.com/users/timpal0l/followers", "following_url": "https://api.github.com/users/timpal0l/following{/other_user}", "gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/timpal0l", "id": 6556710, "login": "timpal0l", "node_id": "MDQ6VXNlcjY1NTY3MTA=", "organizations_url": "https://api.github.com/users/timpal0l/orgs", "received_events_url": "https://api.github.com/users/timpal0l/received_events", "repos_url": "https://api.github.com/users/timpal0l/repos", "site_admin": false, "starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions", "type": "User", "url": "https://api.github.com/users/timpal0l" }
https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/675/timeline
closed
false
675
null
2020-10-20T09:08:49Z
null
false
709,661,006
https://api.github.com/repos/huggingface/datasets/issues/674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/674/events
[]
null
2020-10-05T08:28:18Z
[]
https://github.com/huggingface/datasets/issues/674
NONE
completed
null
null
[]
load_dataset() won't download in Windows
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/674/reactions" }
MDU6SXNzdWU3MDk2NjEwMDY=
null
2020-09-27T03:56:25Z
https://api.github.com/repos/huggingface/datasets/issues/674/comments
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've waited upwards of 18 hours to download the 'multi-news' dataset (which isn't very big), and still nothing. I've tried running it through different IDEs and the command line, but it had the same behavior. I've also tried it with all virus and malware protection turned off. I've made sure Python and all IDEs are exceptions to the firewall and all the requisite permissions are enabled. Additionally, I checked to see if other packages could download content, such as an nltk corpus, and they could. I've also run the same script using Ubuntu and it downloaded fine (and quickly). When I copied the downloaded datasets from my Ubuntu drive to my Windows .cache folder, it worked fine by reusing the already-downloaded dataset, but it's cumbersome to do that for every dataset I want to try in my Windows environment. Could this be a bug, or is there something I'm doing wrong or not thinking of? Thanks.
{ "avatar_url": "https://avatars.githubusercontent.com/u/34422661?v=4", "events_url": "https://api.github.com/users/ThisDavehead/events{/privacy}", "followers_url": "https://api.github.com/users/ThisDavehead/followers", "following_url": "https://api.github.com/users/ThisDavehead/following{/other_user}", "gists_url": "https://api.github.com/users/ThisDavehead/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ThisDavehead", "id": 34422661, "login": "ThisDavehead", "node_id": "MDQ6VXNlcjM0NDIyNjYx", "organizations_url": "https://api.github.com/users/ThisDavehead/orgs", "received_events_url": "https://api.github.com/users/ThisDavehead/received_events", "repos_url": "https://api.github.com/users/ThisDavehead/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ThisDavehead/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ThisDavehead/subscriptions", "type": "User", "url": "https://api.github.com/users/ThisDavehead" }
https://api.github.com/repos/huggingface/datasets/issues/674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/674/timeline
closed
false
674
null
2020-10-05T08:28:18Z
null
false
709,603,989
https://api.github.com/repos/huggingface/datasets/issues/673
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/673/events
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
null
2022-02-15T10:47:58Z
[]
https://github.com/huggingface/datasets/issues/673
NONE
completed
null
null
[]
blog_authorship_corpus crashed
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/673/reactions" }
MDU6SXNzdWU3MDk2MDM5ODk=
null
2020-09-26T20:15:28Z
https://api.github.com/repos/huggingface/datasets/issues/673/comments
This is just to report that when I pick blog_authorship_corpus in https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus, I get this: ![image](https://user-images.githubusercontent.com/7553188/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)
{ "avatar_url": "https://avatars.githubusercontent.com/u/7553188?v=4", "events_url": "https://api.github.com/users/Moshiii/events{/privacy}", "followers_url": "https://api.github.com/users/Moshiii/followers", "following_url": "https://api.github.com/users/Moshiii/following{/other_user}", "gists_url": "https://api.github.com/users/Moshiii/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Moshiii", "id": 7553188, "login": "Moshiii", "node_id": "MDQ6VXNlcjc1NTMxODg=", "organizations_url": "https://api.github.com/users/Moshiii/orgs", "received_events_url": "https://api.github.com/users/Moshiii/received_events", "repos_url": "https://api.github.com/users/Moshiii/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Moshiii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Moshiii/subscriptions", "type": "User", "url": "https://api.github.com/users/Moshiii" }
https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/673/timeline
closed
false
673
null
2022-02-15T10:47:58Z
null
false
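A quick way to narrow down the viewer crash reported in this issue is to load the same dataset with the library directly. This is a sketch under the assumption that the viewer and the library run the same dataset script, so a failure here would point at the script rather than the viewer:

```python
from datasets import load_dataset

# If this raises the same error, the dataset script itself is at fault;
# if it succeeds, the problem is specific to the hosted viewer.
dataset = load_dataset("blog_authorship_corpus", split="train")
print(dataset[0])
```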
709,575,527
https://api.github.com/repos/huggingface/datasets/issues/672
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/672/events
[]
null
2022-10-04T17:30:17Z
[]
https://github.com/huggingface/datasets/issues/672
CONTRIBUTOR
completed
null
null
[]
Questions about XSUM
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/672/reactions" }
MDU6SXNzdWU3MDk1NzU1Mjc=
null
2020-09-26T17:16:24Z
https://api.github.com/repos/huggingface/datasets/issues/672/comments
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions about it. Here is how I loaded the data: ``` >>> data = datasets.load_dataset('xsum', version='1.0.1') >>> data['train'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 204017) >>> data['test'] Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, num_rows: 11333) ``` The first issue is that the instance counts don't match what I see on [the dataset's website](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset#what-builds-the-xsum-dataset) (11,333 vs 11,334 for the test set; 204,017 vs 204,045 for the training set): ``` … training (90%, 204,045), validation (5%, 11,332), and test (5%, 11,334) set. ``` Any thoughts on why? Perhaps @mariamabarham could help here, since she recently had a PR on this dataset: https://github.com/huggingface/datasets/pull/289 (reviewed by @patrickvonplaten). Another issue is that the instances don't seem to have IDs. The original dataset provides IDs for the instances (https://github.com/EdinburghNLP/XSum/blob/master/XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json), but to be able to use them, the dataset sizes need to match. CC @jbragg
{ "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danyaljj", "id": 2441454, "login": "danyaljj", "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "repos_url": "https://api.github.com/users/danyaljj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "type": "User", "url": "https://api.github.com/users/danyaljj" }
https://api.github.com/repos/huggingface/datasets/issues/672/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/672/timeline
closed
false
672
null
2022-10-04T17:30:17Z
null
false
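The count mismatch flagged in this issue can be checked programmatically. This is a sketch assuming the upstream split file cited in the issue is still reachable at that URL and is keyed by split name (the upstream file may use 'dev' rather than 'validation'); it compares the loaded split sizes against the ID lists in the original XSum split JSON:

```python
import json
import urllib.request

from datasets import load_dataset

# Split file cited in the issue; assumed to still be available at this URL.
SPLIT_URL = (
    "https://raw.githubusercontent.com/EdinburghNLP/XSum/master/"
    "XSum-Dataset/XSum-TRAINING-DEV-TEST-SPLIT-90-5-5.json"
)

with urllib.request.urlopen(SPLIT_URL) as resp:
    official = json.load(resp)  # maps split name -> list of BBC article IDs

data = load_dataset("xsum")
for split in ("train", "validation", "test"):
    if split in official:
        print(f"{split}: {len(data[split])} loaded vs {len(official[split])} official")
```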