| column | dtype | stats |
|---|---|---|
| state | stringclasses | 2 values |
| created_at | stringlengths | 20–20 |
| active_lock_reason | null | |
| url | stringlengths | 61–61 |
| assignee | dict | |
| reactions | dict | |
| draft | bool | 2 classes |
| labels_url | stringlengths | 75–75 |
| user | dict | |
| html_url | stringlengths | 49–51 |
| assignees | list | |
| locked | bool | 1 class |
| updated_at | stringlengths | 20–20 |
| closed_at | stringlengths | 20–20 |
| milestone | dict | |
| comments | sequence | |
| state_reason | stringclasses | 3 values |
| labels | list | |
| title | stringlengths | 1–290 |
| author_association | stringclasses | 3 values |
| timeline_url | stringlengths | 70–70 |
| body | stringlengths | 0–228k |
| repository_url | stringclasses | 1 value |
| pull_request | dict | |
| id | int64 | 773M–2.11B |
| comments_url | stringlengths | 70–70 |
| node_id | stringlengths | 18–32 |
| performed_via_github_app | null | |
| number | int64 | 1.62k–6.64k |
| events_url | stringlengths | 68–68 |
| is_pull_request | bool | 2 classes |
closed
2021-03-29T06:48:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/2129
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2129/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4", "events_url": "https://api.github.com/users/jnishi/events{/privacy}", "followers_url": "https://api.github.com/users/jnishi/followers", "following_url": "https://api.github.com/users/jnishi/following{/other_user}", "gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jnishi", "id": 836541, "login": "jnishi", "node_id": "MDQ6VXNlcjgzNjU0MQ==", "organizations_url": "https://api.github.com/users/jnishi/orgs", "received_events_url": "https://api.github.com/users/jnishi/received_events", "repos_url": "https://api.github.com/users/jnishi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jnishi/subscriptions", "type": "User", "url": "https://api.github.com/users/jnishi" }
https://github.com/huggingface/datasets/issues/2129
[]
false
2021-04-01T04:58:40Z
2021-04-01T04:58:40Z
null
[ "Hi !\r\nWe're not using `TextDatasetForNextSentencePrediction` in `datasets`.\r\nAlthough you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.", "Thanks.\r\n\r\nDo you mean that `TextDatasetForNextSentencePrediction.create_exapmles_from_document` can be applied to dataset object other than `TextDatasetForNextSentencePrediction` e.g. a `Dataset` object which is loaded by `datasets.load_dataset`?", "It would probably require a bit of tweaking, but you can apply it to a dataset, yes.\r\nThis should give you a new dataset with sentence pairs you can train a model on.\r\n\r\nYou can find the documentation about dataset processing here:\r\nhttps://huggingface.co/docs/datasets/processing.html#processing-data-with-map", "Thank you for detail information.\r\n\r\nI'll try to apply `create_examples_from_document` to `Dataset` object.\r\n" ]
completed
[]
How to train BERT model with next sentence prediction?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2129/timeline
Hello. I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction like ` TextDatasetForNextSentencePrediction` of `huggingface/transformers` ?
https://api.github.com/repos/huggingface/datasets
null
843,033,656
https://api.github.com/repos/huggingface/datasets/issues/2129/comments
MDU6SXNzdWU4NDMwMzM2NTY=
null
2,129
https://api.github.com/repos/huggingface/datasets/issues/2129/events
false
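The exchange in issue 2129 above asks how to prepare a plain text dataset for next sentence prediction. As a library-agnostic sketch of the pairing idea behind `create_examples_from_document` (a toy version, not the actual `transformers` implementation, and with no `datasets` dependency), one can pair each sentence with its true successor or, half the time, with a random sentence:

```python
import random

def make_nsp_examples(sentences, seed=0):
    """Build (sentence_a, sentence_b, is_next) triples from an ordered list
    of sentences. With probability 0.5 the true next sentence is kept
    (is_next=True); otherwise a random sentence is substituted.
    Note: real implementations re-sample when the random pick happens to
    be the true successor; this sketch does not."""
    rng = random.Random(seed)
    examples = []
    for i in range(len(sentences) - 1):
        if rng.random() < 0.5:
            examples.append((sentences[i], sentences[i + 1], True))
        else:
            examples.append((sentences[i], rng.choice(sentences), False))
    return examples

doc = ["The cat sat down.", "It began to purr.",
       "Soon it fell asleep.", "Dogs bark loudly."]
examples = make_nsp_examples(doc)
```

Applying the same function over a real corpus via `Dataset.map`, as suggested in the thread, would produce a new dataset of sentence pairs to train on.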
closed
2021-03-29T06:34:02Z
null
https://api.github.com/repos/huggingface/datasets/issues/2128
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4", "events_url": "https://api.github.com/users/adamlin120/events{/privacy}", "followers_url": "https://api.github.com/users/adamlin120/followers", "following_url": "https://api.github.com/users/adamlin120/following{/other_user}", "gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adamlin120", "id": 31605305, "login": "adamlin120", "node_id": "MDQ6VXNlcjMxNjA1MzA1", "organizations_url": "https://api.github.com/users/adamlin120/orgs", "received_events_url": "https://api.github.com/users/adamlin120/received_events", "repos_url": "https://api.github.com/users/adamlin120/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions", "type": "User", "url": "https://api.github.com/users/adamlin120" }
https://github.com/huggingface/datasets/issues/2128
[]
false
2021-03-31T12:48:01Z
2021-03-31T12:48:01Z
null
[ "Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) " ]
completed
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
Dialogue action slot name and value are reversed in MultiWoZ 2.2
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2128/timeline
Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial! I spot an error that the order of Dialogue action slot names and values are reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262
https://api.github.com/repos/huggingface/datasets
null
843,023,910
https://api.github.com/repos/huggingface/datasets/issues/2128/comments
MDU6SXNzdWU4NDMwMjM5MTA=
null
2,128
https://api.github.com/repos/huggingface/datasets/issues/2128/events
false
closed
2021-03-29T06:24:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/2127
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2127/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2127/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
https://github.com/huggingface/datasets/pull/2127
[]
false
2021-03-29T12:16:24Z
2021-03-29T12:16:24Z
null
[]
null
[]
make documentation more clear to use different cloud storage
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2127/timeline
This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2127.diff", "html_url": "https://github.com/huggingface/datasets/pull/2127", "merged_at": "2021-03-29T12:16:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/2127.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2127" }
843,017,199
https://api.github.com/repos/huggingface/datasets/issues/2127/comments
MDExOlB1bGxSZXF1ZXN0NjAyNDYxMzc3
null
2,127
https://api.github.com/repos/huggingface/datasets/issues/2127/events
true
closed
2021-03-28T16:57:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/2126
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2126/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2126/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2126
[]
false
2021-03-29T09:27:14Z
2021-03-29T09:27:13Z
null
[]
null
[]
Replace legacy torch.Tensor constructor with torch.tensor
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2126/timeline
The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo).
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2126.diff", "html_url": "https://github.com/huggingface/datasets/pull/2126", "merged_at": "2021-03-29T09:27:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/2126.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2126" }
842,779,966
https://api.github.com/repos/huggingface/datasets/issues/2126/comments
MDExOlB1bGxSZXF1ZXN0NjAyMjcyMjg4
null
2,126
https://api.github.com/repos/huggingface/datasets/issues/2126/events
true
closed
2021-03-28T08:30:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/2125
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4", "events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}", "followers_url": "https://api.github.com/users/kosuke-kitahara/followers", "following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}", "gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kosuke-kitahara", "id": 42398050, "login": "kosuke-kitahara", "node_id": "MDQ6VXNlcjQyMzk4MDUw", "organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs", "received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events", "repos_url": "https://api.github.com/users/kosuke-kitahara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions", "type": "User", "url": "https://api.github.com/users/kosuke-kitahara" }
https://github.com/huggingface/datasets/issues/2125
[]
false
2021-03-28T12:29:25Z
2021-03-28T12:29:25Z
null
[ "Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ", "@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem." ]
completed
[]
Is dataset timit_asr broken?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2125/timeline
Using `timit_asr` dataset, I saw all records are the same. ``` python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]), num_examples=20) ``` `output` <img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png"> I double-checked it [here](https://huggingface.co/datasets/viewer/), and met the same problem. <img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
https://api.github.com/repos/huggingface/datasets
null
842,690,570
https://api.github.com/repos/huggingface/datasets/issues/2125/comments
MDU6SXNzdWU4NDI2OTA1NzA=
null
2,125
https://api.github.com/repos/huggingface/datasets/issues/2125/events
false
open
2021-03-28T00:07:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/2124
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2124/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://github.com/huggingface/datasets/issues/2124
[]
false
2021-03-29T13:23:43Z
null
null
[ "I haven't played with it (yet) but it sounds really cool !\r\n" ]
null
[]
Adding ScaNN library to do MIPS?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2124/timeline
@lhoestq Hi I am thinking of adding this new google library to do the MIPS similar to **add_faiss_idex**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors. https://github.com/google-research/google-research/tree/master/scann ![image](https://user-images.githubusercontent.com/16892570/112738294-78ec9800-8fc6-11eb-9a5f-3d7ee5818e76.png)
https://api.github.com/repos/huggingface/datasets
null
842,627,729
https://api.github.com/repos/huggingface/datasets/issues/2124/comments
MDU6SXNzdWU4NDI2Mjc3Mjk=
null
2,124
https://api.github.com/repos/huggingface/datasets/issues/2124/events
false
closed
2021-03-27T18:41:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/2123
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2123/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2123/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29705940?v=4", "events_url": "https://api.github.com/users/mille-s/events{/privacy}", "followers_url": "https://api.github.com/users/mille-s/followers", "following_url": "https://api.github.com/users/mille-s/following{/other_user}", "gists_url": "https://api.github.com/users/mille-s/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mille-s", "id": 29705940, "login": "mille-s", "node_id": "MDQ6VXNlcjI5NzA1OTQw", "organizations_url": "https://api.github.com/users/mille-s/orgs", "received_events_url": "https://api.github.com/users/mille-s/received_events", "repos_url": "https://api.github.com/users/mille-s/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mille-s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mille-s/subscriptions", "type": "User", "url": "https://api.github.com/users/mille-s" }
https://github.com/huggingface/datasets/issues/2123
[]
false
2021-05-12T16:15:18Z
2021-05-12T16:15:17Z
null
[ "Hi,\r\n\r\nsadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n``` ", "Thanks for the answer! I updated the library but unfortunately it didn't solve the problem.", "Is there an error message ?\r\nWhat stacktrace do you get if you interrupt the execution of the program while downloading ?", "Sorry for the long time since my last comment, I tried again and don't seem to have the problem anymore, thanks for your support!", "Great ! I'm closing the issue then. Feel free to re-open if you experience this issue again" ]
completed
[]
Problem downloading GEM wiki_auto_asset_turk dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2123/timeline
@yjernite ### Summary I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code. ### Steps to reproduce Code snippet: from datasets import load_dataset #dataset = load_dataset('gem', 'web_nlg_en') dataset = load_dataset('gem', 'wiki_auto_asset_turk') ``` **Expected behavior:** I expect the dataset to start downloading (download bar appears and progresses toward 100%) **Actual behavior:** Instead of seeing the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more: Downloading: 36.6kB [00:00, 37.2MB/s] Downloading: 41.7kB [00:00, ?B/s] Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d... ### Is this a regression? No, it was the first time I was trying to download this dataset (same for the other ones). ### Debug info - Python version: Python 3.8.2 - OS version: Windows 10 Family
https://api.github.com/repos/huggingface/datasets
null
842,577,285
https://api.github.com/repos/huggingface/datasets/issues/2123/comments
MDU6SXNzdWU4NDI1NzcyODU=
null
2,123
https://api.github.com/repos/huggingface/datasets/issues/2123/events
false
closed
2021-03-26T18:09:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/2122
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 5, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 5, "url": "https://api.github.com/repos/huggingface/datasets/issues/2122/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2122/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2122
[]
false
2021-08-04T18:11:59Z
2021-04-06T14:33:01Z
null
[]
null
[]
Fast table queries with interpolation search
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2122/timeline
## Intro This should fix issue #1803 Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation. To fix this I implemented interpolation search that is pretty effective since datasets usually verifies the condition of evenly distributed chunks (the default chunk size is fixed). ## Benchmark Here is a [benchmark](https://pastebin.com/utEXUqsR) I did on bookcorpus (74M rows): for the current implementation ```python >>> python speed.py Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766 ========================= Querying unshuffled bookcorpus ========================= Avg access time key=1 : 0.018ms Avg access time key=74004227 : 0.215ms Avg access time key=range(74003204, 74004228) : 1.416ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 92.532ms ========================== Querying shuffled bookcorpus ========================== Avg access time key=1 : 0.187ms Avg access time key=74004227 : 6.642ms Avg access time key=range(74003204, 74004228) : 90.941ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 3448.456ms ``` for the new one using interpolation search: ```python >>> python speed.py Loaded dataset 'bookcorpus', len=74004228, nbytes=4835358766 ========================= Querying unshuffled bookcorpus ========================= Avg access time key=1 : 0.076ms Avg access time key=74004227 : 0.056ms Avg access time key=range(74003204, 74004228) : 1.807ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 24.028ms ========================== Querying shuffled bookcorpus ========================== Avg access time key=1 : 0.061ms Avg access time key=74004227 : 0.058ms Avg access time key=range(74003204, 74004228) : 22.166ms Avg access time key=RandIter(low=0, high=74004228, size=1024, seed=42): 42.757ms ``` The RandIter class is just an iterable of 1024 random indices from 0 to 74004228. Here is also a plot showing the speed improvement depending on the dataset size: ![image](https://user-images.githubusercontent.com/42851186/112673587-32335c80-8e65-11eb-9a0c-58ad774abaec.png) ## Implementation details: - `datasets.table.Table` objects implement interpolation search for the `slice` method - The interpolation search requires to store the offsets of all the chunks of a table. The offsets are stored when the `Table` is initialized. - `datasets.table.Table.slice` returns a `datasets.table.Table` using interpolation search - `datasets.table.Table.fast_slice` returns a `pyarrow.Table` object using interpolation search. This is useful to get a part of a dataset if we don't need the indexing structure for future computations. For example it's used when querying an example as a dictionary. - Now a `Dataset` object is always backed by a `datasets.table.Table` object. If one passes a `pyarrow.Table` to initialize a `Dataset`, then it's converted to a `datasets.table.Table` ## Checklist: - [x] implement interpolation search - [x] use `datasets.table.Table` in `Dataset` objects - [x] update current tests - [x] add tests for interpolation search - [x] comments and docstring - [x] add the benchmark to the CI Fix #1803.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2122.diff", "html_url": "https://github.com/huggingface/datasets/pull/2122", "merged_at": "2021-04-06T14:33:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2122.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2122" }
842,194,588
https://api.github.com/repos/huggingface/datasets/issues/2122/comments
MDExOlB1bGxSZXF1ZXN0NjAxODE3MjI0
null
2,122
https://api.github.com/repos/huggingface/datasets/issues/2122/events
true
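The PR 2122 body above describes interpolation search over the cumulative offsets of a table's chunks. A minimal, self-contained sketch of that lookup (a hypothetical helper for illustration, not the code merged in the PR): given sorted cumulative offsets, find the chunk containing row `i` by guessing its position linearly between the current bounds, which converges very fast when chunks are evenly sized, as the PR notes.

```python
def interpolation_search(offsets, i):
    """Return j such that offsets[j] <= i < offsets[j + 1].
    `offsets` holds the cumulative start offset of each chunk, ending
    with the total row count. On evenly distributed chunks this takes
    O(log log n) steps instead of binary search's O(log n)."""
    lo, hi = 0, len(offsets) - 1
    while hi - lo > 1:
        # Guess the position by linear interpolation between the bounds.
        guess = lo + (hi - lo) * (i - offsets[lo]) // (offsets[hi] - offsets[lo])
        guess = min(max(guess, lo + 1), hi - 1)  # keep the guess strictly inside
        if offsets[guess] <= i:
            lo = guess
        else:
            hi = guess
    return lo

offsets = [0, 1000, 2000, 3000, 4000]  # four chunks of 1000 rows each
print(interpolation_search(offsets, 2500))  # -> 2 (third chunk)
```

Once the chunk is found, the row is addressed inside it with a plain subtraction (`i - offsets[j]`), which is what makes `Table.fast_slice`-style queries cheap.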
closed
2021-03-26T17:02:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/2121
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2121/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/2121
[]
false
2021-05-10T13:17:18Z
2021-05-10T09:41:41Z
null
[ "Good start! Here are some proposed next steps:\r\n- We want the Class structure to reflect the template - so the parser know what section titles to expect and when something has gone wrong\r\n- As a result, we don't need to parse the table of contents, since it will always be the same\r\n- For each section/subsection it would be cool to have a variable saying whether it's filled out or not (when it's either empty or has `[More Information Needed]`)\r\n- `attributes` should probably be `text`", "@yjernite @lhoestq \r\n\r\nI have added basic validation checking in the class. It works based on a YAML string. The YAML string determines the expected structure and which text is to be checked. The `text` can be true or false showing whether the text has to be checked or not for emptiness. Similarly, each subsection is parsed recursively. I have used print statement currently so that all issues are shown.\r\n\r\nPlease let me know your thoughts.\r\n\r\nI haven't added a variable that keeps a track of whether the text is empty or not but it can be done easliy if required.", "This looks like a good start !\r\nMaybe we can use a field named `allow_empty` instead of `text` ?\r\nAlso +1 for keeping track of empty texts\r\n\r\nDo you think you can have a way to collect all the validation fails of a readme and then raise an error showing all the failures instead of using print ?\r\n\r\nThen we can create a `tests/test_dataset_cards.py` test file to make sure all the readmes of the repo are valid !", "Hi @lhoestq \r\n\r\nI have added changes accordingly. I prepared a list which stores all the errors and raises them at the end. I'm not sure if there is a better way.", "Hi @lhoestq @yjernite \r\n\r\nPlease find the output for the existing READMEs here: http://p.ip.fi/2vYU\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq\r\n\r\nI have added some basic tests, also have restructured `ReadMe` class slightly.\r\n\r\nThere is one print statement currently, I'm not sure how to remove it. 
Basically, I want to warn but not stop further validation. I can't append to a list because the `error_list` and `warning_list` are both only present in `validate` method, and this print is present in the `parse` method. This is done when someone has repeated a section multiple times. For e.g.:\r\n\r\n```markdown\r\n---\r\n---\r\n\r\n# Dataset Card for FashionMNIST\r\n## Dataset Description\r\n## Dataset Description\r\n```\r\n\r\nIn this case, I check for validation only in the latest entry.\r\n\r\nI can also raise an error (ideal case scenario), but still, it is in the `parse`. Should I add `error_lines` and `warning_lines` as instance variables? That would probably solve the issue.\r\n\r\nIn tests, I'm using a dummy YAML string for structure, we can also make it into a file but I feel that is not a hard requirement. Let me know your thoughts.\r\n\r\nI will add tests for `from_readme` as well.\r\n\r\nHowever, I would love to be able to check the exact message in the test when an error is raised. I checked a couple of methods but couldn't get it working. Let me know if you're aware of a way to do that.", "Hi @lhoestq \r\n\r\nThanks for merging. :)\r\nThanks a lot to you and @yjernite for guiding me and helping me out.\r\n\r\nYes, I'll also use the next PR for combining the readme and tags validation. ^_^" ]
null
[]
Add Validation For README
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2121/timeline
Hi @lhoestq, @yjernite This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each. Let me know if this is going in the right direction :) Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`: ```json { "name": "./datasets/fashion_mnist/README.md", "attributes": "", "subsections": [ { "name": "Dataset Card for FashionMNIST", "attributes": "", "subsections": [ { "name": "Table of Contents", "attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)", "subsections": [] }, { "name": "Dataset Description", "attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**", "subsections": [ { "name": "Dataset Summary", "attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.", "subsections": [] }, { "name": "Supported Tasks and Leaderboards", "attributes": "[More Information Needed]", "subsections": [] }, { "name": "Languages", "attributes": "[More Information Needed]", "subsections": [] } ] }, { "name": "Dataset Structure", "attributes": "", "subsections": [ { "name": "Data Instances", "attributes": "A data point comprises an image and its label.", "subsections": [] }, { "name": "Data Fields", "attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |", "subsections": [] }, { "name": "Data Splits", "attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.", "subsections": [] } ] }, { "name": "Dataset Creation", "attributes": "", "subsections": [ { "name": "Curation Rationale", "attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.", "subsections": [] }, { "name": "Source Data", "attributes": "", "subsections": [ { "name": "Initial Data Collection and Normalization", "attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e.
some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.", "subsections": [] }, { "name": "Who are the source image producers?", "attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.", "subsections": [] } ] }, { "name": "Annotations", "attributes": "", "subsections": [ { "name": "Annotation process", "attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. 
Each product contains only one silhouette code.", "subsections": [] }, { "name": "Who are the annotators?", "attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.", "subsections": [] } ] }, { "name": "Personal and Sensitive Information", "attributes": "[More Information Needed]", "subsections": [] } ] }, { "name": "Considerations for Using the Data", "attributes": "", "subsections": [ { "name": "Social Impact of Dataset", "attributes": "[More Information Needed]", "subsections": [] }, { "name": "Discussion of Biases", "attributes": "[More Information Needed]", "subsections": [] }, { "name": "Other Known Limitations", "attributes": "[More Information Needed]", "subsections": [] } ] }, { "name": "Additional Information", "attributes": "", "subsections": [ { "name": "Dataset Curators", "attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf", "subsections": [] }, { "name": "Licensing Information", "attributes": "MIT Licence", "subsections": [] }, { "name": "Citation Information", "attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}", "subsections": [] }, { "name": "Contributions", "attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.", "subsections": [] } ] } ] } ] } ``` Thanks, Gunjan
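The nested `{"name", "attributes", "subsections"}` structure shown above can be produced with a small stack-based markdown-heading parser. A minimal stdlib-only sketch follows; the function name `parse_sections` and the exact parsing rules are hypothetical illustrations, not the actual code from this pull request (which uses a `Section` class hierarchy):

```python
# Sketch: group markdown lines under their headings, nesting deeper
# heading levels (more "#") inside shallower ones, producing the same
# {"name", "attributes", "subsections"} shape shown in the output above.
def parse_sections(text, root_name="README.md"):
    root = {"name": root_name, "attributes": "", "subsections": []}
    stack = [(0, root)]  # (heading level, node); root acts as level 0
    for line in text.splitlines():
        if line.startswith("#"):
            level = len(line) - len(line.lstrip("#"))
            node = {"name": line.lstrip("#").strip(),
                    "attributes": "", "subsections": []}
            # Pop back up to the nearest shallower heading.
            while stack and stack[-1][0] >= level:
                stack.pop()
            stack[-1][1]["subsections"].append(node)
            stack.append((level, node))
        elif line.strip():
            # Non-heading text accumulates as the current section's attributes.
            top = stack[-1][1]
            top["attributes"] = (top["attributes"] + "\n" + line).strip()
    return root

demo = parse_sections("# Dataset Card\n## Summary\nShort description.")
print(demo["subsections"][0]["subsections"][0])
```

This sketch ignores edge cases such as `#` characters inside fenced code blocks, which a real parser would need to handle.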
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2121.diff", "html_url": "https://github.com/huggingface/datasets/pull/2121", "merged_at": "2021-05-10T09:41:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2121.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2121" }
842,148,633
https://api.github.com/repos/huggingface/datasets/issues/2121/comments
MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4
null
2,121
https://api.github.com/repos/huggingface/datasets/issues/2121/events
true
closed
2021-03-26T13:22:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2120
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2120
[]
false
2021-03-26T15:52:22Z
2021-03-26T15:52:22Z
null
[ "Thanks for reporting :) We're looking into it", "Back up. " ]
completed
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
dataset viewer does not work anymore
NONE
https://api.github.com/repos/huggingface/datasets/issues/2120/timeline
Hi, I normally use this link to see all datasets and how I can load them: https://huggingface.co/datasets/viewer/ Now I am getting: 502 Bad Gateway nginx/1.18.0 (Ubuntu) Could you bring this webpage back? It was very helpful. @lhoestq thanks for your help
https://api.github.com/repos/huggingface/datasets
null
841,954,521
https://api.github.com/repos/huggingface/datasets/issues/2120/comments
MDU6SXNzdWU4NDE5NTQ1MjE=
null
2,120
https://api.github.com/repos/huggingface/datasets/issues/2120/events
false
closed
2021-03-26T03:58:38Z
null
https://api.github.com/repos/huggingface/datasets/issues/2119
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2119/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2119/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5506053?v=4", "events_url": "https://api.github.com/users/NihalHarish/events{/privacy}", "followers_url": "https://api.github.com/users/NihalHarish/followers", "following_url": "https://api.github.com/users/NihalHarish/following{/other_user}", "gists_url": "https://api.github.com/users/NihalHarish/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NihalHarish", "id": 5506053, "login": "NihalHarish", "node_id": "MDQ6VXNlcjU1MDYwNTM=", "organizations_url": "https://api.github.com/users/NihalHarish/orgs", "received_events_url": "https://api.github.com/users/NihalHarish/received_events", "repos_url": "https://api.github.com/users/NihalHarish/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NihalHarish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NihalHarish/subscriptions", "type": "User", "url": "https://api.github.com/users/NihalHarish" }
https://github.com/huggingface/datasets/pull/2119
[]
false
2021-03-26T15:13:52Z
2021-03-26T15:13:52Z
null
[]
null
[]
copy.deepcopy os.environ instead of copy
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2119/timeline
Fixes: https://github.com/huggingface/datasets/issues/2115 - bug fix: using os.environ.copy() returns a dict. - using deepcopy(os.environ) returns an `_Environ` object - Changing the datatype of the `_Environ` object can break code, if subsequent libraries perform operations using APIs exclusive to the environ object, like `environ.getenv()` for example. Testing: Tested the change on my terminal: ``` >>> import os >>> x = deepcopy(os.environ) >>> y = os.environ >>> x is y False >>> isinstance(x, type(os.environ)) True >>> z = os.environ.copy() >>> isinstance(z, type(os.environ)) False >>> isinstance(z, dict) True ```
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2119.diff", "html_url": "https://github.com/huggingface/datasets/pull/2119", "merged_at": "2021-03-26T15:13:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/2119.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2119" }
841,567,199
https://api.github.com/repos/huggingface/datasets/issues/2119/comments
MDExOlB1bGxSZXF1ZXN0NjAxMjg2MjIy
null
2,119
https://api.github.com/repos/huggingface/datasets/issues/2119/events
true
closed
2021-03-26T03:48:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/2118
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2118/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2118/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2118
[]
false
2021-03-26T12:03:23Z
2021-03-26T12:00:05Z
null
[ "I thought deepcopy on `os.environ` is unsafe (see [this](https://stackoverflow.com/questions/13142972/using-copy-deepcopy-on-os-environ-in-python-appears-broken)), but I can't replicate the behavior described in the linked SO thread.\r\n\r\nClosing this one because #2119 has a much cleaner approach." ]
null
[]
Remove os.environ.copy in Dataset.map
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2118/timeline
Replace `os.environ.copy` with an in-place modification. Fixes #2115
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2118.diff", "html_url": "https://github.com/huggingface/datasets/pull/2118", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2118.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2118" }
841,563,329
https://api.github.com/repos/huggingface/datasets/issues/2118/comments
MDExOlB1bGxSZXF1ZXN0NjAxMjgzMDUx
null
2,118
https://api.github.com/repos/huggingface/datasets/issues/2118/events
true
closed
2021-03-26T02:35:22Z
null
https://api.github.com/repos/huggingface/datasets/issues/2117
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4", "events_url": "https://api.github.com/users/Frankie123421/events{/privacy}", "followers_url": "https://api.github.com/users/Frankie123421/followers", "following_url": "https://api.github.com/users/Frankie123421/following{/other_user}", "gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Frankie123421", "id": 54012361, "login": "Frankie123421", "node_id": "MDQ6VXNlcjU0MDEyMzYx", "organizations_url": "https://api.github.com/users/Frankie123421/orgs", "received_events_url": "https://api.github.com/users/Frankie123421/received_events", "repos_url": "https://api.github.com/users/Frankie123421/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions", "type": "User", "url": "https://api.github.com/users/Frankie123421" }
https://github.com/huggingface/datasets/issues/2117
[]
false
2021-08-25T21:44:05Z
2021-03-26T02:40:26Z
null
[ "@Frankie123421 what was the resolution to this?", "> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric", "thank you!" ]
completed
[]
load_metric from local "glue.py" meet error 'NoneType' object is not callable
NONE
https://api.github.com/repos/huggingface/datasets/issues/2117/timeline
actual_task = "mnli" if task == "mnli-mm" else task dataset = load_dataset(path='/home/glue.py', name=actual_task) metric = load_metric(path='/home/glue.py', name=actual_task) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-7ab77a465d81> in <module> 1 actual_task = "mnli" if task == "mnli-mm" else task 2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task) ----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task) ~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs) 508 keep_in_memory=keep_in_memory, 509 experiment_id=experiment_id, --> 510 **metric_init_kwargs, 511 ) 512 TypeError: 'NoneType' object is not callable Please help
https://api.github.com/repos/huggingface/datasets
null
841,535,283
https://api.github.com/repos/huggingface/datasets/issues/2117/comments
MDU6SXNzdWU4NDE1MzUyODM=
null
2,117
https://api.github.com/repos/huggingface/datasets/issues/2117/events
false
closed
2021-03-26T00:37:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/2116
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "events_url": "https://api.github.com/users/GeetDsa/events{/privacy}", "followers_url": "https://api.github.com/users/GeetDsa/followers", "following_url": "https://api.github.com/users/GeetDsa/following{/other_user}", "gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GeetDsa", "id": 13940397, "login": "GeetDsa", "node_id": "MDQ6VXNlcjEzOTQwMzk3", "organizations_url": "https://api.github.com/users/GeetDsa/orgs", "received_events_url": "https://api.github.com/users/GeetDsa/received_events", "repos_url": "https://api.github.com/users/GeetDsa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions", "type": "User", "url": "https://api.github.com/users/GeetDsa" }
https://github.com/huggingface/datasets/issues/2116
[]
false
2021-03-31T14:30:32Z
2021-03-31T14:30:32Z
null
[ "Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over inheritance\" approach with a simple wrapper class that delegates calls to a wrapped `Dataset` (map, etc.). Btw, the library offers the `datasets.Dataset.from_pandas` class method to directly create a `datasets.Dataset` from the dataframe." ]
completed
[]
Creating custom dataset results in error while calling the map() function
NONE
https://api.github.com/repos/huggingface/datasets/issues/2116/timeline
calling `map()` of `datasets` library results into an error while defining a Custom dataset. Reproducible example: ``` import datasets class MyDataset(datasets.Dataset): def __init__(self, sentences): "Initialization" self.samples = sentences def __len__(self): "Denotes the total number of samples" return len(self.samples) def __getitem__(self, index): "Generates one sample of data" # Select sample # Load data and get label samples = self.samples[index] return samples def preprocess_function_train(examples): inputs = examples labels = [example+tokenizer.eos_token for example in examples ] inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True) labels = tokenizer(labels, max_length=30, padding=True, truncation=True) model_inputs = inputs model_inputs["labels"] = labels["input_ids"] print("about to return") return model_inputs ##train["sentence"] is dataframe column train_dataset = MyDataset(train['sentence'].values.tolist()) train_dataset = train_dataset.map( preprocess_function, batched = True, batch_size=32 ) ``` Stack trace of error: ``` Traceback (most recent call last): File "dir/train_generate.py", line 362, in <module> main() File "dir/train_generate.py", line 245, in main train_dataset = train_dataset.map( File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map return self._map_single( File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper unformatted_columns = set(self.column_names) - set(self._format_columns or []) File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names return self._data.column_names AttributeError: 'MyDataset' object has no attribute '_data' ```
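As the reply to this issue suggests, "association over inheritance" avoids the subclassing pitfalls entirely: wrap a real `datasets.Dataset` (e.g. one built with `Dataset.from_pandas(train)`) and delegate calls to it. A minimal stdlib-only sketch of the delegation pattern, using a stand-in object instead of an actual `datasets.Dataset` (the class name `DatasetWrapper` is hypothetical):

```python
# Sketch of "association over inheritance": wrap a dataset object and
# forward attribute lookups to it instead of subclassing datasets.Dataset.
class DatasetWrapper:
    def __init__(self, dataset):
        # In real use this would be e.g. datasets.Dataset.from_pandas(df).
        self._dataset = dataset

    def __getattr__(self, name):
        # Invoked only when the attribute is not found on the wrapper,
        # so map(), column_names, etc. are delegated to the wrapped dataset.
        return getattr(self._dataset, name)

    def __len__(self):
        return len(self._dataset)


# Stand-in mimicking a tiny slice of the Dataset interface, for illustration.
class FakeDataset:
    column_names = ["sentence"]

    def __len__(self):
        return 3

    def map(self, fn):
        return [fn(i) for i in range(len(self))]


wrapped = DatasetWrapper(FakeDataset())
print(wrapped.column_names)
print(wrapped.map(lambda i: i * 2))
```

Because the wrapper never overrides `__getitem__` or touches `_data`, the internal invariants that `map()` relies on stay intact.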
https://api.github.com/repos/huggingface/datasets
null
841,481,292
https://api.github.com/repos/huggingface/datasets/issues/2116/comments
MDU6SXNzdWU4NDE0ODEyOTI=
null
2,116
https://api.github.com/repos/huggingface/datasets/issues/2116/events
false
closed
2021-03-25T20:29:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2115
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2115/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4", "events_url": "https://api.github.com/users/leleamol/events{/privacy}", "followers_url": "https://api.github.com/users/leleamol/followers", "following_url": "https://api.github.com/users/leleamol/following{/other_user}", "gists_url": "https://api.github.com/users/leleamol/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leleamol", "id": 19983848, "login": "leleamol", "node_id": "MDQ6VXNlcjE5OTgzODQ4", "organizations_url": "https://api.github.com/users/leleamol/orgs", "received_events_url": "https://api.github.com/users/leleamol/received_events", "repos_url": "https://api.github.com/users/leleamol/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leleamol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leleamol/subscriptions", "type": "User", "url": "https://api.github.com/users/leleamol" }
https://github.com/huggingface/datasets/issues/2115
[]
false
2021-03-26T15:13:52Z
2021-03-26T15:13:52Z
null
[]
completed
[]
The datasets.map() implementation modifies the datatype of os.environ object
NONE
https://api.github.com/repos/huggingface/datasets/issues/2115/timeline
In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'. This causes following function calls to fail as follows: ` x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None) TypeError: get() takes no keyword arguments ` It looks like the following line in datasets.map implementation introduced this functionality. https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421 Here is the test script to reproduce this error. ``` from datasets import load_dataset from transformers import AutoTokenizer import os def test_train(): model_checkpoint = "distilgpt2" datasets = load_dataset('wikitext', 'wikitext-2-raw-v1') tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True) tokenizer.pad_token = tokenizer.eos_token def tokenize_function(examples): y = tokenizer(examples['text'], truncation=True, max_length=64) return y x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None) print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}") print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}") datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"]) print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}") x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None) print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}") if __name__ == "__main__": test_train() ```
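The type change can be observed without `datasets` installed at all. A small stdlib-only check, mirroring the approach later merged in #2119 (switching from a plain-`dict` copy to `copy.deepcopy`):

```python
import copy
import os

# os.environ is an os._Environ object, not a plain dict.
# deepcopy preserves that type; building a dict from it (which is what
# os.environ.copy() effectively hands back) loses _Environ-specific behavior.
preserved = copy.deepcopy(os.environ)
downgraded = dict(os.environ)

print(type(os.environ).__name__)
print(isinstance(preserved, type(os.environ)))   # deepcopy keeps _Environ
print(isinstance(downgraded, type(os.environ)))  # plain dict does not
```

Restoring `os.environ` from the dict copy (as the `map()` implementation did) therefore rebinds the name to an object with a different type, which is exactly the failure this issue describes.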
https://api.github.com/repos/huggingface/datasets
null
841,283,974
https://api.github.com/repos/huggingface/datasets/issues/2115/comments
MDU6SXNzdWU4NDEyODM5NzQ=
null
2,115
https://api.github.com/repos/huggingface/datasets/issues/2115/events
false
closed
2021-03-25T18:40:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/2114
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2114/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2114/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliaschalkidis", "id": 1626984, "login": "iliaschalkidis", "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "type": "User", "url": "https://api.github.com/users/iliaschalkidis" }
https://github.com/huggingface/datasets/pull/2114
[]
false
2021-03-31T10:38:50Z
2021-03-31T10:38:50Z
null
[ "> Awesome thank you :)\r\n> This is really cool\r\n> \r\n> I left a few comments.\r\n> \r\n> Also it looks like the dummy data are quite big (100-200KB each). Can you try to reduce their sizes please ? For example I noticed that all the jsonl files inside the `dummy_data.zip` files have 20 lines. Can you only keep 2 lines instead ?\r\n\r\nHi @lhoestq, I did my best to improve the README files, while I also decreased dummy data examples. I included one more legal dataset.", "@lhoestq thanks for your review.\r\n\r\n I shortened the examples in README files and removed `DEFAULT_CONFIG_BUILDER` from `eu_regulatory_ir.py`." ]
null
[]
Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR)
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2114/timeline
Add support for three legal NLP datasets: - EURLEX (https://www.aclweb.org/anthology/P19-1636/) - ECtHR cases (https://arxiv.org/abs/2103.13084) - EU-REG-IR (https://arxiv.org/abs/2101.10726)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2114.diff", "html_url": "https://github.com/huggingface/datasets/pull/2114", "merged_at": "2021-03-31T10:38:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/2114.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2114" }
841,207,878
https://api.github.com/repos/huggingface/datasets/issues/2114/comments
MDExOlB1bGxSZXF1ZXN0NjAwOTc1MTA3
null
2,114
https://api.github.com/repos/huggingface/datasets/issues/2114/events
true
closed
2021-03-25T18:18:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/2113
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2113/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2113/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2113
[]
false
2021-03-31T11:30:14Z
2021-03-31T08:30:11Z
null
[]
null
[]
Implement Dataset as context manager
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2113/timeline
When used as a context manager, the dataset is safely deleted if an exception is raised inside the block. This avoids the chained error: > During handling of the above exception, another exception occurred:
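The intent can be sketched with the standard context-manager protocol. The class below is a hypothetical stand-in, not the PR's actual implementation (which adds `__enter__`/`__exit__` to `datasets.Dataset` itself so its resources are released deterministically):

```python
# Sketch of the context-manager protocol this PR adds to Dataset:
# __exit__ runs even when the with-block raises, so cleanup is guaranteed
# and cannot itself surface as a second, chained exception at teardown.
class ManagedDataset:
    def __init__(self, name):
        self.name = name
        self.closed = False

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.closed = True   # e.g. release table files / delete temp data
        return False         # do not swallow the original exception


try:
    with ManagedDataset("demo") as ds:
        raise RuntimeError("boom")
except RuntimeError:
    pass  # cleanup already happened in __exit__
```

Returning `False` from `__exit__` lets the original exception propagate while still guaranteeing the cleanup ran first.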
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2113.diff", "html_url": "https://github.com/huggingface/datasets/pull/2113", "merged_at": "2021-03-31T08:30:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/2113.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2113" }
841,191,303
https://api.github.com/repos/huggingface/datasets/issues/2113/comments
MDExOlB1bGxSZXF1ZXN0NjAwOTYxMDEz
null
2,113
https://api.github.com/repos/huggingface/datasets/issues/2113/events
true
closed
2021-03-25T16:24:17Z
null
https://api.github.com/repos/huggingface/datasets/issues/2112
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2112/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2112/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliaschalkidis", "id": 1626984, "login": "iliaschalkidis", "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "type": "User", "url": "https://api.github.com/users/iliaschalkidis" }
https://github.com/huggingface/datasets/pull/2112
[]
false
2021-03-25T18:39:31Z
2021-03-25T18:34:31Z
null
[]
null
[]
Support for legal NLP datasets (EURLEX and ECtHR cases)
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2112/timeline
Add support for two legal NLP datasets: - EURLEX (https://www.aclweb.org/anthology/P19-1636/) - ECtHR cases (https://arxiv.org/abs/2103.13084)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2112.diff", "html_url": "https://github.com/huggingface/datasets/pull/2112", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2112.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2112" }
841,098,008
https://api.github.com/repos/huggingface/datasets/issues/2112/comments
MDExOlB1bGxSZXF1ZXN0NjAwODgyMjA0
null
2,112
https://api.github.com/repos/huggingface/datasets/issues/2112/events
true
closed
2021-03-25T16:06:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/2111
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2111/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2111/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2111
[]
false
2021-04-06T07:20:43Z
2021-04-06T07:20:43Z
null
[ "I discussed with Patrick and I think we could have a nice addition: have a parameter `concatenate_texts` that, if `True`, uses the old implementation.\r\n\r\nBy default `concatenate_texts` would be `False`, so that sentences are evaluated independently, and to save resources (the WER computation has a quadratic complexity).\r\n\r\nSome users might still want to use the old implementation.", "@lhoestq @patrickvonplaten are you sure of the parameter name `concatenate_texts`? I was thinking about something like `iter`...", "Not sure about the name, if you can improve it feel free to do so ^^'\r\nThe old implementation computes the WER on the concatenation of all the input texts, while the new one makes WER measures computation independent for each reference/prediction pair.\r\nThat's why I thought of `concatenate_texts`", "@lhoestq yes, but the end user does not necessarily know the details of the implementation of the WER computation.\r\n\r\nFrom the end user perspective I think it might make more sense: how do you want to compute the metric?\r\n- all in once, more RAM memory needed?\r\n- iteratively, less RAM requirements?\r\n\r\nBecause of that I was thinking of something like `iter` or `iterative`...", "Personally like `concatenate_texts` better since I feel like `iter` or `iterate` are quite vague", "Therefore, you can merge... ;)", "Ok ! merging :)" ]
null
[]
Compute WER metric iteratively
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2111/timeline
Compute WER metric iteratively to avoid MemoryError. Fix #2078.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2111.diff", "html_url": "https://github.com/huggingface/datasets/pull/2111", "merged_at": "2021-04-06T07:20:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/2111.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2111" }
841,082,087
https://api.github.com/repos/huggingface/datasets/issues/2111/comments
MDExOlB1bGxSZXF1ZXN0NjAwODY4OTg5
null
2,111
https://api.github.com/repos/huggingface/datasets/issues/2111/events
true
closed
2021-03-25T10:39:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/2110
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2110/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2110/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2340721?v=4", "events_url": "https://api.github.com/users/dreamgonfly/events{/privacy}", "followers_url": "https://api.github.com/users/dreamgonfly/followers", "following_url": "https://api.github.com/users/dreamgonfly/following{/other_user}", "gists_url": "https://api.github.com/users/dreamgonfly/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dreamgonfly", "id": 2340721, "login": "dreamgonfly", "node_id": "MDQ6VXNlcjIzNDA3MjE=", "organizations_url": "https://api.github.com/users/dreamgonfly/orgs", "received_events_url": "https://api.github.com/users/dreamgonfly/received_events", "repos_url": "https://api.github.com/users/dreamgonfly/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dreamgonfly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dreamgonfly/subscriptions", "type": "User", "url": "https://api.github.com/users/dreamgonfly" }
https://github.com/huggingface/datasets/pull/2110
[]
false
2021-04-12T13:33:03Z
2021-04-12T13:33:03Z
null
[ "Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`\r\nSo unfortunately we can't use this assertion you suggested", "> Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`\r\n> So unfortunately we can't use this assertion you suggested\r\n\r\nThen it would be better to just remove the assertion, because the existing assertion does nothing." ]
null
[]
Fix incorrect assertion in builder.py
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2110/timeline
Fix incorrect num_examples comparison assertion in builder.py
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2110.diff", "html_url": "https://github.com/huggingface/datasets/pull/2110", "merged_at": "2021-04-12T13:33:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/2110.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2110" }
840,794,995
https://api.github.com/repos/huggingface/datasets/issues/2110/comments
MDExOlB1bGxSZXF1ZXN0NjAwNjI1NDQ5
null
2,110
https://api.github.com/repos/huggingface/datasets/issues/2110/events
true
closed
2021-03-25T09:41:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/2109
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2109/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2109/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2109
[]
false
2021-04-19T06:20:11Z
2021-04-19T06:20:11Z
null
[ "If you agree, I could also add a link to [Discussions](https://github.com/huggingface/datasets/discussions) in order to reinforce the use of Discussion to make Questions (instead of Issues).\r\n\r\nI could also add some other templates: Bug, Feature Request,...", "@theo-m we wrote our same comments at the same time... 😉 " ]
null
[]
Add more issue templates and customize issue template chooser
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2109/timeline
When opening an issue, it is not evident for users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but this is not very visible to users. This is the reason why many users end up choosing the `add-dataset` template instead (it is more visible) for issues that are not in fact requesting the addition of a new dataset. ~~With this PR, the default blank issue template would be as visible as the other templates (as the `add-dataset` template), thus making it easier for users to choose it.~~ With this PR: - more issue templates, besides `add-dataset`, are added: `bug-report` and `feature-request` - the issue template chooser is customized, so that it now includes a link to `Discussions` for questions
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2109.diff", "html_url": "https://github.com/huggingface/datasets/pull/2109", "merged_at": "2021-04-19T06:20:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2109.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2109" }
840,746,598
https://api.github.com/repos/huggingface/datasets/issues/2109/comments
MDExOlB1bGxSZXF1ZXN0NjAwNTg1MzM5
null
2,109
https://api.github.com/repos/huggingface/datasets/issues/2109/events
true
open
2021-03-24T21:32:16Z
null
https://api.github.com/repos/huggingface/datasets/issues/2108
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2108/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2108/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://github.com/huggingface/datasets/issues/2108
[]
false
2021-03-25T06:31:43Z
null
null
[]
null
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
Is there a way to use a GPU only when training an Index in the process of add_faiss_index?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2108/timeline
Motivation - Some FAISS indexes, like IVF, include a training step that clusters the dataset into a given number of partitions. It would be nice if we could use a GPU for the training step and convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6).
https://api.github.com/repos/huggingface/datasets
null
840,181,055
https://api.github.com/repos/huggingface/datasets/issues/2108/comments
MDU6SXNzdWU4NDAxODEwNTU=
null
2,108
https://api.github.com/repos/huggingface/datasets/issues/2108/events
false
closed
2021-03-24T08:52:41Z
null
https://api.github.com/repos/huggingface/datasets/issues/2107
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2107/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2107/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
https://github.com/huggingface/datasets/pull/2107
[ { "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" } ]
false
2021-04-26T08:27:14Z
2021-04-26T08:27:13Z
null
[ "> Also I was wondering this is really needed to have `utils.metadata` as a submodule of `datasets` ? This is only used by the CI so I'm not sure we should have this in the actual `datasets` package.\r\n\r\nI'm unclear on the suggestion, would you rather have a root-level `./metadata.py` file? I think it's well where it is, if anything we could move it out of utils and into `datasets` as it could be used by e.g. `DatasetDict` so that users can pull the metadata easily rather than have to reparse the readme.\r\n", "Ok that makes sense if we want to have functions that parse the metadata for users", "Hi @theo-m @lhoestq \r\n\r\nThis seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)\r\n\r\nSorry for the delay in responding.\r\n\r\nThanks,\r\nGunjan", "> Hi @theo-m @lhoestq\r\n> \r\n> This seems very interesting. Should I add the descriptions to the PR on `datasets-tagging`? Alternatively, I can also create a google-sheet/markdown table :)\r\n> \r\n> Sorry for the delay in responding.\r\n> \r\n> Thanks,\r\n> Gunjan\r\n\r\nHi @gchhablani, yes I think at the moment the best solution is for you to write in `datasets-tagging`, as the PR will allow us to discuss and review, even though the work will be ported to this repo in the end. \r\nOr we wait for this to be merged and you reopen the PR here, your call :)", "cc @abhi1thakur " ]
null
[]
Metadata validation
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2107/timeline
- `pydantic` metadata schema with dedicated validators against our taxonomy - CI script to validate new changes against this schema and start a virtuous loop - soft validation on task ids since we expect the taxonomy to undergo some changes in the near future for reference with the current validation we have ~365~ 378 datasets with invalid metadata! full error report [_here_.](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2107.diff", "html_url": "https://github.com/huggingface/datasets/pull/2107", "merged_at": "2021-04-26T08:27:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/2107.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2107" }
839,495,825
https://api.github.com/repos/huggingface/datasets/issues/2107/comments
MDExOlB1bGxSZXF1ZXN0NTk5NTAxODE5
null
2,107
https://api.github.com/repos/huggingface/datasets/issues/2107/events
true
open
2021-03-23T20:14:47Z
null
https://api.github.com/repos/huggingface/datasets/issues/2106
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2106/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2106/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22580542?v=4", "events_url": "https://api.github.com/users/trina731/events{/privacy}", "followers_url": "https://api.github.com/users/trina731/followers", "following_url": "https://api.github.com/users/trina731/following{/other_user}", "gists_url": "https://api.github.com/users/trina731/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/trina731", "id": 22580542, "login": "trina731", "node_id": "MDQ6VXNlcjIyNTgwNTQy", "organizations_url": "https://api.github.com/users/trina731/orgs", "received_events_url": "https://api.github.com/users/trina731/received_events", "repos_url": "https://api.github.com/users/trina731/repos", "site_admin": false, "starred_url": "https://api.github.com/users/trina731/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trina731/subscriptions", "type": "User", "url": "https://api.github.com/users/trina731" }
https://github.com/huggingface/datasets/issues/2106
[]
false
2021-03-25T21:36:20Z
null
null
[ "Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is only `kk` text and must be appended at the end of the `kk` text of the **previous** line\r\n- L1247 and L1248 are only `kk` texts and must be inserted at the **beginning** of the `kk` text of the next line\r\n- (and there are many others)\r\n\r\nIt would be nice to have a corrected version of this file ! The file is available in the `wmt/news-commentary` repository on the Datasets Hub here:\r\nhttps://huggingface.co/datasets/wmt/news-commentary/tree/main/v14/training\r\n\r\nThen maybe we can notify the WMT authors and host the corrected version somewhere" ]
null
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
WMT19 Dataset for Kazakh-English is not formatted correctly
NONE
https://api.github.com/repos/huggingface/datasets/issues/2106/timeline
In addition to the bug of languages being switched from Issue #415, there are incorrect translations in the dataset because the English-Kazakh translations have a one-off formatting error. The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here: > Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді. > > Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды. > > Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды. As you can see, line 95 has only the Kazakh translation which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be one off, rendering ALL of those translations incorrect. This issue was not fixed when the dataset was imported to Huggingface. By running this code ``` import datasets from datasets import load_dataset dataset = load_dataset('wmt19', 'kk-en') for key in dataset['train']['translation']: if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']: print(key['en']) print(key['kk']) break ``` we get: > 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды. > The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. which shows that the issue still persists in the Huggingface dataset. The Kazakh sentence matches up to the next English sentence in the dataset instead of the current one. Please let me know if you have any ideas to fix this one-off error in the dataset or if this can be fixed by Huggingface.
https://api.github.com/repos/huggingface/datasets
null
839,084,264
https://api.github.com/repos/huggingface/datasets/issues/2106/comments
MDU6SXNzdWU4MzkwODQyNjQ=
null
2,106
https://api.github.com/repos/huggingface/datasets/issues/2106/events
false
open
2021-03-23T19:43:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/2105
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2105/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2105/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/13603748?v=4", "events_url": "https://api.github.com/users/kyleclo/events{/privacy}", "followers_url": "https://api.github.com/users/kyleclo/followers", "following_url": "https://api.github.com/users/kyleclo/following{/other_user}", "gists_url": "https://api.github.com/users/kyleclo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kyleclo", "id": 13603748, "login": "kyleclo", "node_id": "MDQ6VXNlcjEzNjAzNzQ4", "organizations_url": "https://api.github.com/users/kyleclo/orgs", "received_events_url": "https://api.github.com/users/kyleclo/received_events", "repos_url": "https://api.github.com/users/kyleclo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kyleclo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyleclo/subscriptions", "type": "User", "url": "https://api.github.com/users/kyleclo" }
https://github.com/huggingface/datasets/issues/2105
[]
false
2021-08-04T19:18:02Z
null
null
[ "Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) \r\n\r\nUntil you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?", "Hi @kyleclo, as of today, you have not removed your bucket data yet, and therefore HuggingFace can download it from there.\r\n\r\nIs it OK? Are you planning to eventually delete it? Thank you.", "Hi! Sorry I missed @yjernite 's previous message, thanks for responding! \r\n\r\nIs there an option where we can keep our data in our bucket, but the HF script no longer pulls data from it? " ]
null
[]
Request to remove S2ORC dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2105/timeline
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
https://api.github.com/repos/huggingface/datasets
null
839,059,226
https://api.github.com/repos/huggingface/datasets/issues/2105/comments
MDU6SXNzdWU4MzkwNTkyMjY=
null
2,105
https://api.github.com/repos/huggingface/datasets/issues/2105/events
false
closed
2021-03-23T18:59:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2104
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2104/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2104/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/35391599?v=4", "events_url": "https://api.github.com/users/adityaarunsinghal/events{/privacy}", "followers_url": "https://api.github.com/users/adityaarunsinghal/followers", "following_url": "https://api.github.com/users/adityaarunsinghal/following{/other_user}", "gists_url": "https://api.github.com/users/adityaarunsinghal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adityaarunsinghal", "id": 35391599, "login": "adityaarunsinghal", "node_id": "MDQ6VXNlcjM1MzkxNTk5", "organizations_url": "https://api.github.com/users/adityaarunsinghal/orgs", "received_events_url": "https://api.github.com/users/adityaarunsinghal/received_events", "repos_url": "https://api.github.com/users/adityaarunsinghal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adityaarunsinghal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adityaarunsinghal/subscriptions", "type": "User", "url": "https://api.github.com/users/adityaarunsinghal" }
https://github.com/huggingface/datasets/issues/2104
[]
false
2022-03-30T08:22:58Z
2022-03-30T08:22:58Z
null
[ "Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`.\r\n\r\nTo use `wiki_movies`, please update `datasets` with\r\n```\r\npip install --upgrade datasets\r\n```", "Thanks a lot! That solved it and I was able to upload a model trained on it as well :)" ]
completed
[]
Trouble loading wiki_movies
NONE
https://api.github.com/repos/huggingface/datasets/issues/2104/timeline
Hello, I am trying to load_dataset("wiki_movies") and it gives me this error - `FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/wiki_movies/wiki_movies.py` Trying to do `python run_mlm.py \ --model_name_or_path roberta-base \ --dataset_name wiki_movies \` also gives the same error. Is this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago. Thank you!
https://api.github.com/repos/huggingface/datasets
null
839,027,834
https://api.github.com/repos/huggingface/datasets/issues/2104/comments
MDU6SXNzdWU4MzkwMjc4MzQ=
null
2,104
https://api.github.com/repos/huggingface/datasets/issues/2104/events
false
closed
2021-03-23T17:18:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/2103
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2103/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2103/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr" }
https://github.com/huggingface/datasets/issues/2103
[]
false
2021-04-06T14:39:59Z
2021-04-06T14:39:59Z
null
[ "Thanks for reporting :)\r\nMaybe we can concatenate fields only if they are different.\r\n\r\nCurrently this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196\r\n\r\nThis can be a good first contribution to the library.\r\nPlease comment if you'd like to improve this and open a PR :)" ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
NONE
https://api.github.com/repos/huggingface/datasets/issues/2103/timeline
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation. Example result: ``` "citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n ``` @lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times.
https://api.github.com/repos/huggingface/datasets
null
838,946,916
https://api.github.com/repos/huggingface/datasets/issues/2103/comments
MDU6SXNzdWU4Mzg5NDY5MTY=
null
2,103
https://api.github.com/repos/huggingface/datasets/issues/2103/events
false
closed
2021-03-23T14:35:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/2102
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2102/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2102/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2102
[]
false
2021-03-24T14:07:35Z
2021-03-24T14:07:34Z
null
[]
null
[ { "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior", "id": 2851292821, "name": "refactoring", "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring" } ]
Move Dataset.to_csv to csv module
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2102/timeline
Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2102.diff", "html_url": "https://github.com/huggingface/datasets/pull/2102", "merged_at": "2021-03-24T14:07:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2102.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2102" }
838,794,090
https://api.github.com/repos/huggingface/datasets/issues/2102/comments
MDExOlB1bGxSZXF1ZXN0NTk4OTEyNzUw
null
2,102
https://api.github.com/repos/huggingface/datasets/issues/2102/events
true
closed
2021-03-23T10:41:23Z
null
https://api.github.com/repos/huggingface/datasets/issues/2101
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2101/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2101/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "events_url": "https://api.github.com/users/eusip/events{/privacy}", "followers_url": "https://api.github.com/users/eusip/followers", "following_url": "https://api.github.com/users/eusip/following{/other_user}", "gists_url": "https://api.github.com/users/eusip/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eusip", "id": 1551356, "login": "eusip", "node_id": "MDQ6VXNlcjE1NTEzNTY=", "organizations_url": "https://api.github.com/users/eusip/orgs", "received_events_url": "https://api.github.com/users/eusip/received_events", "repos_url": "https://api.github.com/users/eusip/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eusip/subscriptions", "type": "User", "url": "https://api.github.com/users/eusip" }
https://github.com/huggingface/datasets/pull/2101
[]
false
2021-03-23T18:08:10Z
2021-03-23T18:08:10Z
null
[ "Hi !\r\nLooks like there's a unicode error in the new citation in the miam.py file.\r\nCould you try to fix it ? Not sure from which character it comes from though\r\n\r\nYou can test if it works on your side with\r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_miam\r\n```", "Unicode error resolved!" ]
null
[]
MIAM dataset - new citation details
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2101/timeline
Hi @lhoestq, I have updated the citations to reference an OpenReview preprint.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2101.diff", "html_url": "https://github.com/huggingface/datasets/pull/2101", "merged_at": "2021-03-23T18:08:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2101.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2101" }
838,586,184
https://api.github.com/repos/huggingface/datasets/issues/2101/comments
MDExOlB1bGxSZXF1ZXN0NTk4NzQzMDM4
null
2,101
https://api.github.com/repos/huggingface/datasets/issues/2101/events
true
closed
2021-03-23T10:27:52Z
null
https://api.github.com/repos/huggingface/datasets/issues/2100
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2100/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2100/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2100
[]
false
2021-03-24T08:19:41Z
2021-03-23T18:03:49Z
null
[ "I have a question: what about `dictionary_encode_column_`?\r\n- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.\r\n- It is NOT deprecated in DatasetDict.", "`dictionary_encode_column_ ` should be deprecated since it never worked correctly. It will be removed in a major release.\r\nThis has to be deprecated in `DatasetDict` as well.\r\nAnd `Dataset.dictionary_encode_column` doesn't exist indeed.", "Thanks @lhoestq. I have fixed deprecated for `dictionary_encode_column_`." ]
null
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
Fix deprecated warning message and docstring
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2100/timeline
Fix deprecated warnings: - Use deprecated Sphinx directive in docstring - Fix format of deprecated message - Raise FutureWarning
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2100.diff", "html_url": "https://github.com/huggingface/datasets/pull/2100", "merged_at": "2021-03-23T18:03:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/2100.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2100" }
838,574,631
https://api.github.com/repos/huggingface/datasets/issues/2100/comments
MDExOlB1bGxSZXF1ZXN0NTk4NzMzOTM0
null
2,100
https://api.github.com/repos/huggingface/datasets/issues/2100/events
true
closed
2021-03-23T09:28:37Z
null
https://api.github.com/repos/huggingface/datasets/issues/2099
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr" }
https://github.com/huggingface/datasets/issues/2099
[]
false
2021-03-23T17:12:16Z
2021-03-23T17:12:16Z
null
[ "Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?", "It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a list of integers. The `text` column is removed during tokenization.\r\n\r\n```\r\ndef add_len_and_seq(example):\r\n end_idx = example['input_ids'].index(SEP)\r\n example['actual_len'] = end_idx-1\r\n seq_len = len(example['input_ids'])\r\n \r\n\r\n example['seq'] = [PAD_ID] + [np.uint8(example['some_integer'])]*(end_idx-1) + [PAD_ID]*(seq_len-end_idx)\r\n \r\n return example\r\n```\r\n", "Is `PAD_ID` a python integer ? You need all the integers in `example['seq']` to have the same type.\r\nDoes this work if you remove the `np.uint8` and use python integers instead ?", "yup I casted it to `np.uint8` outside the function where it was defined. It was originally using python integers.", "Strangely, even when I manually created `np.arrays` of specific `dtypes`, the types in the final `dataset_info.json` that gets written are still `int64`.\r\n\r\nUpdate: I tried creating lists of `int8`s and got the same result.", "Yes this is a known issue: #625 \r\nWe're working on making the precision kept for numpy :)\r\nTo specify the precision of the integers, currently one needs to specify the output features with `.map(..., features=output_features)`", "Do you know what step is taking forever in the code ?\r\nWhat happens if you interrupt the execution of the dataset loading ?", "After a synchronous discussion, we found that the cache file sizes have an enormous effect on the loading speed: smaller cache files result in faster load times. `num_proc` controls the number of cache files that are being written and is inversely proportional to the individual file size. \r\nIn other words, increase `num_proc` for smaller cache files :)\r\n\r\nMaybe this can be highlighted somewhere in the docs." ]
completed
[]
load_from_disk takes a long time to load local dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2099/timeline
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though). Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers? Tagging @lhoestq since you seem to be working on these issues and PRs :)
https://api.github.com/repos/huggingface/datasets
null
838,523,819
https://api.github.com/repos/huggingface/datasets/issues/2099/comments
MDU6SXNzdWU4Mzg1MjM4MTk=
null
2,099
https://api.github.com/repos/huggingface/datasets/issues/2099/events
false
closed
2021-03-23T07:47:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2098
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2098/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2098/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4", "events_url": "https://api.github.com/users/h-peng17/events{/privacy}", "followers_url": "https://api.github.com/users/h-peng17/followers", "following_url": "https://api.github.com/users/h-peng17/following{/other_user}", "gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/h-peng17", "id": 39556019, "login": "h-peng17", "node_id": "MDQ6VXNlcjM5NTU2MDE5", "organizations_url": "https://api.github.com/users/h-peng17/orgs", "received_events_url": "https://api.github.com/users/h-peng17/received_events", "repos_url": "https://api.github.com/users/h-peng17/repos", "site_admin": false, "starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions", "type": "User", "url": "https://api.github.com/users/h-peng17" }
https://github.com/huggingface/datasets/issues/2098
[]
false
2021-03-26T09:48:54Z
2021-03-26T09:48:54Z
null
[ "Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/squad/squad.py#L50-L55", "Got it. Thank you~" ]
completed
[]
SQuAD version
NONE
https://api.github.com/repos/huggingface/datasets/issues/2098/timeline
Hi~ I want train on squad dataset. What's the version of the squad? Is it 1.1 or 1.0? I'm new in QA, I don't find some descriptions about it.
https://api.github.com/repos/huggingface/datasets
null
838,447,959
https://api.github.com/repos/huggingface/datasets/issues/2098/comments
MDU6SXNzdWU4Mzg0NDc5NTk=
null
2,098
https://api.github.com/repos/huggingface/datasets/issues/2098/events
false
closed
2021-03-22T21:00:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/2097
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2097/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2097/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dcfidalgo", "id": 15979778, "login": "dcfidalgo", "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "organizations_url": "https://api.github.com/users/dcfidalgo/orgs", "received_events_url": "https://api.github.com/users/dcfidalgo/received_events", "repos_url": "https://api.github.com/users/dcfidalgo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions", "type": "User", "url": "https://api.github.com/users/dcfidalgo" }
https://github.com/huggingface/datasets/pull/2097
[]
false
2021-03-22T21:01:11Z
2021-03-22T21:01:11Z
null
[]
null
[]
fixes issue #1110 by descending further if `obj["_type"]` is a dict
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2097/timeline
Check metrics
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2097.diff", "html_url": "https://github.com/huggingface/datasets/pull/2097", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2097.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2097" }
838,105,289
https://api.github.com/repos/huggingface/datasets/issues/2097/comments
MDExOlB1bGxSZXF1ZXN0NTk4MzM4MTA3
null
2,097
https://api.github.com/repos/huggingface/datasets/issues/2097/events
true
closed
2021-03-22T19:23:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/2096
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2096/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4", "events_url": "https://api.github.com/users/rxian/events{/privacy}", "followers_url": "https://api.github.com/users/rxian/followers", "following_url": "https://api.github.com/users/rxian/following{/other_user}", "gists_url": "https://api.github.com/users/rxian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rxian", "id": 8406802, "login": "rxian", "node_id": "MDQ6VXNlcjg0MDY4MDI=", "organizations_url": "https://api.github.com/users/rxian/orgs", "received_events_url": "https://api.github.com/users/rxian/received_events", "repos_url": "https://api.github.com/users/rxian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rxian/subscriptions", "type": "User", "url": "https://api.github.com/users/rxian" }
https://github.com/huggingface/datasets/issues/2096
[]
false
2023-07-25T16:49:07Z
2023-07-25T16:49:07Z
null
[ "Hello. I've been looking for information about German Conll2003 and found your question. Official site (https://www.clips.uantwerpen.be/conll2003/ner/) mentions that organizers provide only annotation. German texts (ECI Multilingual Text Corpus) are not freely available and can be ordered from the Linguistic Data Consortium.\r\n\r\nBut maybe something has changed since 2003.", "You can find the reason for not including the German data here: https://github.com/huggingface/datasets/issues/4230." ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
CoNLL 2003 dataset not including German
NONE
https://api.github.com/repos/huggingface/datasets/issues/2096/timeline
Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with! I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it could be found in some places on the internet such as GitHub? I could help adding the German data to the hub, unless there are some copyright issues that I am unaware of... This is considering that many work use the union of CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`. E.g., [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf). ## Adding a Dataset - **Name:** CoNLL 2003 German - **Paper:** https://www.aclweb.org/anthology/W03-0419/ - **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
https://api.github.com/repos/huggingface/datasets
null
838,038,379
https://api.github.com/repos/huggingface/datasets/issues/2096/comments
MDU6SXNzdWU4MzgwMzgzNzk=
null
2,096
https://api.github.com/repos/huggingface/datasets/issues/2096/events
false
closed
2021-03-21T23:21:57Z
null
https://api.github.com/repos/huggingface/datasets/issues/2093
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2093/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2093/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15979778?v=4", "events_url": "https://api.github.com/users/dcfidalgo/events{/privacy}", "followers_url": "https://api.github.com/users/dcfidalgo/followers", "following_url": "https://api.github.com/users/dcfidalgo/following{/other_user}", "gists_url": "https://api.github.com/users/dcfidalgo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dcfidalgo", "id": 15979778, "login": "dcfidalgo", "node_id": "MDQ6VXNlcjE1OTc5Nzc4", "organizations_url": "https://api.github.com/users/dcfidalgo/orgs", "received_events_url": "https://api.github.com/users/dcfidalgo/received_events", "repos_url": "https://api.github.com/users/dcfidalgo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dcfidalgo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dcfidalgo/subscriptions", "type": "User", "url": "https://api.github.com/users/dcfidalgo" }
https://github.com/huggingface/datasets/pull/2093
[]
false
2021-03-25T14:35:54Z
2021-03-25T14:35:54Z
null
[ "Nice thank you !\r\nThis looks like a pretty simple yet effective fix ;)\r\nCould you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?\r\n```python\r\nfrom datasets import Features, Value\r\n\r\n# We usually use `asdict` on a `DatasetInfo` object which is a dataclass instance that contains the features.\r\n# So we need the conversion of features to dict to work.\r\n# You can test that using `dataclasses._asdict_inner`.\r\n# This is the function used by `dataclasses.asdict` to convert a dataclass instance attribute to a dict\r\nfrom dataclasses import _asdict_inner \r\n\r\nf = Features({\"_type\": Value(\"string\")})\r\nreloaded_f = Features.from_dict(_asdict_inner(f, dict))\r\nassert reloaded_f == f\r\n```", "Sure, i will add a test. \r\nOne question: are the posted benchmarks reliable? The extra type check seems to add quite some overhead judging by the relative differences. Do you think this is an issue?", "The benchmark has a bit of noise, the values are fine ;)\r\nespecially in the change you did since the overhead added is negligible.", "Ok, i added the test you described above. \r\n\r\nI avoided importing the private `_asdict_inner` method and directly used the `DatasetInfo` class, if this is ok with you. Thanks a lot for your support during this PR!" ]
null
[]
Fix: Allows a feature to be named "_type"
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2093/timeline
This PR tries to fix issue #1110. Sorry for taking so long to come back to this. It's a simple fix, but i am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2093.diff", "html_url": "https://github.com/huggingface/datasets/pull/2093", "merged_at": "2021-03-25T14:35:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/2093.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2093" }
837,209,211
https://api.github.com/repos/huggingface/datasets/issues/2093/comments
MDExOlB1bGxSZXF1ZXN0NTk3NTgyNjUx
null
2,093
https://api.github.com/repos/huggingface/datasets/issues/2093/events
true
closed
2021-03-21T04:50:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/2092
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4", "events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}", "followers_url": "https://api.github.com/users/Jeevesh8/followers", "following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}", "gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jeevesh8", "id": 48825663, "login": "Jeevesh8", "node_id": "MDQ6VXNlcjQ4ODI1NjYz", "organizations_url": "https://api.github.com/users/Jeevesh8/orgs", "received_events_url": "https://api.github.com/users/Jeevesh8/received_events", "repos_url": "https://api.github.com/users/Jeevesh8/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions", "type": "User", "url": "https://api.github.com/users/Jeevesh8" }
https://github.com/huggingface/datasets/issues/2092
[]
false
2022-06-01T16:49:52Z
2022-06-01T16:49:52Z
null
[ "Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do you think about this ?\r\n\r\nIf you have ideas or suggestions of what you expect from such features as a user, feel free to share them, this is really valuable to us !", "People mainly want this feature either because it takes too much time too make arrow tables, or they occupy too much memory on the disk. I think both the problem can be solved if we provide arrow tables themselves on datasets hub. Can we do this currently @lhoestq ? \r\n", "@lhoestq I think the ```try_from_hf_gcs``` provide the same functionality. What all datasets are available on HF GCS? Are all the datasets on huggingFace datasets hub are made available on GCS, automatically?", "Only datasets like wikipedia, wiki40b, wiki_dpr and natural questions are available already processed on the HF google storage. This is used to download directly the arrow file instead of building it from the original data files.", "@lhoestq How can we make sure that the data we upload on HuggingFace hub is available in form of preprocessed arrow files ?", "We're still working on this :) This will be available soon\r\nUsers will be able to put their processed arrow files on the Hub", "Hi! You can now use `Dataset.push_to_hub` to store preprocessed files on the Hub.\r\n\r\nAnd to avoid downloading preprocessed files, you can use streaming by setting `streaming=True` in `load_dataset`." ]
completed
[]
How to disable making arrow tables in load_dataset ?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2092/timeline
Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ?
https://api.github.com/repos/huggingface/datasets
null
836,984,043
https://api.github.com/repos/huggingface/datasets/issues/2092/comments
MDU6SXNzdWU4MzY5ODQwNDM=
null
2,092
https://api.github.com/repos/huggingface/datasets/issues/2092/events
false
closed
2021-03-20T15:08:22Z
null
https://api.github.com/repos/huggingface/datasets/issues/2091
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2091/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2091/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2091
[]
false
2021-03-24T08:20:50Z
2021-03-23T17:18:31Z
null
[]
null
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
Fix copy snippet in docs
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2091/timeline
With this change the lines starting with `...` in the code blocks can be properly copied to clipboard.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2091.diff", "html_url": "https://github.com/huggingface/datasets/pull/2091", "merged_at": "2021-03-23T17:18:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/2091.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2091" }
836,831,403
https://api.github.com/repos/huggingface/datasets/issues/2091/comments
MDExOlB1bGxSZXF1ZXN0NTk3Mjk4ODI3
null
2,091
https://api.github.com/repos/huggingface/datasets/issues/2091/events
true
closed
2021-03-20T13:28:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/2090
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2090/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2090/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PhilipMay", "id": 229382, "login": "PhilipMay", "node_id": "MDQ6VXNlcjIyOTM4Mg==", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "repos_url": "https://api.github.com/users/PhilipMay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "type": "User", "url": "https://api.github.com/users/PhilipMay" }
https://github.com/huggingface/datasets/pull/2090
[]
false
2021-03-29T13:24:42Z
2021-03-29T13:00:15Z
null
[ "Hello dear maintainer, are there any comments or questions about this PR?", "@iamollas thanks for the feedback. I did not see the template.\r\nI improved it...", "Should be clean for merge IMO.", "@lhoestq CI is green. ;-)", "Thanks again ! this is awesome :)", "Thanks for merging. :-)" ]
null
[]
Add machine translated multilingual STS benchmark dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2090/timeline
also see here https://github.com/PhilipMay/stsb-multi-mt
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2090.diff", "html_url": "https://github.com/huggingface/datasets/pull/2090", "merged_at": "2021-03-29T13:00:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/2090.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2090" }
836,807,498
https://api.github.com/repos/huggingface/datasets/issues/2090/comments
MDExOlB1bGxSZXF1ZXN0NTk3MjgwNTEy
null
2,090
https://api.github.com/repos/huggingface/datasets/issues/2090/events
true
closed
2021-03-20T11:44:38Z
null
https://api.github.com/repos/huggingface/datasets/issues/2089
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PhilipMay", "id": 229382, "login": "PhilipMay", "node_id": "MDQ6VXNlcjIyOTM4Mg==", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "repos_url": "https://api.github.com/users/PhilipMay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "type": "User", "url": "https://api.github.com/users/PhilipMay" }
https://github.com/huggingface/datasets/issues/2089
[]
false
2023-07-25T16:45:38Z
2023-07-25T16:45:37Z
null
[ "Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a tag that doesn't exist (for example for a custom license) you must make it start with `other-` and then a custom tag name.\r\n\r\nedit (@theo-m) if you ever find yourself resorting to adding an `other-*` tag, please do ping us somewhere so we can think about adding it to the \"official\" list :)", "@lhoestq hmm - ok thanks for the answer.\r\nTo be honest I am not sure if this issue can be closed now.\r\nI just wanted to point out that this should either be documented or linked in the documentation.\r\nIf you feel like it is (will be) please just close this.", "We're still working on the validation+documentation in this.\r\nFeel free to keep this issue open till we've added them", "@lhoestq what is the status on this? Did you add documentation?", "Hi ! There's the tagging app at https://huggingface.co/datasets/tagging/ that you can use.\r\nIt shows the list of all the tags you can use.\r\n\r\nIt is based on all the tag sets defined in this folder:\r\nhttps://github.com/huggingface/datasets/tree/master/src/datasets/utils/resources", "@lhoestq is there something like this form Models?", "I don't think so. Feel free to take a look at the tags of other models (example [here](https://huggingface.co/bert-base-uncased/blob/main/README.md)). But we should definitely have some docs or an app to write the tags. Feel free to open an issue in the `transformers` repo or in the `huggingface_hub` repo so we can discuss this", "When modifying a README file, the Hub now displays a special UI with allowed values (see https://huggingface.co/docs/datasets/main/en/upload_dataset#create-a-dataset-card)." ]
completed
[]
Add documentation for dataset README.md files
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2089/timeline
Hi, the dataset README files have special headers. Somehow a documentation of the allowed values and tags is missing. Could you add that? Just to give some concrete questions that should be answered imo: - which values can be passed to multilinguality? - what should be passed to language_creators? - which values should licenses have? What do I say when it is a custom license? Should I add a link? - how should I choose size_categories? What are valid ranges? - what are valid task_categories? Thanks Philip
https://api.github.com/repos/huggingface/datasets
null
836,788,019
https://api.github.com/repos/huggingface/datasets/issues/2089/comments
MDU6SXNzdWU4MzY3ODgwMTk=
null
2,089
https://api.github.com/repos/huggingface/datasets/issues/2089/events
false
closed
2021-03-20T09:23:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/2088
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2088/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2088/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PhilipMay", "id": 229382, "login": "PhilipMay", "node_id": "MDQ6VXNlcjIyOTM4Mg==", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "repos_url": "https://api.github.com/users/PhilipMay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "type": "User", "url": "https://api.github.com/users/PhilipMay" }
https://github.com/huggingface/datasets/pull/2088
[]
false
2021-03-23T15:40:12Z
2021-03-23T15:40:12Z
null
[ "Trailing whitespace was removed. So more changes in diff than just this fix." ]
null
[]
change bibtex template to author instead of authors
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2088/timeline
Hi, IMO when using BibTex Author should be used instead of Authors. See here: http://www.bibtex.org/Using/de/ Thanks Philip
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2088.diff", "html_url": "https://github.com/huggingface/datasets/pull/2088", "merged_at": "2021-03-23T15:40:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/2088.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2088" }
836,763,733
https://api.github.com/repos/huggingface/datasets/issues/2088/comments
MDExOlB1bGxSZXF1ZXN0NTk3MjQ4Mzk1
null
2,088
https://api.github.com/repos/huggingface/datasets/issues/2088/events
true
closed
2021-03-20T02:05:23Z
null
https://api.github.com/repos/huggingface/datasets/issues/2087
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 1, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2087/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2087/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2087
[]
false
2021-04-09T09:25:33Z
2021-04-09T09:25:33Z
null
[ "@lhoestq I'll try to add a test later if you think this approach with the wrapper is good.", "Awesome thank you !\r\nYes this approach with a wrapper is good :)", "@lhoestq Added a test. To verify that this change fixes the problem, replace:\r\n```\r\n!pip install datasets==1.5\r\n```\r\nwith:\r\n```\r\n!pip install git+https://github.com/mariosasko/datasets-1.git@update-metadata\r\n```\r\nin the first cell of the notebook that is attached to the linked issue.\r\n\r\nThe CI failure is unrelated I think (building the docs locally doesn't throw an error).", "The CI fail for the docs has been fixed on master.\r\nMerging :)" ]
null
[]
Update metadata if dataset features are modified
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2087/timeline
This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features. Fixes #2083
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2087.diff", "html_url": "https://github.com/huggingface/datasets/pull/2087", "merged_at": "2021-04-09T09:25:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2087.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2087" }
836,587,392
https://api.github.com/repos/huggingface/datasets/issues/2087/comments
MDExOlB1bGxSZXF1ZXN0NTk3MDg4NTk2
null
2,087
https://api.github.com/repos/huggingface/datasets/issues/2087/events
true
closed
2021-03-19T18:14:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/2086
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2086/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2086/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://github.com/huggingface/datasets/pull/2086
[]
false
2021-03-24T13:59:04Z
2021-03-24T13:59:04Z
null
[ "I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. " ]
null
[]
change user permissions to -rw-r--r--
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2086/timeline
Fix for #2065
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2086.diff", "html_url": "https://github.com/huggingface/datasets/pull/2086", "merged_at": "2021-03-24T13:59:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/2086.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2086" }
836,249,587
https://api.github.com/repos/huggingface/datasets/issues/2086/comments
MDExOlB1bGxSZXF1ZXN0NTk2Nzg0Mjcz
null
2,086
https://api.github.com/repos/huggingface/datasets/issues/2086/events
true
closed
2021-03-19T11:22:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/2085
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2085/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2085/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2085
[]
false
2021-03-23T15:36:38Z
2021-03-23T15:36:37Z
null
[]
null
[]
Fix max_wait_time in requests
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2085/timeline
it was handled as a min time, not max cc @SBrandeis
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2085.diff", "html_url": "https://github.com/huggingface/datasets/pull/2085", "merged_at": "2021-03-23T15:36:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2085.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2085" }
835,870,994
https://api.github.com/repos/huggingface/datasets/issues/2085/comments
MDExOlB1bGxSZXF1ZXN0NTk2NDYyOTc2
null
2,085
https://api.github.com/repos/huggingface/datasets/issues/2085/events
true
closed
2021-03-19T09:27:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/2084
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
https://github.com/huggingface/datasets/issues/2084
[]
false
2021-04-16T08:50:44Z
2021-04-16T08:50:44Z
null
[ "+1 on this request" ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
CUAD - Contract Understanding Atticus Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2084/timeline
## Adding a Dataset - **Name:** CUAD - Contract Understanding Atticus Dataset - **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community. - **Paper:** https://arxiv.org/abs/2103.06268 - **Data:** https://github.com/TheAtticusProject/cuad/ - **Motivation:** good domain specific datasets are valuable Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
https://api.github.com/repos/huggingface/datasets
null
835,750,671
https://api.github.com/repos/huggingface/datasets/issues/2084/comments
MDU6SXNzdWU4MzU3NTA2NzE=
null
2,084
https://api.github.com/repos/huggingface/datasets/issues/2084/events
false
closed
2021-03-19T08:29:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/2083
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2083/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/issues/2083
[]
false
2021-04-09T09:25:33Z
2021-04-09T09:25:33Z
null
[ "Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age', 'gender', 'accent', 'locale', 'segment'])\r\n\r\n``` \r\nThe order is important because the resulting dataset inherits the schema metadata of the first dataset passed to the `concatenate_datasets(...)` function (`pa.concat_tables` [docs](https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html)). I'll try to fix this ASAP." ]
completed
[]
`concatenate_datasets` throws error when changing the order of datasets to concatenate
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2083/timeline
Hey, I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets and noticed that when the order in which the datasets are concatenated changes an error is thrown where it should not IMO. Here is a google colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing
https://api.github.com/repos/huggingface/datasets
null
835,695,425
https://api.github.com/repos/huggingface/datasets/issues/2083/comments
MDU6SXNzdWU4MzU2OTU0MjU=
null
2,083
https://api.github.com/repos/huggingface/datasets/issues/2083/events
false
closed
2021-03-19T00:39:38Z
null
https://api.github.com/repos/huggingface/datasets/issues/2082
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2082/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2082/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}", "followers_url": "https://api.github.com/users/mcmillanmajora/followers", "following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}", "gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mcmillanmajora", "id": 26722925, "login": "mcmillanmajora", "node_id": "MDQ6VXNlcjI2NzIyOTI1", "organizations_url": "https://api.github.com/users/mcmillanmajora/orgs", "received_events_url": "https://api.github.com/users/mcmillanmajora/received_events", "repos_url": "https://api.github.com/users/mcmillanmajora/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions", "type": "User", "url": "https://api.github.com/users/mcmillanmajora" }
https://github.com/huggingface/datasets/pull/2082
[]
false
2021-03-19T14:29:09Z
2021-03-19T14:29:09Z
null
[]
null
[]
Updated card using information from data statement and datasheet
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2082/timeline
I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated some of the tables in the paper to show relevant dataset statistics. I'll email Eleftheria to see if she has any comments on the card.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2082.diff", "html_url": "https://github.com/huggingface/datasets/pull/2082", "merged_at": "2021-03-19T14:29:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2082.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2082" }
835,401,555
https://api.github.com/repos/huggingface/datasets/issues/2082/comments
MDExOlB1bGxSZXF1ZXN0NTk2MDY1NTM0
null
2,082
https://api.github.com/repos/huggingface/datasets/issues/2082/events
true
closed
2021-03-18T18:11:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/2081
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2081/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2081/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2081
[]
false
2021-04-07T14:37:43Z
2021-04-07T14:37:43Z
null
[]
null
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
Fix docstrings issues
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2081/timeline
Fix docstring issues.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2081.diff", "html_url": "https://github.com/huggingface/datasets/pull/2081", "merged_at": "2021-04-07T14:37:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/2081.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2081" }
835,112,968
https://api.github.com/repos/huggingface/datasets/issues/2081/comments
MDExOlB1bGxSZXF1ZXN0NTk1ODE3OTM4
null
2,081
https://api.github.com/repos/huggingface/datasets/issues/2081/events
true
closed
2021-03-18T16:29:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/2080
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2080/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2080/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4", "events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}", "followers_url": "https://api.github.com/users/vermouthmjl/followers", "following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}", "gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vermouthmjl", "id": 3142085, "login": "vermouthmjl", "node_id": "MDQ6VXNlcjMxNDIwODU=", "organizations_url": "https://api.github.com/users/vermouthmjl/orgs", "received_events_url": "https://api.github.com/users/vermouthmjl/received_events", "repos_url": "https://api.github.com/users/vermouthmjl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions", "type": "User", "url": "https://api.github.com/users/vermouthmjl" }
https://github.com/huggingface/datasets/issues/2080
[]
false
2021-03-25T12:46:53Z
2021-03-25T12:46:53Z
null
[ "Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset)\r\n```\r\n\r\nThis will work but to use it with the torch formatter you must specify the `Array2D` feature type in order to tell the shape:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])\r\n ],\r\n 'input_ids': [1, 2, 3, 4]\r\n}\r\ndataset = Dataset.from_dict(dataset, features=Features({\r\n \"bbox\": Array2D(shape=(3, 4), dtype=\"int64\"),\r\n \"input_ids\": Value(\"int64\")\r\n}))\r\ndataset.set_format(\"torch\")\r\nprint(dataset[0]['bbox'])\r\n# tensor([[1, 2, 3, 4],\r\n# [1, 2, 3, 4],\r\n# [1, 2, 3, 4]])\r\n```\r\nIf you don't specify the `Array2D` feature type, then the inferred type will be Sequence(Sequence(Value(\"int64\"))) and therefore the torch formatter will return list of tensors", "Thanks for the explanation. \r\nWith my original DataFrame, I did\r\n```\r\ndataset = dataset.to_dict(\"list\")\r\n```\r\nand then the rest of the transformation from dictionary works just fine." ]
completed
[]
Multidimensional arrays in a Dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2080/timeline
Hi, I'm trying to put together a `datasets.Dataset` to be used with LayoutLM which is available in `transformers`. This model requires as input the bounding boxes of each of the token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row. The following code results in conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`) ``` from datasets import Dataset import pandas as pd import numpy as np dataset = pd.DataFrame({ 'bbox': [ np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]), np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]) ], 'input_ids': [1, 2, 3, 4] }) dataset = Dataset.from_pandas(dataset) ``` Since I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put in a column of 2-D pytorch tensor in a formatted dataset, but I can only have a list of 1-D tensors, or a list of arrays, or a list of lists. ``` import torch from datasets import Dataset import pandas as pd dataset = pd.DataFrame({ 'bbox': [ [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]], [[1,2,3,4],[1,2,3,4],[1,2,3,4]] ], 'input_ids': [1, 2, 3, 4] }) dataset = Dataset.from_pandas(dataset) def test(examples): return {'bbbox': torch.Tensor(examples['bbox'])} dataset = dataset.map(test) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) def test2(examples): return {'bbbox': torch.stack(examples['bbox'])} dataset = dataset.map(test2) print(dataset[0]['bbox']) print(dataset[0]['bbbox']) ``` Is is possible to support n-D arrays/tensors in datasets? 
It seems that it can also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263).
https://api.github.com/repos/huggingface/datasets
null
835,023,000
https://api.github.com/repos/huggingface/datasets/issues/2080/comments
MDU6SXNzdWU4MzUwMjMwMDA=
null
2,080
https://api.github.com/repos/huggingface/datasets/issues/2080/events
false
closed
2021-03-18T15:05:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/2079
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2079/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2079/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2079
[]
false
2021-03-23T15:31:44Z
2021-03-23T15:31:44Z
null
[]
null
[]
Refactorize Metric.compute signature to force keyword arguments only
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2079/timeline
Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2079.diff", "html_url": "https://github.com/huggingface/datasets/pull/2079", "merged_at": "2021-03-23T15:31:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2079.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2079" }
834,920,493
https://api.github.com/repos/huggingface/datasets/issues/2079/comments
MDExOlB1bGxSZXF1ZXN0NTk1NjU2MDQ5
null
2,079
https://api.github.com/repos/huggingface/datasets/issues/2079/events
true
closed
2021-03-18T11:30:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/2078
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4", "events_url": "https://api.github.com/users/diego-fustes/events{/privacy}", "followers_url": "https://api.github.com/users/diego-fustes/followers", "following_url": "https://api.github.com/users/diego-fustes/following{/other_user}", "gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/diego-fustes", "id": 5707233, "login": "diego-fustes", "node_id": "MDQ6VXNlcjU3MDcyMzM=", "organizations_url": "https://api.github.com/users/diego-fustes/orgs", "received_events_url": "https://api.github.com/users/diego-fustes/received_events", "repos_url": "https://api.github.com/users/diego-fustes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions", "type": "User", "url": "https://api.github.com/users/diego-fustes" }
https://github.com/huggingface/datasets/issues/2078
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2021-05-01T08:31:49Z
2021-04-06T07:20:43Z
null
[ "Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compute the WER is defined here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/metrics/wer/wer.py#L93-L94", "Hi,\r\n\r\nI've just pushed a pull request that is related to this issue https://github.com/huggingface/datasets/pull/2169. It's not iterative, but it should avoid memory errors. It's based on the editdistance python library. An iterative implementation should be as easy as storing scores and words stepwise and dividing at the end. ", "I see, this was solved by other thread. Ok, let me know if you want to switch the implementation for any reason :)", "Thanks for diving into this anyway ^^'\r\nAs you said this actually got solved a few days ago", "Someone created an issue https://github.com/jitsi/jiwer/issues/40 at jiwer which shows that this is still a problem in the current version. Would be curious to figure out how this can be fixed by jiwer... :) I assume that it runs of out memory because it's trying to compute the WER over (too many) test samples?", "Hi !\r\n\r\nIt's computed iteratively so not sure what could go wrong\r\n\r\nhttps://github.com/huggingface/datasets/blob/8afd0ba8c27800a55ea69d9fcd702dc97d9c16d8/metrics/wer/wer.py#L100-L106\r\n\r\n@NiklasHoltmeyer what version of `datasets` are you running ?\r\n", "One possible explanation might be that it is the user who is passing all the sentences in a single element to `wer.compute`?\r\n\r\nAs current implementation iterates over the elements of `predictions` and `references`, this can be problematic if `predictions` and `references` contain a single huge element each. 
\r\n\r\nThis could be the case, for example, with a single string with all sentences:\r\n```python\r\nresult[\"predicted\"] = \"One sentence. Other sentence.\"\r\n```\r\nor with a __double__ nested list of sentence lists\r\n```python\r\nresult[\"predicted\"] = [[ [\"One sentence.\"], [\"Other sentence\"] ]]\r\n```\r\n\r\nThe user should check the dimensions of the data structure passed to `predictions` and `references`.", "Hi all,\r\n\r\nin my case I was using and older version of datasets and, as @albertvillanova points out, passing the full list of sentences for the metric calculation. The problem was in the way jiwer implements WER, as it tries to compute WER for the full list at once instead of doing it element-wise. I think that with the latest implementation of datasets, or by using the alternative WER function that I've contributed on this [pull request](https://github.com/huggingface/datasets/pull/2169) there shouldn't be memory errors.", "@lhoestq i was using Datasets==1.5.0 with 1.6.1 it worked (atleast the first run) but 1.5.0 is not compatible with my preprocessing. 
i cant save my dataset to a parquet file while using the latest datasets version\r\n\r\n-> \r\n```\r\n File \"../preprocess_dataset.py\", line 132, in <module>\r\n pq.write_table(train_dataset.data, f'{resampled_data_dir}/{data_args.dataset_config_name}.train.parquet')\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 1674, in write_table\r\n writer.write_table(table, row_group_size=row_group_size)\r\n File \"/usr/local/lib/python3.8/dist-packages/pyarrow/parquet.py\", line 588, in write_table\r\n self.writer.write_table(table, row_group_size=row_group_size)\r\nTypeError: Argument 'table' has incorrect type (expected pyarrow.lib.Table, got ConcatenationTable)\r\n``` \r\n\r\nif i do \r\n```\r\nimport pyarrow.parquet as pq\r\n...\r\n...\r\npq.write_table(train_dataset.data, 'train.parquet')\r\npq.write_table(eval_dataset.data, 'eval.parquet')\r\n```\r\n\r\nwhile using 1.6.1. and its working with 1.5.0\r\n", "Hi ! You can pass dataset.data.table instead of dataset.data to pq.write_table", "This seems to be working so far! Thanks!" ]
completed
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
MemoryError when computing WER metric
NONE
https://api.github.com/repos/huggingface/datasets/issues/2078/timeline
Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation: ``` wer = load_metric("wer") print(wer.compute(predictions=result["predicted"], references=result["target"])) ``` However, I receive the following exception: `Traceback (most recent call last): File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module> print(wer.compute(predictions=result["predicted"], references=result["target"])) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute return wer(references, predictions) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer truth, hypothesis, truth_transform, hypothesis_transform, **kwargs File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures H, S, D, I = _get_operation_counts(truth, hypothesis) File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts editops = Levenshtein.editops(source_string, destination_string) MemoryError` My system has more than 10GB of available RAM. Looking at the code, I think that it could be related to the way jiwer does the calculation, as it is pasting all the sentences in a single string before calling Levenshtein editops function.
https://api.github.com/repos/huggingface/datasets
null
834,694,819
https://api.github.com/repos/huggingface/datasets/issues/2078/comments
MDU6SXNzdWU4MzQ2OTQ4MTk=
null
2,078
https://api.github.com/repos/huggingface/datasets/issues/2078/events
false
closed
2021-03-18T10:54:34Z
null
https://api.github.com/repos/huggingface/datasets/issues/2077
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2077/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2077/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://github.com/huggingface/datasets/pull/2077
[]
false
2021-03-18T11:33:26Z
2021-03-18T11:33:26Z
null
[ "🔥 " ]
null
[]
Bump huggingface_hub version
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2077/timeline
`0.0.2 => 0.0.6`
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2077.diff", "html_url": "https://github.com/huggingface/datasets/pull/2077", "merged_at": "2021-03-18T11:33:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/2077.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2077" }
834,649,536
https://api.github.com/repos/huggingface/datasets/issues/2077/comments
MDExOlB1bGxSZXF1ZXN0NTk1NDI0MTYw
null
2,077
https://api.github.com/repos/huggingface/datasets/issues/2077/events
true
open
2021-03-18T06:36:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/2076
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4", "events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}", "followers_url": "https://api.github.com/users/XuhuiZhou/followers", "following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}", "gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/XuhuiZhou", "id": 20436061, "login": "XuhuiZhou", "node_id": "MDQ6VXNlcjIwNDM2MDYx", "organizations_url": "https://api.github.com/users/XuhuiZhou/orgs", "received_events_url": "https://api.github.com/users/XuhuiZhou/received_events", "repos_url": "https://api.github.com/users/XuhuiZhou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions", "type": "User", "url": "https://api.github.com/users/XuhuiZhou" }
https://github.com/huggingface/datasets/issues/2076
[]
false
2021-03-22T11:52:31Z
null
null
[ "Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.", "It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and then update the dataset_infos.json file with\r\n```\r\ndatasets-cli test ./datasets/iwslt2017 --all_configs --save_infos --ignore_verifications\r\n```", "Is this a command to update my local files or fix the file Github repo in general? (I am not so familiar with the datasets-cli command here)\r\n\r\nI also took a brief look at the **Sharing your dataset** section, looks like I could fix that locally and push it to the repo? I guess we are \"canonical\" category?", "This command will update your local file. Then you can open a Pull Request to push your fix to the github repo :)\r\nAnd yes you are right, it is a \"canonical\" dataset, i.e. a dataset script defined in this github repo (as opposed to dataset repositories of users on the huggingface hub)", "Hi, thanks for the answer. \r\n\r\nI gave a try to the problem today. But I encountered an upload error: \r\n\r\n```\r\ngit push -u origin fix_link_iwslt\r\nEnter passphrase for key '/home2/xuhuizh/.ssh/id_rsa': \r\nERROR: Permission to huggingface/datasets.git denied to XuhuiZhou.\r\nfatal: Could not read from remote repository.\r\n\r\nPlease make sure you have the correct access rights\r\nand the repository exists.\r\n```\r\n\r\nAny insight here? 
\r\n\r\nBy the way, when I run the datasets-cli command, it shows the following error, but does not seem to be the error coming from `iwslt.py`\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home2/xuhuizh/anaconda3/envs/UMT/bin/datasets-cli\", line 33, in <module>\r\n sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/datasets_cli.py\", line 35, in main\r\n service.run()\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/commands/test.py\", line 141, in run\r\n try_from_hf_gcs=False,\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 579, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/builder.py\", line 639, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/xuhuizh/projects/datasets/src/datasets/utils/info_utils.py\", line 32, in verify_checksums\r\n raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))\r\ndatasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://wit3.fbk.eu/archive/2017-01-trnmted//texts/DeEnItNlRo/DeEnItNlRo/DeEnItNlRo-DeEnItNlRo.tgz'}\r\n```", "Hi ! To create a PR on this repo your must fork it and create a branch on your fork. See how to fork the repo [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#start-by-preparing-your-environment).\r\nAnd to make the command work without the `ExpectedMoreDownloadedFiles` error, you just need to use the `--ignore_verifications` flag.", "Hi @XuhuiZhou,\r\n\r\nAs @lhoestq has well explained, you need to fork HF's repository, create a feature branch in your fork, push your changes to it and then open a Pull Request to HF's upstream repository. 
This is so because at HuggingFace Datasets we follow a development model called \"Fork and Pull Model\". You can find more information here:\r\n- [Understanding the GitHub flow](https://guides.github.com/introduction/flow/)\r\n- [Forking Projects](https://guides.github.com/activities/forking/)\r\n\r\nAlternatively, if you find all these steps too complicated, you can use the GitHub official command line tool: [GitHub CLI](https://cli.github.com/). Once installed, in order to create a Pull Request, you only need to use this command:\r\n```shell\r\ngh pr create --web\r\n```\r\nThis utility will automatically create the fork, push your changes and open a Pull Request, under the hood." ]
null
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
Issue: Dataset download error
NONE
https://api.github.com/repos/huggingface/datasets/issues/2076/timeline
The download link in the `iwslt2017.py` file does not seem to work anymore. For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz` Would be nice if we could modify the script and use the new downloadable link?
https://api.github.com/repos/huggingface/datasets
null
834,445,296
https://api.github.com/repos/huggingface/datasets/issues/2076/comments
MDU6SXNzdWU4MzQ0NDUyOTY=
null
2,076
https://api.github.com/repos/huggingface/datasets/issues/2076/events
false
closed
2021-03-18T01:19:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/2075
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4", "events_url": "https://api.github.com/users/LifaSun/events{/privacy}", "followers_url": "https://api.github.com/users/LifaSun/followers", "following_url": "https://api.github.com/users/LifaSun/following{/other_user}", "gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LifaSun", "id": 6188893, "login": "LifaSun", "node_id": "MDQ6VXNlcjYxODg4OTM=", "organizations_url": "https://api.github.com/users/LifaSun/orgs", "received_events_url": "https://api.github.com/users/LifaSun/received_events", "repos_url": "https://api.github.com/users/LifaSun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions", "type": "User", "url": "https://api.github.com/users/LifaSun" }
https://github.com/huggingface/datasets/issues/2075
[]
false
2021-03-20T10:29:41Z
2021-03-20T10:29:41Z
null
[ "Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?", "@albertvillanova Thanks! It works well now. " ]
completed
[]
ConnectionError: Couldn't reach common_voice.py
NONE
https://api.github.com/repos/huggingface/datasets/issues/2075/timeline
When I run: from datasets import load_dataset, load_metric common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation") common_voice_test = load_dataset("common_voice", "zh-CN", split="test") Got: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py Version: 1.4.1 Thanks! @lhoestq @LysandreJik @thomwolf
https://api.github.com/repos/huggingface/datasets
null
834,301,246
https://api.github.com/repos/huggingface/datasets/issues/2075/comments
MDU6SXNzdWU4MzQzMDEyNDY=
null
2,075
https://api.github.com/repos/huggingface/datasets/issues/2075/events
false
closed
2021-03-18T00:02:36Z
null
https://api.github.com/repos/huggingface/datasets/issues/2074
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2074/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2074/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/2074
[]
false
2021-03-23T17:11:10Z
2021-03-23T17:11:10Z
null
[ "> It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.\r\n\r\nWe can also update the task lists here: https://github.com/huggingface/datasets-tagging/blob/main/task_set.json", "Hi @lhoestq,\r\n\r\nThanks for approving.\r\nHow do I add the new categories to the tagging app? What I have added is till `1T` and not `1M`.\r\n\r\nI'll also check the task list :)\r\n\r\nThanks,\r\nGunjan", "I think you can change it here: https://github.com/huggingface/datasets-tagging/blob/main/tagging_app.py#L412-L423", "Hi @lhoestq,\r\n\r\nI have made a PR for size categories on `datasets-tagging`\r\n\r\nFor tags, I have thought of adding more tags and categories, based on what I know about the existing datasets, any list will not be exhaustive because the contributors can be very specific or very general. Hence, there could be a continuous process of evaluating existing tags and adding more and more.\r\n\r\n```json\r\n{\r\n \"image-classification\": {\r\n \"description\": \"image classification tasks\",\r\n \"options\": [\r\n \"multi-class-classification\",\r\n \"multi-label-classification\",\r\n \"other\"\r\n ]\r\n },\r\n \"conditional-text-generation\": {\r\n \"description\": \"data-to-text and text transduction tasks such as translation or summarization\",\r\n \"options\": [\r\n \"machine-translation\",\r\n \"sentence-splitting-fusion\",\r\n \"extractive-and-abstractive-summarization\",\r\n \"abstractive-summarization\",\r\n \"extractive-summarization\",\r\n \"multi-document-summarization\",\r\n \"table-to-text\",\r\n \"text-simplification\",\r\n \"explanation-generation\",\r\n \"stuctured-to-text\",\r\n \"other\"\r\n ]\r\n },\r\n \"conditional-speech-generation\": {\r\n \"description\": \"speech generation tasks\",\r\n \"options\": [\r\n \"text-to-speech\",\r\n \"speech-translation\",\r\n \"other\"\r\n 
]\r\n },\r\n\r\n \"conditional-structure-generation\":{\r\n \"description\": \"text or speech to structured data\",\r\n \"options\":[\r\n \"knowlege-graph-mining\",\r\n \"code-generation\",\r\n ]\r\n },\r\n \"question-answering\": {\r\n \"description\": \"question answering tasks\",\r\n \"options\": [\r\n \"open-domain-qa\",\r\n \"closed-domain-qa\",\r\n \"multiple-choice-qa\",\r\n \"extractive-qa\",\r\n \"abstractive-qa\",\r\n \"conversational-qa\",\r\n \"multi-document-qa\",\r\n \"other\"\r\n ]\r\n },\r\n \"speech-classification\": {\r\n \"description\": \"speech to label tasks\",\r\n \"options\": [\r\n \"other\"\r\n ]\r\n },\r\n \"sequence-modeling\": {\r\n \"description\": \"such as language, speech or dialogue modeling\",\r\n \"options\": [\r\n \"dialogue-modeling\",\r\n \"language-modeling\",\r\n \"speech-modeling\",\r\n \"multi-turn\",\r\n \"slot-filling\",\r\n \"other\"\r\n ]\r\n },\r\n \"speech-recognition\": {\r\n \"description\": \"speech to text tasks\",\r\n \"options\": [\r\n \"automatic-speech-recognition\",\r\n \"other\"\r\n ]\r\n },\r\n \"structure-prediction\": {\r\n \"description\": \"predicting structural properties of the text, such as syntax\",\r\n \"options\": [\r\n \"coreference-resolution\",\r\n \"named-entity-recognition\",\r\n \"part-of-speech-tagging\",\r\n \"parsing\",\r\n \"sentence-segmentation\",\r\n \"single-span-prediction\",\r\n \"multi-span-prediction\",\r\n \"clause-or-phrase-segmentation\",\r\n \"dependency-parsing\",\r\n \"constituency-parsing\",\r\n \"other\"\r\n ]\r\n },\r\n\r\n \"text-classification\": {\r\n \"description\": \"predicting a class index or boolean value\",\r\n \"options\": [\r\n \"acceptability-classification\",\r\n \"entity-linking-classification\",\r\n \"relation-extraction\",\r\n \"common-sense-reasoning\",\r\n \"fact-checking\",\r\n \"intent-classification\",\r\n \"multi-class-classification\",\r\n \"multi-label-classification\",\r\n \"natural-language-inference\",\r\n 
\"semantic-similarity-classification\",\r\n \"sentiment-classification\",\r\n \"topic-classification\",\r\n \"emotion-classification\",\r\n \"token-classification\",\r\n \"word-sense-disambiguation\",\r\n \"offense-classification\",\r\n \"hate-speech-classification\",\r\n \"language-classification\",\r\n \"bias-classification\",\r\n \"other\"\r\n ]\r\n },\r\n \"text-retrieval\": {\r\n \"description\": \"information or text retrieval tasks\",\r\n \"options\": [\r\n \"document-retrieval\",\r\n \"utterance-retrieval\",\r\n \"entity-linking-retrieval\",\r\n \"fact-checking-retrieval\",\r\n \"other\"\r\n ]\r\n },\r\n \"text-scoring\": {\r\n \"description\": \"text scoring tasks, predicting a real valued score for some text\",\r\n \"options\": [\r\n \"semantic-similarity-scoring\",\r\n \"sentiment-scoring\",\r\n \"other\"\r\n ]\r\n },\r\n \"other\": {\r\n \"description\": \"raw data or other task families\",\r\n \"options\": [\r\n \"data-mining\",\r\n \"raw-text\",\r\n \"raw-speech\",\r\n \"raw-image\",\r\n \"other\"\r\n ]\r\n }\r\n}\r\n```\r\nI'll sort this when adding it to the .json. Also, I'll change categories according to this if this seems okay to you and commit it to this PR.\r\n\r\nI'll also fix spelling others, and some categories which are partially correct, for e.g. `other-machine-translation` to the correct tag.\r\nLastly, with the options also we can add a description to make it easier for the users to understand what we mean by each option. Example, for \"emotion-classification\", we can explain what kinds of data we are talking about, or what we mean by \"single-span-prediction\", etc.", "Good idea thank you ! 
Can you open a PR on datasets-tagging for the tasks as well ?\r\nAlso you can update the dataset card with the new tasks categories in another PR if you don't mind", "Hi @lhoestq,\r\n\r\nThanks, what all do I need to add to merge this PR?", "We can merge this one once the PR on dataset sizes is merged on `datasets-tagging` ;)", "Hi @lhoestq,\r\n\r\nOne problem with this approach is that for datasets like `ccaligned_multilingual`, the infos won't be complete because we don't have all configs. In that case, people might face trouble finding the datatset using the tag. Although, they probably won't be checking the size tag for a dataset like that.\r\n\r\nWhat do you think?\r\n\r\nCC @theo-m ", "For datasets like `ccaligned_multilingual` it's important to have all the tags for users to search and find it. Currently is has the full list of tags (without the config names). So you can actually find the dataset, but you don't know what tag correspond to what configuration. " ]
null
[]
Fix size categories in YAML Tags
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2074/timeline
This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also. This PR also adds a couple of infos that I found missing. The code for generating this: ```python for dataset in sorted(os.listdir('./datasets/')): if '.' not in dataset and dataset not in ['c4', 'csv', 'downloads', 'cc100', 'ccaligned_multilingual', 'celeb_a', 'chr_en', 'emea', 'glue']: infos = {} stats = {} st = '' with open(f'datasets/{dataset}/README.md') as f: d = f.read() start_dash = d.find('---') + 3 end_dash = d[start_dash:].find('---') + 3 rest_text = d[end_dash + 3:] try: full_yaml = OmegaConf.create(d[start_dash:end_dash]) readme = OmegaConf.to_container(full_yaml['size_categories'], resolve=True) except Exception as e: print(e) continue try: with open(f'datasets/{dataset}/dataset_infos.json') as f: data = json.load(f) except Exception as e: print(e) continue # Skip those without infos. 
done_set = set([]) num_keys = len(data.keys()) for keys in data: # dataset = load_dataset('opus100', f'{dirs}') total = 0 for split in data[keys]['splits']: total = total + data[keys]['splits'][split]['num_examples'] if total < 1000: st += "- n<1K" + '\n' infos[keys] = ["n<1K"] elif total >= 1000 and total < 10000: infos[keys] = ["1K<n<10K"] elif total >= 10000 and total < 100000: infos[keys] = ["10K<n<100K"] elif total >= 100000 and total < 1000000: infos[keys] = ["100K<n<1M"] elif total >= 1000000 and total < 10000000: infos[keys] = ["1M<n<10M"] elif total >= 10000000 and total < 100000000: infos[keys] = ["10M<n<100M"] elif total >= 100000000 and total < 1000000000: infos[keys] = ["100M<n<1B"] elif total >= 1000000000 and total < 10000000000: infos[keys] = ["1B<n<10B"] elif total >= 10000000000 and total < 100000000000: infos[keys] = ["10B<n<100B"] elif total >= 100000000000 and total < 1000000000000: infos[keys] = ["100B<n<1T"] else: infos[keys] = ["n>1T"] done_set = done_set.union(infos[keys]) if (isinstance(readme, list) and list(infos.values())[0] != readme) or (isinstance(readme, dict) and readme != infos): print('-' * 30) print(done_set) print(f"Changing Full YAML for {dataset}") print(OmegaConf.to_yaml(full_yaml)) if len(done_set) == 1: full_yaml['size_categories'] = list(done_set) else: full_yaml['size_categories'] = dict([(k, v) for k, v in sorted(infos.items(), key=lambda x: x[0])]) full_yaml_string = OmegaConf.to_yaml(full_yaml) print('-' * 30) print(full_yaml_string) inp = input('Do you wish to continue?(Y/N)') if inp == 'Y': with open(f'./datasets/{dataset}/README.md', 'w') as f: f.write('---\n') f.write(full_yaml_string) f.write('---') f.write(rest_text) else: break ``` Note that the lower-bound is inclusive. I'm unsure if this is how it is done in the tagging app. EDIT: It would be great if there was a way to make the task categories consistent too. 
For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency. EDIT: I understand this will not work for cases where only the infos for some of the configs are present, for example: `ccaligned_multilingual` has only 5 out of several configs present, and infos has only information about them. Hence, I have skipped a few datasets in the code, if there are more such datasets, then I'll ignore them too.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2074.diff", "html_url": "https://github.com/huggingface/datasets/pull/2074", "merged_at": "2021-03-23T17:11:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2074.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2074" }
834,268,463
https://api.github.com/repos/huggingface/datasets/issues/2074/comments
MDExOlB1bGxSZXF1ZXN0NTk1MTIzMjYw
null
2,074
https://api.github.com/repos/huggingface/datasets/issues/2074/events
true
closed
2021-03-17T21:28:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/2073
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2073/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2073/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
https://github.com/huggingface/datasets/pull/2073
[]
false
2021-03-18T09:09:25Z
2021-03-18T09:09:24Z
null
[]
null
[]
Fixes check of TF_AVAILABLE and TORCH_AVAILABLE
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2073/timeline
# What is this PR doing This PR implements the checks of whether `Tensorflow` and `Pytorch` are available in the same way as `transformers` does. I added additional checks for the different `Tensorflow` and `torch` versions. #2068
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2073.diff", "html_url": "https://github.com/huggingface/datasets/pull/2073", "merged_at": "2021-03-18T09:09:24Z", "patch_url": "https://github.com/huggingface/datasets/pull/2073.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2073" }
834,192,501
https://api.github.com/repos/huggingface/datasets/issues/2073/comments
MDExOlB1bGxSZXF1ZXN0NTk1MDYyMzQ2
null
2,073
https://api.github.com/repos/huggingface/datasets/issues/2073/events
true
closed
2021-03-17T18:13:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/2072
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2072/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2072/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2072
[]
false
2021-03-24T08:20:57Z
2021-03-18T12:41:21Z
null
[ "I think I will stop pushing to this PR, so that it can me merged for today release. \r\n\r\nI will open another PR for further fixing docs.\r\n\r\nDo you agree, @lhoestq ?", "Sounds good thanks !" ]
null
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
Fix docstring issues
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2072/timeline
Fix docstring issues.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2072.diff", "html_url": "https://github.com/huggingface/datasets/pull/2072", "merged_at": "2021-03-18T12:41:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/2072.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2072" }
834,054,837
https://api.github.com/repos/huggingface/datasets/issues/2072/comments
MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4
null
2,072
https://api.github.com/repos/huggingface/datasets/issues/2072/events
true
closed
2021-03-17T16:08:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/2071
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
https://github.com/huggingface/datasets/issues/2071
[]
false
2021-03-18T09:10:23Z
2021-03-18T09:10:23Z
null
[ "dupe of #1992" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Multiprocessing is slower than single process
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2071/timeline
```python # benchmark_filter.py import logging import sys import time from datasets import load_dataset, set_caching_enabled if __name__ == "__main__": set_caching_enabled(False) logging.basicConfig(level=logging.DEBUG) bc = load_dataset("bookcorpus") now = time.time() try: bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1])) except Exception as e: print(f"cancelled: {e}") elapsed = time.time() - now print(elapsed) ``` Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+)
https://api.github.com/repos/huggingface/datasets
null
833,950,824
https://api.github.com/repos/huggingface/datasets/issues/2071/comments
MDU6SXNzdWU4MzM5NTA4MjQ=
null
2,071
https://api.github.com/repos/huggingface/datasets/issues/2071/events
false
closed
2021-03-17T13:51:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/2070
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4", "events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}", "followers_url": "https://api.github.com/users/MichaelYxWang/followers", "following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MichaelYxWang", "id": 29818977, "login": "MichaelYxWang", "node_id": "MDQ6VXNlcjI5ODE4OTc3", "organizations_url": "https://api.github.com/users/MichaelYxWang/orgs", "received_events_url": "https://api.github.com/users/MichaelYxWang/received_events", "repos_url": "https://api.github.com/users/MichaelYxWang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions", "type": "User", "url": "https://api.github.com/users/MichaelYxWang" }
https://github.com/huggingface/datasets/issues/2070
[]
false
2021-08-04T17:57:16Z
2021-08-04T17:57:16Z
null
[ "Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a batch.\r\n\r\nHowever it seems like `tokenized_examples` doesn't have the same number of elements in each field. One field seems to have `1180` elements while `candidate_attention_mask` only has `1178`." ]
completed
[]
ArrowInvalid issue for squad v2 dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2070/timeline
Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb). In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called candidate_input_ids, candidate_attention_mask and candidate_token_type_ids. When I try to run the next cell for dataset.map, I got the following error: `ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178` My code is as follows: ``` def generate_candidate_questions(examples): val_questions = examples["question"] candididate_questions = random.sample(datasets["train"]["question"], len(val_questions)) candididate_questions = [x[:max_length] for x in candididate_questions] return candididate_questions def prepare_validation_features(examples, use_mixing=False): pad_on_right = tokenizer.padding_side == "right" tokenized_examples = tokenizer( examples["question" if pad_on_right else "context"], examples["context" if pad_on_right else "question"], truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) if use_mixing: candidate_questions = generate_candidate_questions(examples) tokenized_candidates = tokenizer( candidate_questions if pad_on_right else examples["context"], examples["context"] if pad_on_right else candidate_questions, truncation="only_second" if pad_on_right else "only_first", max_length=max_length, stride=doc_stride, return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length", ) sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") tokenized_examples["example_id"] = [] if use_mixing: tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"] 
tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"] tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"] for i in range(len(tokenized_examples["input_ids"])): sequence_ids = tokenized_examples.sequence_ids(i) context_index = 1 if pad_on_right else 0 sample_index = sample_mapping[i] tokenized_examples["example_id"].append(examples["id"][sample_index]) tokenized_examples["offset_mapping"][i] = [ (o if sequence_ids[k] == context_index else None) for k, o in enumerate(tokenized_examples["offset_mapping"][i]) ] return tokenized_examples validation_features = datasets["validation"].map( lambda xs: prepare_validation_features(xs, True), batched=True, remove_columns=datasets["validation"].column_names ) ``` I guess this might happen because of the batched=True. I see similar issues in this repo related to arrow table length mismatch error, but in their cases, the numbers vary a lot. In my case, this error always happens when the expected length and unexpected length are very close. Thanks for the help!
https://api.github.com/repos/huggingface/datasets
null
833,799,035
https://api.github.com/repos/huggingface/datasets/issues/2070/comments
MDU6SXNzdWU4MzM3OTkwMzU=
null
2,070
https://api.github.com/repos/huggingface/datasets/issues/2070/events
false
closed
2021-03-17T13:19:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/2069
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2069/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2069
[]
false
2021-03-18T10:27:40Z
2021-03-18T10:27:40Z
null
[ "Maybe we should add some other split classes?" ]
null
[]
Add and fix docstring for NamedSplit
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2069/timeline
Add and fix docstring for `NamedSplit`, which was missing.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2069.diff", "html_url": "https://github.com/huggingface/datasets/pull/2069", "merged_at": "2021-03-18T10:27:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2069.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2069" }
833,768,926
https://api.github.com/repos/huggingface/datasets/issues/2069/comments
MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw
null
2,069
https://api.github.com/repos/huggingface/datasets/issues/2069/events
true
closed
2021-03-17T10:04:27Z
null
https://api.github.com/repos/huggingface/datasets/issues/2068
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4", "events_url": "https://api.github.com/users/sivakhno/events{/privacy}", "followers_url": "https://api.github.com/users/sivakhno/followers", "following_url": "https://api.github.com/users/sivakhno/following{/other_user}", "gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sivakhno", "id": 1651457, "login": "sivakhno", "node_id": "MDQ6VXNlcjE2NTE0NTc=", "organizations_url": "https://api.github.com/users/sivakhno/orgs", "received_events_url": "https://api.github.com/users/sivakhno/received_events", "repos_url": "https://api.github.com/users/sivakhno/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions", "type": "User", "url": "https://api.github.com/users/sivakhno" }
https://github.com/huggingface/datasets/issues/2068
[]
false
2021-06-14T04:47:30Z
2021-06-14T04:47:30Z
null
[ "cc @philschmid ", "Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`", "Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3 `), but the error is the same. ", "Could paste the code you use the start your training job and the fine-tuning script you run? ", "@sivakhno this should be now fixed in `datasets>=1.5.0`. ", "@philschmid Recently released tensorflow-macos seems to be missing. ", "I've created a PR to add this. " ]
completed
[]
PyTorch not available error on SageMaker GPU docker though it is installed
NONE
https://api.github.com/repos/huggingface/datasets/issues/2068/timeline
I get an error when running data loading using SageMaker SDK ``` File "main.py", line 34, in <module> run_training() File "main.py", line 25, in run_training dm.setup('fit') File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn return fn(*args, **kwargs) File "/opt/ml/code/data_module.py", line 103, in setup self.dataset[split].set_format(type="torch", columns=self.columns) File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format _ = get_formatter(type, **format_kwargs) File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type] ValueError: PyTorch needs to be installed to be able to return PyTorch tensors. ``` when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines ``` self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns] self.dataset[split].set_format(type="torch", columns=self.columns) ``` The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3 . By running the container interactively I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`. Also as a first line in the data loading module I have ``` import os os.environ["USE_TF"] = "0" os.environ["USE_TORCH"] = "1" ``` But unfortunately the error still persists. Any suggestions would be appreciated as I am stuck. Many Thanks!
https://api.github.com/repos/huggingface/datasets
null
833,602,832
https://api.github.com/repos/huggingface/datasets/issues/2068/comments
MDU6SXNzdWU4MzM2MDI4MzI=
null
2,068
https://api.github.com/repos/huggingface/datasets/issues/2068/events
false
closed
2021-03-17T09:12:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/2067
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/flozi00", "id": 47894090, "login": "flozi00", "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "organizations_url": "https://api.github.com/users/flozi00/orgs", "received_events_url": "https://api.github.com/users/flozi00/received_events", "repos_url": "https://api.github.com/users/flozi00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "type": "User", "url": "https://api.github.com/users/flozi00" }
https://github.com/huggingface/datasets/issues/2067
[]
false
2021-08-04T17:59:08Z
2021-08-04T17:59:08Z
null
[ "Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..", "```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n\r\n\r\nupdated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n\r\n```", "\r\n\r\n\r\n\r\n\r\nI was able to copy some of the shell \r\nThis is repeating every half second\r\nWin 10, Anaconda with python 3.8, datasets installed from main branche\r\n```\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n exitcode = _main(fd, parent_sentinel)\r\n raise RuntimeError('''\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n\r\n The \"freeze_support()\" line can be omitted if the program\r\n is not going to be frozen to produce an executable. 
return _run_module_code(code, init_globals, run_name,\r\n prepare(preparation_data)\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n self._repopulate_pool()\r\n File 
\"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 327, in _Popen\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n return Popen(process_obj)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\popen_spawn_win32.py\", line 45, in __init__\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n prep_data = spawn.get_preparation_data(process_obj._name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 154, in get_preparation_data\r\n 
_check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n raise RuntimeError('''\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n```", "Thanks this is really helpful !\r\nI'll try to reproduce on my side and come back to you", "if __name__ == '__main__':\r\n\r\n\r\nThis line before calling the map function stops the error but the script still repeats endless", "Indeed you needed `if __name__ == '__main__'` since accoding to [this stackoverflow post](https://stackoverflow.com/a/18205006):\r\n\r\n> On Windows the subprocesses will import (i.e. execute) the main module at start. 
You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.\r\n\r\nRegarding the hanging issue, can you try to update `dill` and `multiprocess` ?", "It's already on the newest version", "```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 791, in move\r\n os.rename(src, real_dst)\r\nFileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\tmpx9fl_jg8' -> 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\n prepare(preparation_data)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File 
\"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\cvtrain.py\", line 243, in <module>\r\n common_voice_train = common_voice_train.map(remove_special_characters, remove_columns=[\"sentence\"])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1339, in map\r\n return self._map_single(\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 203, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1646, in _map_single\r\n shutil.move(tmp_file.name, cache_file_name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 805, in move\r\n copy_function(src, real_dst)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 435, in copy2\r\n copyfile(src, dst, follow_symlinks=follow_symlinks)\r\n 0%| | 0/27771 [00:00<?, ?ex/s] \r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:\r\nOSError: [Errno 22] Invalid argument: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n```\r\n\r\nI was adding freeze support before calling the mapping function like this\r\nif __name__ == '__main__':\r\n freeze_support()\r\n dataset.map(....)", "Usually OSError of an arrow file on windows means that the file is currently opened as a dataset object, so you can't overwrite it until the dataset object falls out of scope.\r\nCan you 
make sure that there's no dataset object that loaded the `cache-9b4f203a63742dfc.arrow` file ?", "Now I understand\r\nThe error occures because the script got restarted in another thread, so the object is already loaded.\r\nStill don't have an idea why a new thread starts the whole script again" ]
completed
[]
Multiprocessing windows error
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2067/timeline
As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2, when using the num_proc argument on Windows the whole Python environment crashes and hangs in a loop, for example at the map_to_array part. An error occurs because the cache file already exists and Windows throws an error. After this the log crashes into a loop
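The fix that surfaces in the comments on this issue is the standard Windows spawn-mode guard. A minimal sketch (not the actual training script; `square` and `run_pool` are illustrative names) showing why `Pool`/`map` work must sit under `if __name__ == '__main__'`:

```python
from multiprocessing import Pool, freeze_support

def square(x):
    return x * x

def run_pool():
    # On Windows, child processes re-import (i.e. execute) this module at
    # start, so any Pool creation must be reachable only via the main guard;
    # otherwise each child spawns children of its own, looping endlessly.
    with Pool(2) as pool:
        return pool.map(square, [1, 2, 3])

if __name__ == "__main__":
    freeze_support()  # no-op except in frozen Windows executables
    print(run_pool())
```

The same guard applies to `dataset.map(..., num_proc=N)`: the call has to be inside the guarded main block, not at module top level.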
https://api.github.com/repos/huggingface/datasets
null
833559940
https://api.github.com/repos/huggingface/datasets/issues/2067/comments
MDU6SXNzdWU4MzM1NTk5NDA=
null
2067
https://api.github.com/repos/huggingface/datasets/issues/2067/events
false
closed
2021-03-17T07:23:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/2066
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2066/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2066
[]
false
2021-03-17T09:21:21Z
2021-03-17T09:21:21Z
null
[]
null
[]
Fix docstring rendering of Dataset/DatasetDict.from_csv args
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2066/timeline
Fix the docstring rendering of Dataset/DatasetDict.from_csv args.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2066.diff", "html_url": "https://github.com/huggingface/datasets/pull/2066", "merged_at": "2021-03-17T09:21:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/2066.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2066" }
833480551
https://api.github.com/repos/huggingface/datasets/issues/2066/comments
MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz
null
2066
https://api.github.com/repos/huggingface/datasets/issues/2066/events
true
closed
2021-03-17T00:20:22Z
null
https://api.github.com/repos/huggingface/datasets/issues/2065
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4", "events_url": "https://api.github.com/users/lorr1/events{/privacy}", "followers_url": "https://api.github.com/users/lorr1/followers", "following_url": "https://api.github.com/users/lorr1/following{/other_user}", "gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lorr1", "id": 57237365, "login": "lorr1", "node_id": "MDQ6VXNlcjU3MjM3MzY1", "organizations_url": "https://api.github.com/users/lorr1/orgs", "received_events_url": "https://api.github.com/users/lorr1/received_events", "repos_url": "https://api.github.com/users/lorr1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorr1/subscriptions", "type": "User", "url": "https://api.github.com/users/lorr1" }
https://github.com/huggingface/datasets/issues/2065
[]
false
2023-03-31T12:17:06Z
2021-05-10T06:45:29Z
null
[ "Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646))\r\n\r\nThat means it keeps the permissions specified by the `tempfile.NamedTemporaryFile` object, i.e. `-rw-------` instead of `-rw-r--r--`. Improving this could be a nice first contribution to the library :)", "Hi @lhoestq,\r\nI looked into this and yes you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1871) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1590), post creation to 0644 inorder for group and others to read it?", "Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646) actually.\r\nApparently they set the default 0600 for temporary files for security reasons, so let's update the umask only after the file has been moved", "Would it be possible to actually set the umask based on a user provided argument? For example, a popular usecase my team has is using a shared file-system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a -rw-r--r-- wouldn't fix. 
", "Note that you can get the cache files of a dataset with the `cache_files` attributes.\r\nThen you can `chmod` those files and all the other cache files in the same directory.\r\n\r\nMoreover we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_dataset` for example, and then all the new transformed cached files will have the same permissions.\r\nWhat do you think ?", "This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?", "You can just check the permission of `dataset.cache_files[0]` imo", "> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions.\r\n\r\nI was referring to this. Ensuring that newly generated `cache_files` have the same permissions", "Yes exactly\r\n\r\nI imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?", "Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?", "Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? Or use the value of `os.umask` ?", "Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)", "I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users.\r\n\r\nFor example, I might start a training run, downloading a dataset. 
Then, a couple of days later, a collaborator using the same repository might want to use the same dataset on the same shared filesystem, but won't be able to under the default permissions.\r\n\r\nBeing able to specify directly in the top-level `load_dataset()` call seems important, but an equally valid option would be to just inherit from the running user's `umask` (this should probably be the default anyway).\r\n\r\nSo basically, argument that takes a custom set of permissions, and by default, use the running user's umask!", "Maybe let's start by defaulting to the user's umask !\r\nDo you want to give it a try @bhavitvyamalik ?", "Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\n", "You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.", "FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well.\r\n\r\nthanks @thomwolf for the pointer.", "Hi @stas00,\r\nFor this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?", "That sounds very right to me, @bhavitvyamalik ", "The cluster I am working on does not allow me to change the permission of the files with os.chmod. I was wondering if there is any workaround for this? My cache is in a GCP bucket and I can't change file permissions once I mount it.", "@vmurahari3 what error do you have exactly ?", "I get a permission denied error on https://github.com/huggingface/datasets/blob/b8363e0539c6f0cb5de49af32962cf2eb4c47395/src/datasets/arrow_dataset.py#L2799. I suspect I don't have permissions to change group permissions. 
I am mounting a GCP bucket through [gcsfuse](https://github.com/GoogleCloudPlatform/gcsfuse). ", "What @lhoestq is asking for is the full multi-line traceback - it's almost never enough to show the last line - a full stack is needed to get the context. Thank you!\r\n\r\nI wonder if a workaround is to try/except and then issue a warning if this fails?", "Hello, I'm working on a project with a very similar setup to the one mentioned by @siddk, namely we have a shared cache directory in the team that we wish to use to avoid redundant dataset downloads. However, we're hitting `PermissionError` when one member of the team tries to reload a dataset that was downloaded by another team member (stack trace below).\r\n\r\nConcretely, we first create a shared directory and give everyone read, write, execute permissions as follows (I know this isn't best practice 🤫):\r\n\r\n```shell\r\nmkdir shared-folder\r\nchmod -R 777 shared-folder\r\n```\r\n\r\nWe then set the following in our `.bashrc` profiles:\r\n\r\n```\r\n# Hugging Face caches\r\nexport HUGGINGFACE_HUB_CACHE=/path/to/shared-folder\r\nexport HF_DATASETS_CACHE=/path/to/shared-folder\r\n\r\n# For shared access to the shared-folder directory\r\numask 000\r\n```\r\n\r\nNow, running e.g. 
`load_dataset(\"emotion\")` the first time works (as expected), but when another team member tries to load from cache we get something like:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n dataset_module = dataset_module_factory(\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1218, in dataset_module_factory\r\n raise e1 from None\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 1193, in dataset_module_factory\r\n ).get_module()\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 903, in get_module\r\n local_path = self.download_loading_script()\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/load.py\", line 871, in download_loading_script\r\n return cached_path(file_path, download_config=download_config)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/file_utils.py\", line 210, in cached_path\r\n output_path = ExtractManager(cache_dir=download_config.cache_dir).extract(\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/extract.py\", line 42, in extract\r\n extractor_format = self.extractor.infer_extractor_format(input_path)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/extract.py\", line 287, in infer_extractor_format\r\n if extractor.is_extractable(path, magic_number=magic_number):\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/site-packages/datasets/utils/extract.py\", line 84, in is_extractable\r\n return tarfile.is_tarfile(path)\r\n File 
\"~/miniconda3/envs/llama-nol/lib/python3.10/tarfile.py\", line 2517, in is_tarfile\r\n t = open(name)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/tarfile.py\", line 1632, in open\r\n return func(name, \"r\", fileobj, **kwargs)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/tarfile.py\", line 1698, in gzopen\r\n fileobj = GzipFile(name, mode + \"b\", compresslevel, fileobj)\r\n File \"~/miniconda3/envs/llama-nol/lib/python3.10/gzip.py\", line 174, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nPermissionError: [Errno 13] Permission denied: '/path/to/shared-folder/downloads/4e7db366b1ea045d0faa083a2e47ac87326ad8e653f894763b0982c2a1e94078.cc96367835404d4195bf75b2602e6dbbfd2da9288170fc0b2298fc0e376ff52a.py'\r\n```\r\n\r\nIf I understand this comment from @bhavitvyamalik:\r\n\r\n> Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\nthe goal was to infer the `umask` of the user profile, but perhaps I misunderstood and the true solution is to chmod the `cache_files` as @lhoestq suggests above - is that correct?\r\n\r\ncc @natolambert @nazneenrajani @edbeeching ", "Python files are stored in the modules caches, that you can modify by setting `HF_MODULES_CACHE` as well and set appropriate permissions\r\n\r\nThey are in different locations because the `HF_MODULES_CACHE` is added to the python path to be able to import the dataset scripts :)", "Thanks a lot for the tip @lhoestq ! I didn't know about this extra cache - thanks :)" ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
Only user permission of saved cache files, not group
NONE
https://api.github.com/repos/huggingface/datasets/issues/2065/timeline
Hello, It seems when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue as we have to continually reset the permission of the files. Do you know any ways around this or a way to correctly set the permissions?
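The thread converges on reapplying the running user's umask after the Arrow file is moved out of its temp location, since `tempfile.NamedTemporaryFile` creates files with mode `0o600`. A minimal sketch of that idea (`apply_user_umask` is an illustrative helper, not the actual `datasets` implementation):

```python
import os
import tempfile

def apply_user_umask(path):
    """Relax the private 0o600 mode that NamedTemporaryFile assigns,
    honoring the user's umask so teammates on a shared filesystem can
    read the cached file."""
    umask = os.umask(0o777)  # os.umask installs a new mask and returns the old one
    os.umask(umask)          # restore the original mask immediately
    os.chmod(path, 0o666 & ~umask)

if __name__ == "__main__":
    # A freshly created temp file is readable only by its owner...
    tmp = tempfile.NamedTemporaryFile(delete=False)
    tmp.close()
    print(oct(os.stat(tmp.name).st_mode & 0o777))  # 0o600
    # ...until the user's umask is applied after the move.
    apply_user_umask(tmp.name)
    print(oct(os.stat(tmp.name).st_mode & 0o777))
    os.remove(tmp.name)
```

With a typical umask of `0o022` this yields `-rw-r--r--`, which is what a shared cache directory needs.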
https://api.github.com/repos/huggingface/datasets
null
833291432
https://api.github.com/repos/huggingface/datasets/issues/2065/comments
MDU6SXNzdWU4MzMyOTE0MzI=
null
2065
https://api.github.com/repos/huggingface/datasets/issues/2065/events
false
closed
2021-03-16T16:43:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/2064
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2064
[]
false
2021-03-16T18:00:08Z
2021-03-16T18:00:08Z
null
[]
null
[]
Fix ted_talks_iwslt version error
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2064/timeline
This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly. Fixes #2059
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2064.diff", "html_url": "https://github.com/huggingface/datasets/pull/2064", "merged_at": "2021-03-16T18:00:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/2064.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2064" }
833002360
https://api.github.com/repos/huggingface/datasets/issues/2064/comments
MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1
null
2064
https://api.github.com/repos/huggingface/datasets/issues/2064/events
true
closed
2021-03-16T16:33:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/2063
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2063/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2063/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
https://github.com/huggingface/datasets/pull/2063
[]
false
2021-03-17T09:42:52Z
2021-03-17T09:42:37Z
null
[]
null
[]
[Common Voice] Adapt dataset script so that no manual data download is actually needed
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2063/timeline
This PR changes the dataset script so that no manual data dir is needed anymore.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2063.diff", "html_url": "https://github.com/huggingface/datasets/pull/2063", "merged_at": "2021-03-17T09:42:37Z", "patch_url": "https://github.com/huggingface/datasets/pull/2063.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2063" }
832993705
https://api.github.com/repos/huggingface/datasets/issues/2063/comments
MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5
null
2063
https://api.github.com/repos/huggingface/datasets/issues/2063/events
true
closed
2021-03-16T10:07:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2062
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2062/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2062/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/46561493?v=4", "events_url": "https://api.github.com/users/neal2018/events{/privacy}", "followers_url": "https://api.github.com/users/neal2018/followers", "following_url": "https://api.github.com/users/neal2018/following{/other_user}", "gists_url": "https://api.github.com/users/neal2018/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/neal2018", "id": 46561493, "login": "neal2018", "node_id": "MDQ6VXNlcjQ2NTYxNDkz", "organizations_url": "https://api.github.com/users/neal2018/orgs", "received_events_url": "https://api.github.com/users/neal2018/received_events", "repos_url": "https://api.github.com/users/neal2018/repos", "site_admin": false, "starred_url": "https://api.github.com/users/neal2018/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/neal2018/subscriptions", "type": "User", "url": "https://api.github.com/users/neal2018" }
https://github.com/huggingface/datasets/pull/2062
[]
false
2021-03-17T09:21:57Z
2021-03-17T09:21:57Z
null
[]
null
[]
docs: fix missing quotation
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2062/timeline
The JSON code is missing a quote
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2062.diff", "html_url": "https://github.com/huggingface/datasets/pull/2062", "merged_at": "2021-03-17T09:21:56Z", "patch_url": "https://github.com/huggingface/datasets/pull/2062.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2062" }
832625483
https://api.github.com/repos/huggingface/datasets/issues/2062/comments
MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz
null
2062
https://api.github.com/repos/huggingface/datasets/issues/2062/events
true
closed
2021-03-16T09:32:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2061
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4", "events_url": "https://api.github.com/users/adzcodez/events{/privacy}", "followers_url": "https://api.github.com/users/adzcodez/followers", "following_url": "https://api.github.com/users/adzcodez/following{/other_user}", "gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adzcodez", "id": 55791365, "login": "adzcodez", "node_id": "MDQ6VXNlcjU1NzkxMzY1", "organizations_url": "https://api.github.com/users/adzcodez/orgs", "received_events_url": "https://api.github.com/users/adzcodez/received_events", "repos_url": "https://api.github.com/users/adzcodez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions", "type": "User", "url": "https://api.github.com/users/adzcodez" }
https://github.com/huggingface/datasets/issues/2061
[]
false
2021-06-18T11:54:11Z
2021-06-18T11:54:10Z
null
[ "@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.", "Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n> \r\n> The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.\r\n\r\nYou're right: \"_\" should be added to the list of labels, and the examples must be sequences of tokens, not singles tokens.\r\n", "@lhoestq Can you please label this issue with the \"good first issue\" label? I'm not sure I'll find time to fix this.\r\n\r\nTo resolve it, the user should:\r\n1. add `\"_\"` to the list of labels\r\n2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204))\r\n3. update the dummy data\r\n4. update the dataset info\r\n5. [optional] add info about the data fields structure of the udpos subset to the dataset readme", "I tried fixing this issue, but its working fine in the dev version : \"1.6.2.dev0\"\r\n\r\nI think somebody already fixed it. ", "Hi,\r\n\r\nafter #2326, the lines with pos tags equal to `\"_\"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free to borrow the logic from [here](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204) if you decide to work on this). ", "Closed by #2466." ]
completed
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
Cannot load udpos subsets from xtreme dataset using load_dataset()
NONE
https://api.github.com/repos/huggingface/datasets/issues/2061/timeline
Hello, I am trying to load the udpos English subset from xtreme dataset, but this faces an error during loading. I am using datasets v1.4.1, pip install. I have tried with other udpos languages which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and faced the same error. Reprex is: `from datasets import load_dataset ` `dataset = load_dataset('xtreme', 'udpos.English')` The error is: `KeyError: '_'` The full traceback is: KeyError Traceback (most recent call last) <ipython-input-5-7181359ea09d> in <module> 1 from datasets import load_dataset ----> 2 dataset = load_dataset('xtreme', 'udpos.English') ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 738 739 # Download and prepare data --> 740 builder_instance.download_and_prepare( 741 download_config=download_config, 742 download_mode=download_mode, ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 576 logger.warning("HF google storage unreachable. 
Downloading and preparing it from source") 577 if not downloaded_from_gcs: --> 578 self._download_and_prepare( 579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 580 ) ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 654 try: 655 # Prepare split will record examples associated to the split --> 656 self._prepare_split(split_generator, **prepare_split_kwargs) 657 except OSError as e: 658 raise OSError( ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator) 977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose 978 ): --> 979 example = self.info.features.encode_example(record) 980 writer.write(example) 981 finally: ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example) 946 def encode_example(self, example): 947 example = cast_to_python_objects(example) --> 948 return encode_nested_example(self, example) 949 950 def encode_batch(self, batch): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 840 # Nested structures: we allow dict, list/tuples, sequences 841 if isinstance(schema, dict): --> 842 return { 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0) 841 if isinstance(schema, dict): 842 return { --> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 844 } 845 elif isinstance(schema, (list, tuple)): ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj) 868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, 
_ArrayXD)): --> 870 return schema.encode_example(obj) 871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 872 return obj ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data) 647 # If a string is given, convert to associated integer 648 if isinstance(example_data, str): --> 649 example_data = self.str2int(example_data) 650 651 # Allowing -1 to mean no label. ~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values) 605 if value not in self._str2int: 606 value = value.strip() --> 607 output.append(self._str2int[str(value)]) 608 else: 609 # No names provided, try to integerize KeyError: '_'
https://api.github.com/repos/huggingface/datasets
null
832,596,228
https://api.github.com/repos/huggingface/datasets/issues/2061/comments
MDU6SXNzdWU4MzI1OTYyMjg=
null
2,061
https://api.github.com/repos/huggingface/datasets/issues/2061/events
false
closed
2021-03-16T09:23:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/2060
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
https://github.com/huggingface/datasets/pull/2060
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" } ]
false
2023-09-24T09:52:57Z
2021-10-13T09:09:03Z
null
[ "I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate 👀 \r\n\r\nI'm not familiar with the caching you describe for `.map`, I'll look it up.", "turns out the multi proc issue is also on master, I won't fix it in this PR but opened #2071 to track the problem.", "tracemalloc outputs from this script:\r\n\r\n```python\r\nimport logging\r\nimport sys\r\nimport time\r\nimport tracemalloc\r\n\r\nfrom datasets import load_dataset, set_caching_enabled\r\n\r\n\r\nif __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n\r\n tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n\r\n now = time.time()\r\n try:\r\n snapshot1 = tracemalloc.take_snapshot()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n exit(1)\r\n snapshot2 = tracemalloc.take_snapshot()\r\n tracemalloc.stop()\r\n elapsed = time.time() - now\r\n\r\n print(elapsed)\r\n top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n\r\n print(\"[ Top 10 differences ]\")\r\n for stat in top_stats[:10]:\r\n print(stat)\r\n\r\n```\r\n\r\n\r\nThis branch:\r\n\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 
0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 10:23:20.051255: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\r\n 2021-03-23 10:23:20.051304: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|████████████████████████████████████| 74005/74005 [12:16<00:00, 100.54ba/s]\r\n 815.6356580257416\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326226 (+307928), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7643 KiB (+7553 KiB), count=26372 (+25473), average=297 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3428 (+3384), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n 
/home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=998 KiB (+487 KiB), count=9551 (+4517), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nOn master:\r\n```\r\n ssh://theo@35.205.12.130:22/home/theo/.local/share/miniconda3/envs/datasets/bin/python -u benchmark_filter.py 1\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): s3.amazonaws.com:443\r\n DEBUG:urllib3.connectionpool:https://s3.amazonaws.com:443 \"HEAD /datasets.huggingface.co/datasets/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/bookcorpus.py HTTP/1.1\" 200 0\r\n DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443\r\n DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 \"HEAD /huggingface/datasets/master/datasets/bookcorpus/dataset_infos.json HTTP/1.1\" 200 0\r\n WARNING:datasets.builder:Reusing dataset bookcorpus (/home/theo/.cache/huggingface/datasets/bookcorpus/plain_text/1.0.0/af844be26c089fb64810e9f2cd841954fd8bd596d6ddd26326e4c70e2b8c96fc)\r\n 0%| | 0/74005 [00:00<?, ?ba/s]2021-03-23 12:26:47.219622: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: 
cannot open shared object file: No such file or directory\r\n 2021-03-23 12:26:47.219669: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.\r\n DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.\r\n 100%|███████████████████████████████████| 74005/74005 [1:02:17<00:00, 19.80ba/s]\r\n 3738.870892047882\r\n [ Top 10 differences ]\r\n <frozen importlib._bootstrap_external>:580: size=38.0 MiB (+33.7 MiB), count=326221 (+307919), average=122 B\r\n <frozen importlib._bootstrap>:219: size=7648 KiB (+7557 KiB), count=26455 (+25555), average=296 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/torch/__init__.py:427: size=1291 KiB (+1291 KiB), count=5924 (+5924), average=223 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/abc.py:85: size=1039 KiB (+1026 KiB), count=3429 (+3385), average=310 B\r\n <frozen importlib._bootstrap_external>:64: size=917 KiB (+891 KiB), count=5300 (+5132), average=177 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/collections/__init__.py:456: size=720 KiB (+709 KiB), count=3403 (+3349), average=217 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_export.py:346: size=607 KiB (+607 KiB), count=3962 (+3962), average=157 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/linecache.py:137: size=1000 KiB (+489 KiB), count=9569 (+4535), average=107 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/tf_decorator.py:241: size=367 KiB (+367 KiB), count=5225 (+5225), average=72 B\r\n /home/theo/.local/share/miniconda3/envs/datasets/lib/python3.8/site-packages/tensorflow/python/util/decorator_utils.py:114: size=359 KiB (+359 KiB), count=330 (+330), average=1114 B\r\n```\r\n\r\nI'm not 
concluding much, it seems nothing is really happening to memory on `pyarrow::Table.filter`? ", "Cool ! Maybe it increases the memory a bit but what's brought in memory is not the resulting Table but something else (not sure what though).\r\nWhat's the length of the resulting dataset ?\r\nYou can also take a look at `pyarrow.total_allocated_memory()` to show how much memory is being used by pyarrow", "```diff\r\ndiff --git a/benchmarks/benchmark_filter.py b/benchmarks/benchmark_filter.py\r\nindex 4b9efd4e..a862c204 100644\r\n--- a/benchmarks/benchmark_filter.py\r\n+++ b/benchmarks/benchmark_filter.py\r\n@@ -1,6 +1,9 @@\r\n import logging\r\n import sys\r\n import time\r\n+import tracemalloc\r\n+\r\n+import pyarrow as pa\r\n \r\n from datasets import load_dataset, set_caching_enabled\r\n \r\n@@ -9,13 +12,28 @@ if __name__ == \"__main__\":\r\n set_caching_enabled(False)\r\n logging.basicConfig(level=logging.DEBUG)\r\n \r\n+ tracemalloc.start()\r\n bc = load_dataset(\"bookcorpus\")\r\n \r\n now = time.time()\r\n try:\r\n+ snapshot1 = tracemalloc.take_snapshot()\r\n+ pamem1 = pa.total_allocated_bytes()\r\n bc[\"train\"].filter(lambda x: len(x[\"text\"]) < 64, num_proc=int(sys.argv[1]))\r\n+ pamem2 = pa.total_allocated_bytes()\r\n+ snapshot2 = tracemalloc.take_snapshot()\r\n except Exception as e:\r\n print(f\"cancelled: {e}\")\r\n+ exit(1)\r\n+ tracemalloc.stop()\r\n elapsed = time.time() - now\r\n \r\n print(elapsed)\r\n+ top_stats = snapshot2.compare_to(snapshot1, \"lineno\")\r\n+\r\n+ print(\"[ Top 10 differences ]\")\r\n+ for stat in top_stats[:10]:\r\n+ print(stat)\r\n+\r\n+ print(\"[ pyarrow reporting ]\")\r\n+ print(f\"before: ({pamem1}) after: ({pamem2})\")\r\n```\r\n\r\nthis yields 0-0, does not seem like a good tool 😛 and the documentation is [quite mysterious.](https://arrow.apache.org/docs/python/generated/pyarrow.total_allocated_bytes.html)", "Personally if I use your script to benchmark on this branch\r\n```python\r\nbc = load_dataset(\"bookcorpus\", 
split=\"train[:1%]\")\r\nbc = bc.filter(lambda x: len(x[\"text\"]) < 64)\r\n```\r\n\r\nthen I get\r\n```\r\n[ pyarrow reporting ]\r\nbefore: (0) after: (15300672)\r\n```\r\n\r\nMaybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n```python\r\nbc[\"train\"] = bc[\"train\"].filter(...)\r\n```\r\nCan you try again on your side just to make sure ?\r\n\r\nEven if the documentation doesn't say much, `pa.total_allocated_bytes` if pretty useful, and also very consistent.\r\nIt tracks the number of bytes used for arrow data.", "> Maybe you got 0-0 because the filter output is directly garbage collected, since you didn't do\r\n> \r\n> ```python\r\n> bc[\"train\"] = bc[\"train\"].filter(...)\r\n> ```\r\nNice catch! I get 1.74GB for this branch", "Looks like we may need to write the filtered table on the disk then.\r\n\r\nThe other option is to slice the table to keep only the good rows and concatenate them but this is too slow at the moment since slicing is O(n) until #1803 is fixed. I'll work on this issue this afternoon", "From investigation it looks like the lib's `Table.filter` cannot send its output to memorymap, asked a question on the mailing list, see [here](https://lists.apache.org/thread.html/r8cd8591ce83a967eb0097a7f31785ac2f3ee95ea371c8c5beb0720ad%40%3Cuser.arrow.apache.org%3E)", "closing in favor of #2836 " ]
null
[]
Filtering refactor
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2060/timeline
fix https://github.com/huggingface/datasets/issues/2032 benchmarking is somewhat inconclusive, currently running on `book_corpus` with: ```python bc = load_dataset("bookcorpus") now = time.time() bc.filter(lambda x: len(x["text"]) < 64) elapsed = time.time() - now print(elapsed) ``` this branch does it in 233 seconds, master in 1409 seconds.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2060.diff", "html_url": "https://github.com/huggingface/datasets/pull/2060", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2060.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2060" }
832,588,591
https://api.github.com/repos/huggingface/datasets/issues/2060/comments
MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx
null
2,060
https://api.github.com/repos/huggingface/datasets/issues/2060/events
true
closed
2021-03-16T09:12:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2059
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4", "events_url": "https://api.github.com/users/ekdnam/events{/privacy}", "followers_url": "https://api.github.com/users/ekdnam/followers", "following_url": "https://api.github.com/users/ekdnam/following{/other_user}", "gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ekdnam", "id": 40426312, "login": "ekdnam", "node_id": "MDQ6VXNlcjQwNDI2MzEy", "organizations_url": "https://api.github.com/users/ekdnam/orgs", "received_events_url": "https://api.github.com/users/ekdnam/received_events", "repos_url": "https://api.github.com/users/ekdnam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions", "type": "User", "url": "https://api.github.com/users/ekdnam" }
https://github.com/huggingface/datasets/issues/2059
[]
false
2021-03-16T18:00:31Z
2021-03-16T18:00:07Z
null
[ "@skyprince999 as you authored the PR for this dataset, any comments?", "This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)" ]
completed
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
Error while following docs to load the `ted_talks_iwslt` dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error attached below. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-7dcc67154ef9> in <module>() ----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") 4 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 730 hash=hash, 731 features=features, --> 732 **config_kwargs, 733 ) 734 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs) 927 928 def __init__(self, *args, writer_batch_size=None, **kwargs): --> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs) 930 # Batch size used by the ArrowWriter 931 # It defines the number of samples that are kept in memory before writing them /usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs) 241 name, 242 custom_features=features, --> 243 **config_kwargs, 244 ) 245 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION: 338 config_kwargs["version"] = self.VERSION --> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs) 340 341 # otherwise use the config_kwargs to overwrite the attributes 
/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs) 219 description=description, 220 version=datasets.Version("1.1.0", ""), --> 221 **kwargs, 222 ) 223 TypeError: __init__() got multiple values for keyword argument 'version' ``` How to resolve this? PS: Thanks a lot @huggingface team for creating this great library!
https://api.github.com/repos/huggingface/datasets
null
832,579,156
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
MDU6SXNzdWU4MzI1NzkxNTY=
null
2,059
https://api.github.com/repos/huggingface/datasets/issues/2059/events
false
closed
2021-03-15T20:18:47Z
null
https://api.github.com/repos/huggingface/datasets/issues/2058
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abarbosa94", "id": 6608232, "login": "abarbosa94", "node_id": "MDQ6VXNlcjY2MDgyMzI=", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "repos_url": "https://api.github.com/users/abarbosa94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "type": "User", "url": "https://api.github.com/users/abarbosa94" }
https://github.com/huggingface/datasets/issues/2058
[]
false
2023-07-25T16:47:40Z
2023-07-25T16:47:40Z
null
[ "Hi! You can either save the TF dataset to one of the formats supported by datasets (`parquet`, `csv`, `json`, ...) or pass a generator function to `Dataset.from_generator` that yields its examples." ]
completed
[]
Is it possible to convert a `tfds` to HuggingFace `dataset`?
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2058/timeline
I was having some weird bugs with the `C4` dataset version of HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to HuggingFace dataset format :) I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` in the future if you think that it would be useful. Thanks!
https://api.github.com/repos/huggingface/datasets
null
832,159,844
https://api.github.com/repos/huggingface/datasets/issues/2058/comments
MDU6SXNzdWU4MzIxNTk4NDQ=
null
2,058
https://api.github.com/repos/huggingface/datasets/issues/2058/events
false
closed
2021-03-15T19:22:57Z
null
https://api.github.com/repos/huggingface/datasets/issues/2057
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2057/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4", "events_url": "https://api.github.com/users/matt-peters/events{/privacy}", "followers_url": "https://api.github.com/users/matt-peters/followers", "following_url": "https://api.github.com/users/matt-peters/following{/other_user}", "gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/matt-peters", "id": 619844, "login": "matt-peters", "node_id": "MDQ6VXNlcjYxOTg0NA==", "organizations_url": "https://api.github.com/users/matt-peters/orgs", "received_events_url": "https://api.github.com/users/matt-peters/received_events", "repos_url": "https://api.github.com/users/matt-peters/repos", "site_admin": false, "starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions", "type": "User", "url": "https://api.github.com/users/matt-peters" }
https://github.com/huggingface/datasets/pull/2057
[]
false
2021-03-16T17:06:28Z
2021-03-16T17:06:28Z
null
[]
null
[]
update link to ZEST dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2057/timeline
Updating the link as the original one is no longer working.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2057.diff", "html_url": "https://github.com/huggingface/datasets/pull/2057", "merged_at": "2021-03-16T17:06:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/2057.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2057" }
832,120,522
https://api.github.com/repos/huggingface/datasets/issues/2057/comments
MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0
null
2,057
https://api.github.com/repos/huggingface/datasets/issues/2057/events
true
closed
2021-03-15T11:32:42Z
null
https://api.github.com/repos/huggingface/datasets/issues/2056
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2056
[]
false
2021-03-16T15:49:00Z
2021-03-16T15:48:59Z
null
[ "@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ", "Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import MT5TokenizerFast\r\n\r\ndef get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer):\r\n datasets = load_dataset(dataset_name, dataset_config_name, script_version=\"master\")\r\n column_names = datasets[\"train\"].column_names\r\n text_column_name = \"translation\"\r\n def process_dataset(datasets):\r\n def process_function(examples):\r\n lang = \"fr\"\r\n return {\"src_texts\": [example[lang] for example in examples[text_column_name]]}\r\n datasets = datasets.map(\r\n process_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n )\r\n return datasets\r\n datasets = process_dataset(datasets)\r\n text_column_name = \"src_texts\"\r\n column_names = [\"src_texts\"]\r\n def tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n tokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True\r\n )\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer_kwargs = {\r\n \"cache_dir\": None,\r\n \"use_fast\": True,\r\n \"revision\": \"main\",\r\n \"use_auth_token\": None\r\n }\r\n tokenizer = MT5TokenizerFast.from_pretrained(\"google/mt5-small\", **tokenizer_kwargs)\r\n get_tokenized_dataset(dataset_name=\"opus100\", dataset_config_name=\"en-fr\", tokenizer=tokenizer)\r\n~ \r\n```", "as per https://github.com/huggingface/tokenizers/issues/626 this looks like to be the tokenizer bug, I therefore, reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one." ]
completed
[]
issue with opus100/en-fr dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2056/timeline
Hi I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this? Thanks a lot @lhoestq for your help in advance. ` thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace 63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s] Traceback (most recent call last): File "run_mlm.py", line 550, in <module> main() File "run_mlm.py", line 412, in main in zip(data_args.dataset_name, data_args.dataset_config_name)] File "run_mlm.py", line 411, in <listcomp> logger) for dataset_name, dataset_config_name\ File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset load_from_cache_file=not data_args.overwrite_cache, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp> for k, dataset in self.items() File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map update_data=update_data, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper out = func(self, *args, **kwargs) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single batch, indices, 
check_same_num_examples=len(self.list_indexes()) > 0, offset=offset File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function return tokenizer(examples[text_column_name], return_special_tokens_mask=True) File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__ **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus **kwargs, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus is_pretokenized=is_split_into_words, pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617 `
https://api.github.com/repos/huggingface/datasets
null
831,718,397
https://api.github.com/repos/huggingface/datasets/issues/2056/comments
MDU6SXNzdWU4MzE3MTgzOTc=
null
2,056
https://api.github.com/repos/huggingface/datasets/issues/2056/events
false
closed
2021-03-15T10:50:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/2055
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://github.com/huggingface/datasets/issues/2055
[]
false
2021-03-22T04:06:17Z
2021-03-22T04:06:17Z
null
[ "Hi\r\nYou can rename the arrow file and update the name in `state.json`", "I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ", "I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.", "Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. \r\n\r\nIs this clear?" ]
completed
[]
is there a way to override a dataset object saved with save_to_disk?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
At the moment when I use save_to_disk, it uses the arbitrary name for the arrow file. Is there a way to override such an object?
https://api.github.com/repos/huggingface/datasets
null
831,684,312
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
MDU6SXNzdWU4MzE2ODQzMTI=
null
2,055
https://api.github.com/repos/huggingface/datasets/issues/2055/events
false
closed
2021-03-15T09:11:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/2054
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani" }
https://github.com/huggingface/datasets/issues/2054
[]
false
2021-05-03T09:30:24Z
2021-05-03T09:30:24Z
null
[ "The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.", "This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)", "Thanks @lhoestq and @matt-peters ", "I am closing this issue since its fixed!" ]
completed
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
Could not find file for ZEST dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2054/timeline
I am trying to use zest dataset from Allen AI using below code in colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` I am getting the following error, ``` Using custom data configuration default Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca... --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-6-18dbbc1a4b8a> in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("zest") 9 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 612 ) 613 elif response is not None and response.status_code == 404: --> 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 616 raise ConnectionError("Couldn't reach {}".format(url)) FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip ```
https://api.github.com/repos/huggingface/datasets
null
831,597,665
https://api.github.com/repos/huggingface/datasets/issues/2054/comments
MDU6SXNzdWU4MzE1OTc2NjU=
null
2,054
https://api.github.com/repos/huggingface/datasets/issues/2054/events
false
closed
2021-03-14T13:04:39Z
null
https://api.github.com/repos/huggingface/datasets/issues/2053
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2053/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/2053
[]
false
2021-03-29T12:41:48Z
2021-03-29T12:41:48Z
null
[ "Hi @lhoestq,\r\n\r\nShould I remove the 160 configurations? Is it too much?\r\n\r\nEDIT:\r\nCan you also check the task category? I'm not sure if there is an appropriate tag for the same.", "Thanks for the changes !\r\n\r\n> Should I remove the 160 configurations? Is it too much?\r\n\r\nYea 160 configuration is a lot.\r\nMaybe this dataset can work with parameters `type` and `task_no` ?\r\nYou can just remove the configuration in BUILDER_CONFIGS to only keep a few ones.\r\nAlso feel free to add an example in the dataset card of how to load the other configurations\r\n```\r\nload_dataset(\"babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nfor example, and with a list of the possible combinations.\r\n\r\n> Can you also check the task category? I'm not sure if there is an appropriate tag for the same.\r\n\r\nIt looks appropriate, thanks :)", "Hi @lhoestq \r\n\r\nI'm unable to test it locally using:\r\n```python\r\nload_dataset(\"datasets/babi_qa\", type=\"hn\", task_no=\"qa1\")\r\n```\r\nIt raises an error:\r\n```python\r\nTypeError: __init__() got an unexpected keyword argument 'type'\r\n```\r\nWill this be possible only after merging? Or am I missing something here?", "Can you try adding this class attribute to `BabiQa` ?\r\n```python\r\nBUILDER_CONFIG_CLASS = BabiQaConfig\r\n```\r\nThis should fix the TypeError issue you got", "My bad. Thanks a lot!", "Hi @lhoestq \r\n\r\nI have added the changes. Only the \"qa1\" task for each category is included. Also, I haven't removed the size categories and other description because I think it will still be useful. I have updated the line in README showing the example.\r\n\r\nThanks,\r\nGunjan", "Hi @lhoestq,\r\n\r\nDoes this look good now?" ]
null
[]
Add bAbI QA tasks
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2053/timeline
- **Name:** *The (20) QA bAbI tasks* - **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets,so that researchers can identify (and then rectify) the failings of their systems.* - **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf) - **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/) - **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research. **Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done. Thanks :) ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2053.diff", "html_url": "https://github.com/huggingface/datasets/pull/2053", "merged_at": "2021-03-29T12:41:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2053.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2053" }
831,151,728
https://api.github.com/repos/huggingface/datasets/issues/2053/comments
MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2
null
2,053
https://api.github.com/repos/huggingface/datasets/issues/2053/events
true
closed
2021-03-14T11:43:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/2052
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4", "events_url": "https://api.github.com/users/fermaat/events{/privacy}", "followers_url": "https://api.github.com/users/fermaat/followers", "following_url": "https://api.github.com/users/fermaat/following{/other_user}", "gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fermaat", "id": 7583522, "login": "fermaat", "node_id": "MDQ6VXNlcjc1ODM1MjI=", "organizations_url": "https://api.github.com/users/fermaat/orgs", "received_events_url": "https://api.github.com/users/fermaat/received_events", "repos_url": "https://api.github.com/users/fermaat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fermaat/subscriptions", "type": "User", "url": "https://api.github.com/users/fermaat" }
https://github.com/huggingface/datasets/issues/2052
[]
false
2021-03-15T10:37:16Z
2021-03-15T10:37:16Z
null
[ "Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```", "Ty!" ]
completed
[]
Timit_asr dataset repeats examples
NONE
https://api.github.com/repos/huggingface/datasets/issues/2052/timeline
Summary When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same Steps to reproduce As an example, on this code there is the text from the training part: Code snippet: ``` from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") timit['train']['text'] #['Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', # 'Would such an act of refusal be useful?', ``` The same behavior happens for other columns Expected behavior: Different info on the actual timit_asr dataset Actual behavior: When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different Debug info Streamlit version: (get it with $ streamlit version) Python version: Python 3.6.12 Using Conda? PipEnv? PyEnv? Pex? Using pip OS version: Centos-release-7-9.2009.1.el7.centos.x86_64 Additional information You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
https://api.github.com/repos/huggingface/datasets
null
831,135,704
https://api.github.com/repos/huggingface/datasets/issues/2052/comments
MDU6SXNzdWU4MzExMzU3MDQ=
null
2,052
https://api.github.com/repos/huggingface/datasets/issues/2052/events
false
closed
2021-03-14T00:01:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/2051
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2051/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2051/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/2051
[]
false
2021-03-19T11:15:44Z
2021-03-19T10:31:59Z
null
[ "Hi @lhoestq,\r\n\r\nI have added changes from review.", "Thanks for approving :)" ]
null
[]
Add MDD Dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2051/timeline
- **Name:** *MDD Dataset* - **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb. - **Paper:** [arXiv](https://arxiv.org/pdf/1511.06931.pdf) - **Data:** https://research.fb.com/downloads/babi/ - **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's "bAbI project". ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` - [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class. - [x] Generate the metadata file `dataset_infos.json` for all configurations - [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs - [x] Both tests for the real data and the dummy data pass. **Note**: I haven't included the following from the data files: `entities` (the file containing list of all entities in the first three subtasks), `dictionary`(the dictionary of words they use in their models), `movie_kb`(contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them?
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2051.diff", "html_url": "https://github.com/huggingface/datasets/pull/2051", "merged_at": "2021-03-19T10:31:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2051.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2051" }
831,027,021
https://api.github.com/repos/huggingface/datasets/issues/2051/comments
MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1
null
2,051
https://api.github.com/repos/huggingface/datasets/issues/2051/events
true
closed
2021-03-13T22:01:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/2050
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4", "events_url": "https://api.github.com/users/Omarnabk/events{/privacy}", "followers_url": "https://api.github.com/users/Omarnabk/followers", "following_url": "https://api.github.com/users/Omarnabk/following{/other_user}", "gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Omarnabk", "id": 72882909, "login": "Omarnabk", "node_id": "MDQ6VXNlcjcyODgyOTA5", "organizations_url": "https://api.github.com/users/Omarnabk/orgs", "received_events_url": "https://api.github.com/users/Omarnabk/received_events", "repos_url": "https://api.github.com/users/Omarnabk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions", "type": "User", "url": "https://api.github.com/users/Omarnabk" }
https://github.com/huggingface/datasets/issues/2050
[]
false
2021-03-15T09:27:28Z
2021-03-15T09:27:28Z
null
[ "@lhoestq - We could simply use the \"general\" json dataset for this no? ", "Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.", "Many thanks! that was what I was looking for. " ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
Build custom dataset to fine-tune Wav2Vec2
NONE
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
https://api.github.com/repos/huggingface/datasets
null
831,006,551
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
MDU6SXNzdWU4MzEwMDY1NTE=
null
2,050
https://api.github.com/repos/huggingface/datasets/issues/2050/events
false
closed
2021-03-13T19:51:42Z
null
https://api.github.com/repos/huggingface/datasets/issues/2049
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2049/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/2049
[]
false
2021-03-16T15:47:46Z
2021-03-16T15:47:46Z
null
[ "LGTM, thanks for fixing." ]
null
[]
Fix text-classification tags
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2049/timeline
There are different tags for text classification right now: `text-classification` and `text_classification`: ![image](https://user-images.githubusercontent.com/29076344/111042457-856bdf00-8463-11eb-93c9-50a30106a1a1.png). This PR fixes it.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2049.diff", "html_url": "https://github.com/huggingface/datasets/pull/2049", "merged_at": "2021-03-16T15:47:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2049.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2049" }
830,978,687
https://api.github.com/repos/huggingface/datasets/issues/2049/comments
MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0
null
2,049
https://api.github.com/repos/huggingface/datasets/issues/2049/events
true
closed
2021-03-13T18:03:32Z
null
https://api.github.com/repos/huggingface/datasets/issues/2048
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://github.com/huggingface/datasets/issues/2048
[]
false
2022-04-01T15:27:10Z
2022-04-01T15:27:10Z
null
[]
completed
[]
github is not always available - probably need a back up
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
Yesterday morning github wasn't working: ``` :/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py--2021-03-12 18:35:59-- https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 500 Internal Server Error 2021-03-12 18:36:11 ERROR 500: Internal Server Error. ``` Suggestion: have a failover system and replicate the data on another system and reach there if gh isn't reachable? perhaps gh can be a master and the replicate a slave - so there is only one true source.
https://api.github.com/repos/huggingface/datasets
null
830,953,431
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
MDU6SXNzdWU4MzA5NTM0MzE=
null
2,048
https://api.github.com/repos/huggingface/datasets/issues/2048/events
false
closed
2021-03-12T23:02:55Z
null
https://api.github.com/repos/huggingface/datasets/issues/2047
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "events_url": "https://api.github.com/users/eusip/events{/privacy}", "followers_url": "https://api.github.com/users/eusip/followers", "following_url": "https://api.github.com/users/eusip/following{/other_user}", "gists_url": "https://api.github.com/users/eusip/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eusip", "id": 1551356, "login": "eusip", "node_id": "MDQ6VXNlcjE1NTEzNTY=", "organizations_url": "https://api.github.com/users/eusip/orgs", "received_events_url": "https://api.github.com/users/eusip/received_events", "repos_url": "https://api.github.com/users/eusip/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eusip/subscriptions", "type": "User", "url": "https://api.github.com/users/eusip" }
https://github.com/huggingface/datasets/pull/2047
[]
false
2021-03-23T10:36:34Z
2021-03-19T10:47:13Z
null
[ "Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)", "I will run isort again. Hopefully it resolves the current check_code_quality test failure.", "Once the review period is over, feel free to open a PR to add all the missing information ;)", "Hi! I will follow up right now with one more pull request as I have new anonymous citation information to include." ]
null
[]
Multilingual dIalogAct benchMark (miam)
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2047/timeline
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is assocated with a publication currently under review. We will update the dataset with full citations once the review period is over.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2047.diff", "html_url": "https://github.com/huggingface/datasets/pull/2047", "merged_at": "2021-03-19T10:47:13Z", "patch_url": "https://github.com/huggingface/datasets/pull/2047.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2047" }
830,626,430
https://api.github.com/repos/huggingface/datasets/issues/2047/comments
MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3
null
2,047
https://api.github.com/repos/huggingface/datasets/issues/2047/events
true
closed
2021-03-12T20:27:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/2046
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://github.com/huggingface/datasets/issues/2046
[]
false
2021-03-24T22:29:11Z
2021-03-24T22:29:11Z
null
[ "I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowledge_dataset` as compared to this script? Are there other programs running (maybe for rank>0)?", "Hi,\r\n I am running add_faiss_index during the training process of the RAG from the master process (rank 0). At that exact moment I do not run any other process, since I do it every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process in use_own_knowledge_dataset.py vs. in the training loop. The training loop version takes 40 mins more. That might be natural, right? \r\n \r\n \r\n At the moment it uses around 40 cores of a 96-core machine (I am fine-tuning the entire process). ", "Can you try to set the number of threads manually?\r\nIf you set the same number of threads for both `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of threads in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls", "Ok, I will report the details soon. I am the first one on the list and currently add_index is being computed for the 3rd time in the loop. Actually it seems like the time taken to complete each iteration is the same, but around 1 hour more compared to running it without the training loop. At the moment this takes 5 hrs and 30 mins. If there is any way to speed up the process, an end-to-end RAG would be perfect. I will also try out different thread numbers. 
\r\n\r\n![image](https://user-images.githubusercontent.com/16892570/111453464-798c5f80-8778-11eb-86d0-19d212f58e38.png)\r\n", "@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it only when the dataset can fit into GPU memory. Although this might work, in the long term it is not that practical for me.\r\n\r\nhttps://github.com/matsui528/faiss_tips", "@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and asked a few of my friends to run their programs on the HPC machine at the same time. \r\n\r\n Once many other processes are running, the add_index function naturally slows down. So basically the speed of add_index depends entirely on the number of CPU processes. Then I set the number of threads as you mentioned and actually got the same time for RAG training and the independent run. So you are correct! :) \r\n\r\n \r\n Then I opened this [issue in the Faiss repository](https://github.com/facebookresearch/faiss/issues/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?", "It's a matter of tradeoffs.\r\nHNSW is fast at query time but takes some time to build.\r\nA flat index is fast to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HNSW).\r\n\r\nNote that for an IVF index you would need an `nprobe` parameter (the number of cells to visit for one query; there are `nlist` in total) that is not too small, in order to have good retrieval accuracy, but not too big, otherwise the queries will take too much time. From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. 
Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe to around 1/4 of nlist gives really good retrieval accuracy, and there's no need for a higher value (or you would need to brute-force in order to see a difference).", "@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking: what would be a good nlist value for 30 million embeddings?", "When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)", "Thanks a lot. I was lost between calling the index class directly and using faiss_index_factory. ", "@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, an IVF index suits my case well and is a lot faster. Using it can make the entire end-to-end trainable RAG a lot faster. So I will close this issue. Will do the final PR soon. " ]
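The sizing rules quoted in the comments above (nlist between `4*sqrt(n)` and `16*sqrt(n)`, nprobe around a quarter of nlist) can be captured in a small helper. This is an illustrative sketch of the guideline arithmetic only — the function name and the choice of the range midpoint are assumptions, not part of faiss or `datasets`:

```python
import math

def suggest_ivf_params(num_vectors: int) -> dict:
    """Suggest IVF index parameters from the Faiss wiki sizing guidelines.

    nlist should fall between 4*sqrt(n) and 16*sqrt(n); nprobe around
    nlist/4 usually gives good retrieval accuracy without brute-forcing.
    """
    root = math.sqrt(num_vectors)
    nlist = int(8 * root)  # midpoint of the recommended 4x-16x range (an arbitrary pick)
    return {
        "nlist_min": int(4 * root),
        "nlist_max": int(16 * root),
        "nlist": nlist,
        "nprobe": max(1, nlist // 4),
    }

# The 30-million-embedding case discussed above:
params = suggest_ivf_params(30_000_000)
print(params)
```

For 30 million vectors this lands nlist in the low tens of thousands, which matches the scale where the comments say HNSW starts to struggle.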
completed
[]
add_faiss_index gets very slow when doing it iteratively
NONE
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
As the code below suggests, I want to run add_faiss_index on every nth iteration of the training loop. I have 7.2 million documents. Usually it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now it usually takes 5 hrs. Is this normal? Is there any way to make this process faster? @lhoestq

```python
def training_step(self, batch, batch_idx) -> Dict:
    if (not batch_idx == 0) and (batch_idx % 5 == 0):
        print("******************************************************")
        ctx_encoder = self.trainer.model.module.module.model.rag.ctx_encoder
        model_copy = type(ctx_encoder)(self.config_dpr)  # get a new instance; this will be loaded on the CPU
        model_copy.load_state_dict(ctx_encoder.state_dict())  # copy weights and stuff

        list_of_gpus = ['cuda:2', 'cuda:3']
        c_dir = '/custom/cache/dir'

        kb_dataset = load_dataset(
            "csv",
            data_files=[self.custom_config.csv_path],
            split="train",
            delimiter="\t",
            column_names=["title", "text"],
            cache_dir=c_dir,
        )
        print(kb_dataset)

        n = len(list_of_gpus)  # number of dedicated GPUs
        kb_list = [kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
        # kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')

        print(self.trainer.global_rank)
        dataset_shards = self.re_encode_kb(
            model_copy.to(device=list_of_gpus[self.trainer.global_rank]),
            kb_list[self.trainer.global_rank],
        )
        output = [None for _ in list_of_gpus]
        # self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
        dist.all_gather_object(output, dataset_shards)

        # creation and re-initialization of the new index
        if self.trainer.global_rank == 0:  # saving will be done in the main process
            combined_dataset = concatenate_datasets(output)
            passages_path = self.config.passages_path
            logger.info("saving the dataset")
            # combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
            combined_dataset.save_to_disk(passages_path)

            logger.info("Add faiss index to the dataset that consists of embeddings")
            embedding_dataset = combined_dataset
            index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
            embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
https://api.github.com/repos/huggingface/datasets
null
830,423,033
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
MDU6SXNzdWU4MzA0MjMwMzM=
null
2,046
https://api.github.com/repos/huggingface/datasets/issues/2046/events
false
closed
2021-03-12T18:26:47Z
null
https://api.github.com/repos/huggingface/datasets/issues/2045
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2045
[]
false
2021-03-16T14:48:05Z
2021-03-16T14:35:05Z
null
[ "Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ", "I don't know how to trigger it manually, but an empty commit should do the job" ]
null
[]
Preserve column ordering in Dataset.rename_column
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2045/timeline
Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:

```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
    features: ['sentences', 'label'],
    num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
    features: ['label', 'text'],
    num_rows: 2
})
```

This PR fixes this.
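The ordering bug described in this PR is easy to reproduce with plain dicts. The helpers below are hypothetical illustrations, not the actual `datasets` implementation: the naive pop-and-insert approach moves the renamed key to the end, while rebuilding the mapping in place keeps the original column order.

```python
def rename_key_naive(d, old, new):
    # Buggy approach: pop + insert appends the renamed key at the end,
    # so the column order changes.
    d = dict(d)
    d[new] = d.pop(old)
    return d

def rename_key_preserving_order(d, old, new):
    # Fixed approach: rebuild the dict so the renamed key keeps its position.
    return {(new if k == old else k): v for k, v in d.items()}

cols = {"sentences": ["s1", "s2"], "label": [0, 1]}
print(list(rename_key_naive(cols, "sentences", "text")))             # order changes
print(list(rename_key_preserving_order(cols, "sentences", "text")))  # order kept
```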
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2045.diff", "html_url": "https://github.com/huggingface/datasets/pull/2045", "merged_at": "2021-03-16T14:35:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2045.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2045" }
830,351,527
https://api.github.com/repos/huggingface/datasets/issues/2045/comments
MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz
null
2,045
https://api.github.com/repos/huggingface/datasets/issues/2045/events
true
closed
2021-03-12T18:04:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2044
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2044/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani" }
https://github.com/huggingface/datasets/pull/2044
[]
false
2021-03-19T11:10:13Z
2021-03-19T10:29:15Z
null
[ "Hi @lhoestq,\r\n\r\nI have added changes from the review.", "Thanks for approving @lhoestq " ]
null
[]
Add CBT dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2044/timeline
This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301). Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in the YAML tags. The dummy files have one example each, as the examples are slightly big. For the `raw` dataset, I just used the top few lines, because the examples are entire books and would take up a lot of space. Let me know in case of any issues.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2044.diff", "html_url": "https://github.com/huggingface/datasets/pull/2044", "merged_at": "2021-03-19T10:29:15Z", "patch_url": "https://github.com/huggingface/datasets/pull/2044.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2044" }
830,339,905
https://api.github.com/repos/huggingface/datasets/issues/2044/comments
MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1
null
2,044
https://api.github.com/repos/huggingface/datasets/issues/2044/events
true
closed
2021-03-12T16:35:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/2043
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2043
[]
false
2021-03-16T14:25:38Z
2021-03-16T14:05:05Z
null
[ "@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.", "Yes right ! I read it wrong.\r\nPerfect then" ]
null
[]
Support pickle protocol for dataset splits defined as ReadInstruction
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2043/timeline
Fixes #2022 (+ some style fixes)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2043.diff", "html_url": "https://github.com/huggingface/datasets/pull/2043", "merged_at": "2021-03-16T14:05:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2043.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2043" }
830,279,098
https://api.github.com/repos/huggingface/datasets/issues/2043/comments
MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz
null
2,043
https://api.github.com/repos/huggingface/datasets/issues/2043/events
true
closed
2021-03-12T14:49:52Z
null
https://api.github.com/repos/huggingface/datasets/issues/2042
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2042
[]
false
2021-03-12T15:04:23Z
2021-03-12T15:04:22Z
null
[]
null
[]
Fix arrow memory checks issue in tests
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2042/timeline
The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory. From my experiments, the tests fail only when the full test suite is run. This made me think that some arrow objects from other tests were not freeing their memory in time, causing the memory verifications in later tests to fail. Running the garbage collector before checking the arrow memory usage seems to fix this issue. I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc.
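A minimal sketch of the context-manager pattern this PR describes. The function name and the parameterized memory probe are assumptions made to keep the sketch self-contained — the real helper in `datasets` measures pyarrow's allocated bytes directly:

```python
import gc
from contextlib import contextmanager

@contextmanager
def assert_memory_increases(get_memory_bytes):
    """Assert that memory usage grows inside the `with` block.

    `get_memory_bytes` is any zero-argument callable returning current
    memory usage (pyarrow.total_allocated_bytes would be the real probe).
    """
    gc.collect()  # free lingering objects from earlier tests first
    before = get_memory_bytes()
    yield
    assert get_memory_bytes() > before, "expected memory to increase"
```

The `gc.collect()` call is the key fix: without it, arrow objects left over from earlier tests can release their memory mid-measurement and skew the before/after comparison.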
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2042.diff", "html_url": "https://github.com/huggingface/datasets/pull/2042", "merged_at": "2021-03-12T15:04:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/2042.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2042" }
830,190,276
https://api.github.com/repos/huggingface/datasets/issues/2042/comments
MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3
null
2,042
https://api.github.com/repos/huggingface/datasets/issues/2042/events
true
closed
2021-03-12T14:39:29Z
null
https://api.github.com/repos/huggingface/datasets/issues/2041
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2041/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/songfeng", "id": 2062185, "login": "songfeng", "node_id": "MDQ6VXNlcjIwNjIxODU=", "organizations_url": "https://api.github.com/users/songfeng/orgs", "received_events_url": "https://api.github.com/users/songfeng/received_events", "repos_url": "https://api.github.com/users/songfeng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "type": "User", "url": "https://api.github.com/users/songfeng" }
https://github.com/huggingface/datasets/pull/2041
[]
false
2021-03-16T11:09:20Z
2021-03-16T11:09:20Z
null
[]
null
[]
Doc2dial update data_infos and data_loaders
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2041/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2041.diff", "html_url": "https://github.com/huggingface/datasets/pull/2041", "merged_at": "2021-03-16T11:09:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/2041.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2041" }
830,180,803
https://api.github.com/repos/huggingface/datasets/issues/2041/comments
MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw
null
2,041
https://api.github.com/repos/huggingface/datasets/issues/2041/events
true
closed
2021-03-12T14:27:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/2040
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simonschoe", "id": 53626067, "login": "simonschoe", "node_id": "MDQ6VXNlcjUzNjI2MDY3", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "repos_url": "https://api.github.com/users/simonschoe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "type": "User", "url": "https://api.github.com/users/simonschoe" }
https://github.com/huggingface/datasets/issues/2040
[]
false
2021-08-04T18:00:43Z
2021-08-04T18:00:43Z
null
[ "Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.", "Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'", "In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```", "Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! " ]
completed
[]
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
NONE
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
Hi there, I am trying to concatenate two datasets that I've previously saved to disk via `save_to_disk()`, like so (note that both are saved as `DatasetDict` and `PATH_DATA_CLS_*` are `Path` objects):

```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```

This yields the following error:

```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```

I've been trying to solve this for quite some time now. Both `DatasetDict`s have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove column, rename column). I can't figure it out though...

`load_from_disk(PATH_DATA_CLS_A)['train']` yields:

```python
Dataset({
    features: ['labels', 'text'],
    num_rows: 785
})
```

`load_from_disk(PATH_DATA_CLS_B)['train']` yields:

```python
Dataset({
    features: ['labels', 'text'],
    num_rows: 3341
})
```
https://api.github.com/repos/huggingface/datasets
null
830,169,387
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
MDU6SXNzdWU4MzAxNjkzODc=
null
2,040
https://api.github.com/repos/huggingface/datasets/issues/2040/events
false
closed
2021-03-12T11:56:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/2039
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions" }
true
https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/songfeng", "id": 2062185, "login": "songfeng", "node_id": "MDQ6VXNlcjIwNjIxODU=", "organizations_url": "https://api.github.com/users/songfeng/orgs", "received_events_url": "https://api.github.com/users/songfeng/received_events", "repos_url": "https://api.github.com/users/songfeng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "type": "User", "url": "https://api.github.com/users/songfeng" }
https://github.com/huggingface/datasets/pull/2039
[]
false
2021-03-12T15:32:36Z
2021-03-12T15:32:36Z
null
[]
null
[]
Doc2dial rc
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2039/timeline
Added fix to handle the last turn that is a user turn.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2039.diff", "html_url": "https://github.com/huggingface/datasets/pull/2039", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2039.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2039" }
830,047,652
https://api.github.com/repos/huggingface/datasets/issues/2039/comments
MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3
null
2,039
https://api.github.com/repos/huggingface/datasets/issues/2039/events
true
closed
2021-03-12T11:41:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2038
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/songfeng", "id": 2062185, "login": "songfeng", "node_id": "MDQ6VXNlcjIwNjIxODU=", "organizations_url": "https://api.github.com/users/songfeng/orgs", "received_events_url": "https://api.github.com/users/songfeng/received_events", "repos_url": "https://api.github.com/users/songfeng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "type": "User", "url": "https://api.github.com/users/songfeng" }
https://github.com/huggingface/datasets/issues/2038
[]
false
2021-03-16T16:27:40Z
2021-03-16T16:27:40Z
null
[ "Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```", "Fixed by #2041, thanks again @songfeng !" ]
completed
[]
outdated dataset_infos.json might fail verifications
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) file is outdated. It makes the data loader fail when verifying the download checksums etc. Could you please update this file, or point me to how to update it myself? Thank you.
https://api.github.com/repos/huggingface/datasets
null
830,036,875
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
MDU6SXNzdWU4MzAwMzY4NzU=
null
2,038
https://api.github.com/repos/huggingface/datasets/issues/2038/events
false
closed
2021-03-12T09:22:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/2037
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/miyamonz", "id": 6331508, "login": "miyamonz", "node_id": "MDQ6VXNlcjYzMzE1MDg=", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "repos_url": "https://api.github.com/users/miyamonz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "type": "User", "url": "https://api.github.com/users/miyamonz" }
https://github.com/huggingface/datasets/pull/2037
[]
false
2021-03-23T06:08:16Z
2021-03-16T11:01:22Z
null
[ "The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it" ]
null
[]
Fix: Wikipedia - save memory by replacing root.clear with elem.clear
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2037/timeline
see: https://github.com/huggingface/datasets/issues/2031

What I did:
- replace root.clear with elem.clear
- remove the lines that get the root element
- $ make style
- $ make test (some tests required extra pip packages, so I installed them)

The test results on origin/master and my branch are the same, so I think the failure is not related to my modification, is it?

```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```

Is there anything else I should do?
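The memory effect behind this fix can be sketched with the standard library's `xml.etree.ElementTree.iterparse`: clearing each finished element (rather than repeatedly clearing the root) keeps the in-memory tree small while streaming a large XML dump. This is an illustrative sketch, not the actual wikipedia loader code; the `<page>` tag and the `count_pages` helper are assumptions for the demo:

```python
import io
import xml.etree.ElementTree as ET

def count_pages(xml_bytes: bytes) -> int:
    """Stream-parse an XML dump, clearing each element once processed."""
    count = 0
    for _event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=("end",)):
        if elem.tag == "page":
            count += 1
        elem.clear()  # drop this element's children so memory stays flat
    return count

sample = b"<mediawiki><page><title>A</title></page><page><title>B</title></page></mediawiki>"
print(count_pages(sample))  # -> 2
```

Per-element clearing is what lets the parser's memory use stay roughly constant regardless of dump size, which is the point of the root.clear -> elem.clear change.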
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2037.diff", "html_url": "https://github.com/huggingface/datasets/pull/2037", "merged_at": "2021-03-16T11:01:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/2037.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2037" }
829,919,685
https://api.github.com/repos/huggingface/datasets/issues/2037/comments
MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz
null
2,037
https://api.github.com/repos/huggingface/datasets/issues/2037/events
true
closed
2021-03-12T09:09:39Z
null
https://api.github.com/repos/huggingface/datasets/issues/2036
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Gpwner", "id": 19349207, "login": "Gpwner", "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "repos_url": "https://api.github.com/users/Gpwner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "type": "User", "url": "https://api.github.com/users/Gpwner" }
https://github.com/huggingface/datasets/issues/2036
[]
false
2021-03-15T08:45:02Z
2021-03-15T08:44:44Z
null
[ "Solved!" ]
completed
[]
Cannot load wikitext
NONE
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
when I execute these codes ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I got an error,any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py ```
https://api.github.com/repos/huggingface/datasets
null
829,909,258
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
MDU6SXNzdWU4Mjk5MDkyNTg=
null
2,036
https://api.github.com/repos/huggingface/datasets/issues/2036/events
false
open
2021-03-11T19:54:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2035
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2035
[]
false
2021-03-16T14:53:37Z
null
null
[ "Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only chance to be able training the models at scale and I am grateful for your help.\r\n\r\n", "Hi @dorost1234,\r\nTry installing this library first, `pip install 'apache-beam[gcp]' --use-feature=2020-resolver` followed by loading dataset like this using beam runner.\r\n\r\n`dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\n I also read in error stack trace that:\r\n\r\n> Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc.\r\n\r\nWorked perfectly fine after this (Ignore these warnings)\r\n\r\n![image](https://user-images.githubusercontent.com/19718818/110908410-c7e2ce00-8334-11eb-8d10-7354359e9ec3.png)\r\n\r\n", "For wikipedia dataset, looks like the files it's looking for are no longer available. For `bg`, I checked [here](https://dumps.wikimedia.org/bgwiki/). For this I think `dataset_infos.json` for this dataset has to made again? You'll have to load this dataset also using beam runner.\r\n\r\n", "Hello @dorost1234,\r\n\r\nIndeed, Wikipedia datasets need a lot of preprocessing and this is done using Apache Beam. That is the reason why it is required that you install Apache Beam in order to preform this preprocessing.\r\n\r\nFor some specific default parameters (English Wikipedia), Hugging Face has already preprocessed the dataset for you (and it is stored in the cloud). 
That is the reason why you do not get the error for English: the preprocessing is already done by HF and you just get the preprocessed dataset; Apache Beam is not required in that case.", "Hi\nI really appreciate if huggingface can kindly provide preprocessed\ndatasets, processing these datasets require sufficiently large resources\nand I do not have unfortunately access to, and perhaps many others too.\nthanks\n\nOn Fri, Mar 12, 2021 at 9:04 AM Albert Villanova del Moral <\n***@***.***> wrote:\n\n> Hello @dorost1234 <https://github.com/dorost1234>,\n>\n> Indeed, Wikipedia datasets need a lot of preprocessing and this is done\n> using Apache Beam. That is the reason why it is required that you install\n> Apache Beam in order to preform this preprocessing.\n>\n> For some specific default parameters (English Wikipedia), Hugging Face has\n> already preprocessed the dataset for you (and it is stored in the cloud).\n> That is the reason why you do not get the error for English: the\n> preprocessing is already done by HF and you just get the preprocessed\n> dataset; Apache Beam is not required in that case.\n>\n> —\n> You are receiving this because you were mentioned.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-797310899>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXACFQZAGMK4VGXRETTDHDI3ANCNFSM4ZA5R2UA>\n> .\n>\n", "Hi everyone\r\nthanks for the helpful pointers, I did it as @bhavitvyamalik suggested, for me this freezes on this command for several hours, \r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n`\r\n\r\nDo you know how long this takes? Any specific requirements the machine should have? like very large memory or so? 
@lhoestq \r\n\r\nthanks \r\n\r\n\r\n", "HI @dorost1234, \r\nThe dataset size is 631.84 MiB so depending on your internet speed it'll take some time. You can monitor your internet speed meanwhile to see if it's downloading the dataset or not (use `nload` if you're using linux/mac to monitor the same). In my case it took around 3-4 mins. Since they haven't used `download_and_extract` here that's why there's no download progress bar.", "Hi\r\nthanks, my internet speed should be good, but this really freezes for me, this is how I try to get this dataset:\r\n\r\n`from datasets import load_dataset\r\ndataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')`\r\n\r\nthe output I see if different also from what you see after writing this command:\r\n\r\n`Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /users/dara/cache/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...`\r\n\r\ndo you have any idea why it might get freezed? anything I am missing @lhoestq @bhavitvyamalik. Do I need maybe to set anything special for apache-beam? \r\n\r\nthanks a lot \r\n\r\nOn Tue, Mar 16, 2021 at 9:03 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> HI @dorost1234 <https://github.com/dorost1234>,\r\n> The dataset size is 631.84 MiB so depending on your internet speed it'll\r\n> take some time. You can monitor your internet speed meanwhile to see if\r\n> it's downloading the dataset or not (use nload if you're using linux/mac\r\n> to monitor the same). In my case it took around 3-4 mins. 
Since they\r\n> haven't used download_and_extract here that's why there's no download\r\n> progress bar.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800044303>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMQIHNNLM2LGG6QKZ73TD4GDJANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n", "I tried this on another machine (followed the same procedure I've mentioned above). This is what it shows (during the freeze period) for me:\r\n```\r\n>>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\nDownloading: 5.26kB [00:00, 1.23MB/s] \r\nDownloading: 1.40kB [00:00, 327kB/s] \r\nDownloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\nWARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. 
See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\nConnecting anonymously.\r\nWARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n```\r\nAfter around 10 minutes, here's the loading of dataset:\r\n```\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\nDataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. Subsequent calls will reuse this data.\r\n```", "Hi\r\nI honestly also now tried on another machine and nothing shows up after\r\nhours of waiting. Are you sure you have not set any specific setting? maybe\r\ngoogle cloud which seems it is used here, needs some credential setting?\r\nthanks for any suggestions on this\r\n\r\nOn Tue, Mar 16, 2021 at 10:02 AM Bhavitvya Malik ***@***.***>\r\nwrote:\r\n\r\n> I tried this on another machine (followed the same procedure I've\r\n> mentioned above). 
This is what it shows (during the freeze period) for me:\r\n>\r\n> >>> dataset = load_dataset(\"wiki40b\", \"cs\", beam_runner='DirectRunner')\r\n> Downloading: 5.26kB [00:00, 1.23MB/s]\r\n> Downloading: 1.40kB [00:00, 327kB/s]\r\n> Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f...\r\n> WARNING:apache_beam.internal.gcp.auth:Unable to find default credentials to use: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.\r\n> Connecting anonymously.\r\n> WARNING:apache_beam.io.tfrecordio:Couldn't find python-snappy so the implementation of _TFRecordUtil._masked_crc32c is not as fast as it could be.\r\n>\r\n> After around 10 minutes, here's the loading of dataset:\r\n>\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:16<00:00, 16.42s/sources]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.12sources/s]\r\n> 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.14sources/s]\r\n> Dataset wiki40b downloaded and prepared to /home/bhavitvya/.cache/huggingface/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f. 
Subsequent calls will reuse this data.\r\n>\r\n> —\r\n> You are receiving this because you were mentioned.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/2035#issuecomment-800081772>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMX6A2ZTRZUIIZVFRCDTD4NC3ANCNFSM4ZA5R2UA>\r\n> .\r\n>\r\n" ]
null
[]
wiki40b/wikipedia for almost all languages cannot be downloaded
NONE
https://api.github.com/repos/huggingface/datasets/issues/2035/timeline
Hi I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting this error. @lhoestq I will be grateful if you could assist me with this error. For almost all languages except english I am getting this error. I really need majority of languages in this dataset to be able to train my models for a deadline and your great scalable super well-written library is my only hope to train the models at scale while being low on resources. thank you very much. ``` (fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f... Traceback (most recent call last): File "test_data.py", line 3, in <module> dataset = load_dataset("wiki40b", "cs") File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare import apache_beam as beam File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module> from apache_beam import io File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module> from apache_beam.io.avroio import * File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module> import avro File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module> File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt' ```
https://api.github.com/repos/huggingface/datasets
null
829,475,544
https://api.github.com/repos/huggingface/datasets/issues/2035/comments
MDU6SXNzdWU4Mjk0NzU1NDQ=
null
2,035
https://api.github.com/repos/huggingface/datasets/issues/2035/events
false
closed
2021-03-11T17:46:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2034
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4", "events_url": "https://api.github.com/users/pcyin/events{/privacy}", "followers_url": "https://api.github.com/users/pcyin/followers", "following_url": "https://api.github.com/users/pcyin/following{/other_user}", "gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pcyin", "id": 3413464, "login": "pcyin", "node_id": "MDQ6VXNlcjM0MTM0NjQ=", "organizations_url": "https://api.github.com/users/pcyin/orgs", "received_events_url": "https://api.github.com/users/pcyin/received_events", "repos_url": "https://api.github.com/users/pcyin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pcyin/subscriptions", "type": "User", "url": "https://api.github.com/users/pcyin" }
https://github.com/huggingface/datasets/pull/2034
[]
false
2021-03-11T18:06:25Z
2021-03-11T18:06:25Z
null
[]
null
[]
Fix typo
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2034/timeline
Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME `
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2034.diff", "html_url": "https://github.com/huggingface/datasets/pull/2034", "merged_at": "2021-03-11T18:06:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2034.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2034" }
829,381,388
https://api.github.com/repos/huggingface/datasets/issues/2034/comments
MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw
null
2,034
https://api.github.com/repos/huggingface/datasets/issues/2034/events
true
closed
2021-03-11T16:08:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/2033
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2033/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2033/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2033
[]
false
2021-03-11T17:58:12Z
2021-03-11T17:58:12Z
null
[]
null
[]
Raise an error for outdated sacrebleu versions
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2033/timeline
The `sacrebleu` metric seem to only work for sacrecleu>=1.4.12 For example using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py): ```python def _compute( self, predictions, references, smooth_method="exp", smooth_value=None, force=False, lowercase=False, tokenize=scb.DEFAULT_TOKENIZER, use_effective_order=False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > output = scb.corpus_bleu( sys_stream=predictions, ref_streams=transformed_references, smooth_method=smooth_method, smooth_value=smooth_value, force=force, lowercase=lowercase, tokenize=tokenize, use_effective_order=use_effective_order, ) E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method' /mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError ``` I improved the error message when users have an outdated version of sacrebleu. The new error message tells the user to update sacrebleu. cc @LysandreJik
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2033.diff", "html_url": "https://github.com/huggingface/datasets/pull/2033", "merged_at": "2021-03-11T17:58:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/2033.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2033" }
829,295,339
https://api.github.com/repos/huggingface/datasets/issues/2033/comments
MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy
null
2,033
https://api.github.com/repos/huggingface/datasets/issues/2033/events
true
closed
2021-03-11T15:18:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/2032
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/issues/2032
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" } ]
false
2024-01-19T13:26:32Z
2024-01-19T13:26:32Z
null
[ "Actually table.filter returns a new table in memory, which can fill users RAM.\r\n\r\nTherefore it's not a good solution if we want to keep supporting bigger than RAM datastes" ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2032/timeline
Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operation therefore it's significantly quicker. I think there are two cases: - if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)` - if the dataset an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)` The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table. The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask. Feel free to discuss this idea in this thread :) One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle. cc @theo-m @gchhablani related issues: #1796 #1949
https://api.github.com/repos/huggingface/datasets
null
829,250,912
https://api.github.com/repos/huggingface/datasets/issues/2032/comments
MDU6SXNzdWU4MjkyNTA5MTI=
null
2,032
https://api.github.com/repos/huggingface/datasets/issues/2032/events
false
closed
2021-03-11T12:51:24Z
null
https://api.github.com/repos/huggingface/datasets/issues/2031
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/miyamonz", "id": 6331508, "login": "miyamonz", "node_id": "MDQ6VXNlcjYzMzE1MDg=", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "repos_url": "https://api.github.com/users/miyamonz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "type": "User", "url": "https://api.github.com/users/miyamonz" }
https://github.com/huggingface/datasets/issues/2031
[]
false
2021-03-22T08:33:52Z
2021-03-22T08:33:52Z
null
[ "Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?", "OK! I'll send it later." ]
completed
[]
wikipedia.py generator that extracts XML doesn't release memory
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
I tried downloading Japanese wikipedia, but it always failed because of out of memory maybe. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502 `root.clear()` intend to clear memory, but it doesn't. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490 https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494 I replaced them with `elem.clear()`, then it seems to work correctly. here is the notebook to reproduce it. https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
https://api.github.com/repos/huggingface/datasets
null
829,122,778
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
MDU6SXNzdWU4MjkxMjI3Nzg=
null
2,031
https://api.github.com/repos/huggingface/datasets/issues/2031/events
false
closed
2021-03-11T12:34:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/2030
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2030
[]
false
2021-03-18T13:29:29Z
2021-03-18T13:29:29Z
null
[ "I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same and it happens only in pyarrow_1..." ]
null
[]
Implement Dataset from text
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2030/timeline
Implement `Dataset.from_text`. Analogue to #1943, #1946.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2030.diff", "html_url": "https://github.com/huggingface/datasets/pull/2030", "merged_at": "2021-03-18T13:29:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/2030.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2030" }
829,110,803
https://api.github.com/repos/huggingface/datasets/issues/2030/comments
MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4
null
2,030
https://api.github.com/repos/huggingface/datasets/issues/2030/events
true
closed
2021-03-11T12:16:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2029
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nbroad1881", "id": 24982805, "login": "nbroad1881", "node_id": "MDQ6VXNlcjI0OTgyODA1", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "repos_url": "https://api.github.com/users/nbroad1881/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "type": "User", "url": "https://api.github.com/users/nbroad1881" }
https://github.com/huggingface/datasets/issues/2029
[]
false
2021-03-12T00:21:09Z
2021-03-12T00:21:09Z
null
[ "In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.", "Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> ```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```", "Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)", "> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR" ]
completed
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
Loading a faiss index KeyError
NONE
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (dataset2) with the same text and label information as dataset1 6. Try to load the faiss index from file to dataset2 7. Get `KeyError: "Column embeddings not in the dataset"` I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU. https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing Ubuntu Version VERSION="18.04.5 LTS (Bionic Beaver)" datasets==1.4.1 faiss==1.5.3 faiss-gpu==1.7.0 torch==1.8.0+cu101 transformers==4.3.3 NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 Tesla K80 I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I included the exact code from the documentation at the end of the notebook to show that they don't work either.
https://api.github.com/repos/huggingface/datasets
null
829,097,290
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
MDU6SXNzdWU4MjkwOTcyOTA=
null
2,029
https://api.github.com/repos/huggingface/datasets/issues/2029/events
false
closed
2021-03-11T04:41:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2028
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4", "events_url": "https://api.github.com/users/danyaljj/events{/privacy}", "followers_url": "https://api.github.com/users/danyaljj/followers", "following_url": "https://api.github.com/users/danyaljj/following{/other_user}", "gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danyaljj", "id": 2441454, "login": "danyaljj", "node_id": "MDQ6VXNlcjI0NDE0NTQ=", "organizations_url": "https://api.github.com/users/danyaljj/orgs", "received_events_url": "https://api.github.com/users/danyaljj/received_events", "repos_url": "https://api.github.com/users/danyaljj/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions", "type": "User", "url": "https://api.github.com/users/danyaljj" }
https://github.com/huggingface/datasets/pull/2028
[]
false
2021-03-15T09:39:57Z
2021-03-15T09:39:57Z
null
[ "@lhoestq I think I have addressed all your comments. ", "Thanks! @lhoestq Let me know if you want me to address anything to get this merged. ", "It's all good thanks ;)\r\nmerging" ]
null
[]
Adding PersiNLU reading-comprehension
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2028/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2028.diff", "html_url": "https://github.com/huggingface/datasets/pull/2028", "merged_at": "2021-03-15T09:39:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/2028.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2028" }
828,721,393
https://api.github.com/repos/huggingface/datasets/issues/2028/comments
MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx
null
2,028
https://api.github.com/repos/huggingface/datasets/issues/2028/events
true