Dataset schema (one record per issue or pull request in the huggingface/datasets repository; the records below list their field values in this column order):

column                      dtype      values / range
state                       string     2 distinct values
created_at                  string     length 20
active_lock_reason          null
url                         string     length 61
assignee                    dict
reactions                   dict
draft                       bool       2 classes
labels_url                  string     length 75
user                        dict
html_url                    string     length 49-51
assignees                   list
locked                      bool       1 class
updated_at                  string     length 20
closed_at                   string     length 20
milestone                   dict
comments                    sequence
state_reason                string     3 distinct values
labels                      list
title                       string     length 1-290
author_association          string     3 distinct values
timeline_url                string     length 70
body                        string     length 0-228k
repository_url              string     1 distinct value
pull_request                dict
id                          int64      773M-2.11B
comments_url                string     length 70
node_id                     string     length 18-32
performed_via_github_app    null
number                      int64      1.62k-6.64k
events_url                  string     length 68
is_pull_request             bool       2 classes
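As a rough illustration of how a dataset with this schema could be loaded and queried with the `datasets` library, the sketch below assumes the records have been pushed to the Hugging Face Hub; the repository id `my-org/github-issues` and the `train` split name are placeholders, not the actual location of this data.

```python
# Minimal sketch, assuming the records shown below are available as a Hub dataset.
# "my-org/github-issues" is a placeholder repository id.
from datasets import load_dataset

issues = load_dataset("my-org/github-issues", split="train")

# The 31 columns listed in the schema above.
print(issues.column_names)

# Separate plain issues from pull requests via the is_pull_request flag.
pull_requests = issues.filter(lambda row: row["is_pull_request"])
plain_issues = issues.filter(lambda row: not row["is_pull_request"])
print(len(pull_requests), "pull requests,", len(plain_issues), "issues")

# Keep only records that are still open.
open_issues = plain_issues.filter(lambda row: row["state"] == "open")
```

Because `comments` is stored as a sequence of strings, the discussion attached to each record can be read directly from the row without extra API calls.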
closed
2021-04-16T13:21:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/2229
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
https://github.com/huggingface/datasets/issues/2229
[]
false
2021-04-19T08:56:42Z
2021-04-19T08:56:42Z
null
[ "Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)", "@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!" ]
completed
[]
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2229/timeline
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code in the egging, which yields a tuple key instead of the specified `str` or `int` key: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Since, community datasets in Tensorflow Datasets also use HF datasets, this causes a Tuple key error while loading HF's `xnli` dataset. I'm up for sending a fix for this, I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
https://api.github.com/repos/huggingface/datasets
null
859,810,602
https://api.github.com/repos/huggingface/datasets/issues/2229/comments
MDU6SXNzdWU4NTk4MTA2MDI=
null
2,229
https://api.github.com/repos/huggingface/datasets/issues/2229/events
false
open
2021-04-16T13:04:08Z
null
https://api.github.com/repos/huggingface/datasets/issues/2228
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2228/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2228/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4", "events_url": "https://api.github.com/users/jblemoine/events{/privacy}", "followers_url": "https://api.github.com/users/jblemoine/followers", "following_url": "https://api.github.com/users/jblemoine/following{/other_user}", "gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jblemoine", "id": 22685854, "login": "jblemoine", "node_id": "MDQ6VXNlcjIyNjg1ODU0", "organizations_url": "https://api.github.com/users/jblemoine/orgs", "received_events_url": "https://api.github.com/users/jblemoine/received_events", "repos_url": "https://api.github.com/users/jblemoine/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions", "type": "User", "url": "https://api.github.com/users/jblemoine" }
https://github.com/huggingface/datasets/pull/2228
[]
false
2022-07-06T15:19:48Z
null
null
[ "Awesome thanks ! To fix the CI you just need to merge master into your branch.\r\nThe error is unrelated to your PR" ]
null
[]
[WIP] Add ArrayXD support for fixed size list.
NONE
https://api.github.com/repos/huggingface/datasets/issues/2228/timeline
Add support for fixed size list for ArrayXD when shape is known . See https://github.com/huggingface/datasets/issues/2146 Since offset are not stored anymore, the file size is now roughly equal to the actual data size.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2228.diff", "html_url": "https://github.com/huggingface/datasets/pull/2228", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2228.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2228" }
859,795,563
https://api.github.com/repos/huggingface/datasets/issues/2228/comments
MDExOlB1bGxSZXF1ZXN0NjE2ODE2MTQz
null
2,228
https://api.github.com/repos/huggingface/datasets/issues/2228/events
true
closed
2021-04-16T12:31:41Z
null
https://api.github.com/repos/huggingface/datasets/issues/2227
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2227/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2227/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://github.com/huggingface/datasets/pull/2227
[]
false
2021-04-16T13:49:40Z
2021-04-16T13:49:39Z
null
[]
null
[]
Use update_metadata_with_features decorator in class_encode_column method
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2227/timeline
Following @mariosasko 's comment
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2227.diff", "html_url": "https://github.com/huggingface/datasets/pull/2227", "merged_at": "2021-04-16T13:49:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2227.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2227" }
859,771,526
https://api.github.com/repos/huggingface/datasets/issues/2227/comments
MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx
null
2,227
https://api.github.com/repos/huggingface/datasets/issues/2227/events
true
closed
2021-04-16T11:17:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/2226
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2226/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2226/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/villmow", "id": 2743060, "login": "villmow", "node_id": "MDQ6VXNlcjI3NDMwNjA=", "organizations_url": "https://api.github.com/users/villmow/orgs", "received_events_url": "https://api.github.com/users/villmow/received_events", "repos_url": "https://api.github.com/users/villmow/repos", "site_admin": false, "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "type": "User", "url": "https://api.github.com/users/villmow" }
https://github.com/huggingface/datasets/issues/2226
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2022-10-05T17:32:15Z
2022-10-05T17:32:15Z
null
[ "I found the problem. I called `set_format` on some columns before. This makes it crash. Here is a complete example to reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\nsst = load_dataset(\"sst\")\r\nsst.set_format(\"torch\", columns=[\"label\"], output_all_columns=True)\r\nds = sst[\"train\"]\r\n\r\n# crashes\r\nds.map(\r\n lambda x: {\"a\": list(range(20))},\r\n remove_columns=ds.column_names,\r\n load_from_cache_file=False,\r\n num_proc=1,\r\n batched=True,\r\n)\r\n```", "Thanks for reporting and for providing this code to reproduce the issue, this is really helpful !", "I merged a fix, it should work on `master` now :)\r\nWe'll do a new release soon !" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Batched map fails when removing all columns
NONE
https://api.github.com/repos/huggingface/datasets/issues/2226/timeline
Hi @lhoestq , I'm hijacking this issue, because I'm currently trying to do the approach you recommend: > Currently the optimal setup for single-column computations is probably to do something like > > ```python > result = dataset.map(f, input_columns="my_col", remove_columns=dataset.column_names) > ``` Here is my code: (see edit, in which I added a simplified version ``` This is the error: ```bash pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000 ``` I wonder why this error occurs, when I delete every column? Can you give me a hint? ### Edit: I preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the complete dataset and print every sample before calling map. There seems to be no other problem with the dataset. I tried to simplify the code that crashes: ```python # works log.debug(dataset.column_names) log.debug(dataset) for i, sample in enumerate(dataset): log.debug(i, sample) # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, ) ``` ``` pyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000 ``` Edit2: May this be a problem with a schema I set when preprocessing the dataset before? I tried to add the `features` argument to the function and then I get a new error: ```python # crashes counted_dataset = dataset.map( lambda x: {"a": list(range(20))}, input_columns=column, remove_columns=dataset.column_names, load_from_cache_file=False, num_proc=num_workers, batched=True, features=datasets.Features( { "a": datasets.Sequence(datasets.Value("int32")) } ) ) ``` ``` File "env/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1704, in _map_single writer.write_batch(batch) File "env/lib/python3.8/site-packages/datasets/arrow_writer.py", line 312, in write_batch col_type = schema.field(col).type if schema is not None else None File "pyarrow/types.pxi", line 1341, in pyarrow.lib.Schema.field KeyError: 'Column tokens does not exist in schema' ``` _Originally posted by @villmow in https://github.com/huggingface/datasets/issues/2193#issuecomment-820230874_
https://api.github.com/repos/huggingface/datasets
null
859,720,302
https://api.github.com/repos/huggingface/datasets/issues/2226/comments
MDU6SXNzdWU4NTk3MjAzMDI=
null
2,226
https://api.github.com/repos/huggingface/datasets/issues/2226/events
false
closed
2021-04-15T04:26:40Z
null
https://api.github.com/repos/huggingface/datasets/issues/2225
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2225/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2225/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/46733535?v=4", "events_url": "https://api.github.com/users/alexwdong/events{/privacy}", "followers_url": "https://api.github.com/users/alexwdong/followers", "following_url": "https://api.github.com/users/alexwdong/following{/other_user}", "gists_url": "https://api.github.com/users/alexwdong/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alexwdong", "id": 46733535, "login": "alexwdong", "node_id": "MDQ6VXNlcjQ2NzMzNTM1", "organizations_url": "https://api.github.com/users/alexwdong/orgs", "received_events_url": "https://api.github.com/users/alexwdong/received_events", "repos_url": "https://api.github.com/users/alexwdong/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alexwdong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alexwdong/subscriptions", "type": "User", "url": "https://api.github.com/users/alexwdong" }
https://github.com/huggingface/datasets/pull/2225
[]
false
2021-04-15T22:09:50Z
2021-04-15T21:19:09Z
null
[ "Thanks ! good catch\r\n\r\nCould you also update the metadata of this dataset ?\r\nYou can do so by running\r\n```\r\ndatasets-cli test ./datasets/newsgroup --all_configs --save_infos --ignore_verifications\r\n```\r\nThis should update the dataset_infos.json file that contains the size of all the splits for example.", "Hi,\r\n`dataset_infos.json` should be updated now.\r\n" ]
null
[]
fixed one instance of 'train' to 'test'
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2225/timeline
I believe this should be 'test' instead of 'train'
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2225.diff", "html_url": "https://github.com/huggingface/datasets/pull/2225", "merged_at": "2021-04-15T21:19:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2225.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2225" }
858,469,561
https://api.github.com/repos/huggingface/datasets/issues/2225/comments
MDExOlB1bGxSZXF1ZXN0NjE1NzAzMTY4
null
2,225
https://api.github.com/repos/huggingface/datasets/issues/2225/events
true
open
2021-04-14T14:57:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/2224
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2224/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2224/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/issues/2224
[]
false
2021-04-14T14:59:13Z
null
null
[]
null
[]
Raise error if Windows max path length is not disabled
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2224/timeline
On startup, raise an error if Windows max path length is not disabled; ask the user to disable it. Linked to discussion in #2220.
https://api.github.com/repos/huggingface/datasets
null
857,983,361
https://api.github.com/repos/huggingface/datasets/issues/2224/comments
MDU6SXNzdWU4NTc5ODMzNjE=
null
2,224
https://api.github.com/repos/huggingface/datasets/issues/2224/events
false
closed
2021-04-14T12:55:24Z
null
https://api.github.com/repos/huggingface/datasets/issues/2223
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2223/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2223/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2223
[]
false
2021-04-15T19:11:25Z
2021-04-15T19:11:25Z
null
[ "> why a cache dir per test function does not work?\r\n\r\nProbably because we end up with multiple `datasets_module` in the python path. This breaks the import of all the datasets/metrics modules.\r\nIf you want to use one modules cache per test, you may need remove the `datasets_module` that was added to the python path during the test.\r\nIndeed if the module cache hasn't been initialized, then it's added to the python path by calling `init_dynamic_modules`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/ba76012a19193a35053b9e20243ff40c2b4204ab/src/datasets/load.py#L291-L291", "@lhoestq, for the moment, this PR avoids populating the `~/.cache` dir during training, which is already an improvement, isn't it?", "Yes we can merge it this way if you're fine with it !\r\nThis is a good improvement", "I will eventually try to implement a `cache_dir` per test function in another PR, but I think I should first fix some side effects in tests: each test function should be atomic and able to have its own `cache_dir` without being affected by the `cache_dir` set in other test functions.", "Yes this would be ideal !" ]
null
[]
Set test cache config
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2223/timeline
Currently, running the tests populates the default cache directory `"~/.cache"`. This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2223.diff", "html_url": "https://github.com/huggingface/datasets/pull/2223", "merged_at": "2021-04-15T19:11:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2223.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2223" }
857,870,800
https://api.github.com/repos/huggingface/datasets/issues/2223/comments
MDExOlB1bGxSZXF1ZXN0NjE1MjE4MDIz
null
2,223
https://api.github.com/repos/huggingface/datasets/issues/2223/events
true
closed
2021-04-14T12:26:52Z
null
https://api.github.com/repos/huggingface/datasets/issues/2222
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2222/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2222/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2222
[]
false
2021-04-14T15:00:25Z
2021-04-14T14:46:19Z
null
[ "Windows users should disable the max path length limit. It's a nightmare to handle it.\r\nAlso the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work.", "Do you agree with handling the case where MAX_PATH is not disabled? If not, we can close this PR.\r\n\r\nIf so, would it work a deterministic lock path instead of random?", "I'd rather not handle this at all, since there will be other places in the code where the limit will break things" ]
null
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
Fix too long WindowsFileLock name
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2222/timeline
Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2222.diff", "html_url": "https://github.com/huggingface/datasets/pull/2222", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2222.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2222" }
857,847,231
https://api.github.com/repos/huggingface/datasets/issues/2222/comments
MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5
null
2,222
https://api.github.com/repos/huggingface/datasets/issues/2222/events
true
closed
2021-04-14T12:09:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/2221
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2221/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2221/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
https://github.com/huggingface/datasets/pull/2221
[]
false
2021-04-14T13:50:19Z
2021-04-14T13:50:19Z
null
[]
null
[]
Add SLR70 - SLR80 and SLR86 to OpenSLR dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2221/timeline
I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to OpenSLR dataset. The languages are: Nigerian English, Chilean Spanish, Columbian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2221.diff", "html_url": "https://github.com/huggingface/datasets/pull/2221", "merged_at": "2021-04-14T13:50:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2221.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2221" }
857,833,770
https://api.github.com/repos/huggingface/datasets/issues/2221/comments
MDExOlB1bGxSZXF1ZXN0NjE1MTg4MTE5
null
2,221
https://api.github.com/repos/huggingface/datasets/issues/2221/events
true
closed
2021-04-14T10:49:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/2220
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2220/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2220/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2220
[]
false
2021-04-14T14:59:50Z
2021-04-14T14:59:34Z
null
[ "How is it possible to get an infinite loop ? Can you add more details ?", "Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.\r\n\r\nIf other process has the file locked, then `PermissionError` is raised. In this case, `pass` is OK.", "Note that the filelock module comes from this project that hasn't changed in years - while still being used by ten of thousands of projects:\r\nhttps://github.com/benediktschmitt/py-filelock\r\n\r\nUnless we have proper tests for this, I wouldn't recommend to change it", "I'm pretty sure many things from the library could break for windows users that haven't disabled the max path length limit.\r\nMaybe it would be simpler to simply raise an error on startup. For exampe, for windows users the error could ask them to disable the limit if it's not been disabled yet ?" ]
null
[ { "color": "ffffff", "default": true, "description": "This will not be worked on", "id": 1935892913, "name": "wontfix", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix" } ]
Fix infinite loop in WindowsFileLock
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2220/timeline
Raise exception to avoid infinite loop.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2220.diff", "html_url": "https://github.com/huggingface/datasets/pull/2220", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2220.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2220" }
857,774,626
https://api.github.com/repos/huggingface/datasets/issues/2220/comments
MDExOlB1bGxSZXF1ZXN0NjE1MTM4NDQz
null
2,220
https://api.github.com/repos/huggingface/datasets/issues/2220/events
true
closed
2021-04-13T21:05:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/2219
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2219/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://github.com/huggingface/datasets/pull/2219
[]
false
2021-04-24T14:25:51Z
2021-04-16T08:50:44Z
null
[ "1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using to the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while", "@bhavitvyamalik Thanks for adding the dataset on huggingface! Can you please add a metric also for the dataset using the squad_v2 metric file? ", "@MohammedRakib you can check [#2257](https://github.com/huggingface/datasets/pull/2257)" ]
null
[]
Added CUAD dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2219/timeline
Dataset link : https://github.com/TheAtticusProject/cuad/ Working on README.md currently. Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1).
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2219.diff", "html_url": "https://github.com/huggingface/datasets/pull/2219", "merged_at": "2021-04-16T08:50:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2219.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2219" }
857,321,242
https://api.github.com/repos/huggingface/datasets/issues/2219/comments
MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3
null
2,219
https://api.github.com/repos/huggingface/datasets/issues/2219/events
true
open
2021-04-13T18:59:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/2218
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2218/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7276193?v=4", "events_url": "https://api.github.com/users/amarasovic/events{/privacy}", "followers_url": "https://api.github.com/users/amarasovic/followers", "following_url": "https://api.github.com/users/amarasovic/following{/other_user}", "gists_url": "https://api.github.com/users/amarasovic/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amarasovic", "id": 7276193, "login": "amarasovic", "node_id": "MDQ6VXNlcjcyNzYxOTM=", "organizations_url": "https://api.github.com/users/amarasovic/orgs", "received_events_url": "https://api.github.com/users/amarasovic/received_events", "repos_url": "https://api.github.com/users/amarasovic/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amarasovic/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amarasovic/subscriptions", "type": "User", "url": "https://api.github.com/users/amarasovic" }
https://github.com/huggingface/datasets/issues/2218
[]
false
2021-04-14T21:42:27Z
null
null
[ "Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', split='train')\r\n>>> dataset = Dataset.from_pandas(dataset.to_pandas().drop_duplicates(subset=...)) # specify a subset of the columns to consider in a list or use all of the columns if None\r\n```\r\n\r\nNote that the same can be achieved with the `Dataset.filter` method but this would requrie some extra work (filter function, speed?).", "Oh, seems like my question wasn't specified well. I'm _not_ asking how to remove duplicates, but whether duplicates should be removed if I want to do the evaluation on the LAMA dataset as it was proposed in the original paper/repository? In other words, will I get the same result if evaluate on the de-duplicated dataset loaded from HF's `datasets` as the results I'd get if I use the original data format and data processing script in https://github.com/facebookresearch/LAMA? ", "So it looks like the person who added LAMA to the library chose to have one item per piece of evidence rather than one per relation - and in this case, there are duplicate pieces of evidence for the target relation\r\n\r\nIf I understand correctly, to reproduce reported results, you would have to aggregate predictions for the several pieces of evidence provided for each relation (each unique `uuid`), but the original authors will know better \r\n\r\ncc @fabiopetroni " ]
null
[]
Duplicates in the LAMA dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2218/timeline
I observed duplicates in the LAMA probing dataset, see a minimal code below. ``` >>> import datasets >>> dataset = datasets.load_dataset('lama') No config specified, defaulting to: lama/trex Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc) >>> train_dataset = dataset['train'] >>> train_dataset[0] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} >>> train_dataset[1] {'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'} ``` I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicated comes from: ``` {"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]} ``` What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA?
https://api.github.com/repos/huggingface/datasets
null
857,238,435
https://api.github.com/repos/huggingface/datasets/issues/2218/comments
MDU6SXNzdWU4NTcyMzg0MzU=
null
2,218
https://api.github.com/repos/huggingface/datasets/issues/2218/events
false
closed
2021-04-13T14:20:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/2217
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2217/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2217
[]
false
2021-04-14T14:24:24Z
2021-04-14T14:24:23Z
null
[]
null
[]
Revert breaking change in cache_files property
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2217/timeline
#2025 changed the format of `Dataset.cache_files`. Before it was formatted like ```python [{"filename": "path/to/file.arrow", "start": 0, "end": 1337}] ``` and it was changed to ```python ["path/to/file.arrow"] ``` since there's no start/end offsets available anymore. To make this less breaking, I'm setting the format back to a list of dicts: ```python [{"filename": "path/to/file.arrow"}] ```
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2217.diff", "html_url": "https://github.com/huggingface/datasets/pull/2217", "merged_at": "2021-04-14T14:24:23Z", "patch_url": "https://github.com/huggingface/datasets/pull/2217.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2217" }
857,011,314
https://api.github.com/repos/huggingface/datasets/issues/2217/comments
MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz
null
2,217
https://api.github.com/repos/huggingface/datasets/issues/2217/events
true
closed
2021-04-13T13:20:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/2216
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2216/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2216/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
https://github.com/huggingface/datasets/pull/2216
[]
false
2021-04-13T13:53:20Z
2021-04-13T13:53:19Z
null
[]
null
[]
added real label for glue/mrpc to test set
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2216/timeline
Added real label to `glue.py` `mrpc` task for test split.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2216.diff", "html_url": "https://github.com/huggingface/datasets/pull/2216", "merged_at": "2021-04-13T13:53:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2216.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2216" }
856,955,534
https://api.github.com/repos/huggingface/datasets/issues/2216/comments
MDExOlB1bGxSZXF1ZXN0NjE0NDU0MjE1
null
2,216
https://api.github.com/repos/huggingface/datasets/issues/2216/events
true
closed
2021-04-13T08:24:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/2215
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2215/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2215/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
https://github.com/huggingface/datasets/pull/2215
[]
false
2021-04-13T14:05:14Z
2021-04-13T14:05:14Z
null
[ "Hi @lhoestq,\r\nCould you please help me, I got this error message in all \"ci/circleci: run_dataset_script_tests_pyarrow*\" tests:\r\n```\r\n...\r\n \"\"\"Wrapper classes for various types of tokenization.\"\"\"\r\n \r\n from bleurt.lib import bert_tokenization\r\n import tensorflow.compat.v1 as tf\r\n> import sentencepiece as spm\r\nE ModuleNotFoundError: No module named 'sentencepiece'\r\n...\r\n```\r\nI am not sure why I do get it. Thanks.\r\n", "Hi ! This issue appeared on master since the last update of `BLEURT`.\r\nI'm working on a fix. You can ignore this issue for this PR", "> Hi ! This issue appeared on master since the last update of `BLEURT`.\r\n> I'm working on a fix. You can ignore this issue for this PR\r\n\r\nThanks for the info", "Merging since the CI is fixed on master" ]
null
[]
Add datasets SLR35 and SLR36 to OpenSLR
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2215/timeline
I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2215.diff", "html_url": "https://github.com/huggingface/datasets/pull/2215", "merged_at": "2021-04-13T14:05:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2215.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2215" }
856,716,791
https://api.github.com/repos/huggingface/datasets/issues/2215/comments
MDExOlB1bGxSZXF1ZXN0NjE0MjUyNTEy
null
2,215
https://api.github.com/repos/huggingface/datasets/issues/2215/events
true
closed
2021-04-12T20:26:01Z
null
https://api.github.com/repos/huggingface/datasets/issues/2214
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4", "events_url": "https://api.github.com/users/nsaphra/events{/privacy}", "followers_url": "https://api.github.com/users/nsaphra/followers", "following_url": "https://api.github.com/users/nsaphra/following{/other_user}", "gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nsaphra", "id": 414788, "login": "nsaphra", "node_id": "MDQ6VXNlcjQxNDc4OA==", "organizations_url": "https://api.github.com/users/nsaphra/orgs", "received_events_url": "https://api.github.com/users/nsaphra/received_events", "repos_url": "https://api.github.com/users/nsaphra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions", "type": "User", "url": "https://api.github.com/users/nsaphra" }
https://github.com/huggingface/datasets/issues/2214
[]
false
2021-04-23T15:20:02Z
2021-04-23T15:20:02Z
null
[ "Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```", "There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are downloaded from `master` instead of the `1.2.1` repo.\r\n\r\nYou can try setting the env var `HF_SCRIPTS_VERSION=\"1.2.1\"` as a workaround. Let me know if that helps.", "I just faced the same issue. I was using 1.2.1 from conda and received the same AttributeError complaining about 'add_start_docstrings'. Uninstalling the conda installed datasets and then installing the latest datasets (version 1.5.0) using pip install solved the issue for me. I don't like mixing up conda and pip installs in the same environments but this will have to do for now, until 1.5.0 is made available through conda.", "Yep, seems to have fixed things! The conda package could really do with an update. Thanks!" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
NONE
https://api.github.com/repos/huggingface/datasets/issues/2214/timeline
I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package. ```python >>> from datasets import load_metric >>> metric = load_metric("glue", "sst2") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 783, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module> @datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION) AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' ```
https://api.github.com/repos/huggingface/datasets
null
856,333,657
https://api.github.com/repos/huggingface/datasets/issues/2214/comments
MDU6SXNzdWU4NTYzMzM2NTc=
null
2,214
https://api.github.com/repos/huggingface/datasets/issues/2214/events
false
closed
2021-04-12T14:16:59Z
null
https://api.github.com/repos/huggingface/datasets/issues/2213
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2213/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2213
[]
false
2021-04-14T22:04:54Z
2021-04-14T13:42:25Z
null
[]
null
[]
Fix lc_quad download checksum
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2213/timeline
Fixes #2211
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2213.diff", "html_url": "https://github.com/huggingface/datasets/pull/2213", "merged_at": "2021-04-14T13:42:25Z", "patch_url": "https://github.com/huggingface/datasets/pull/2213.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2213" }
856,025,320
https://api.github.com/repos/huggingface/datasets/issues/2213/comments
MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2
null
2,213
https://api.github.com/repos/huggingface/datasets/issues/2213/events
true
closed
2021-04-12T13:49:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/2212
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hanss0n", "id": 21348833, "login": "hanss0n", "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "repos_url": "https://api.github.com/users/hanss0n/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "type": "User", "url": "https://api.github.com/users/hanss0n" }
https://github.com/huggingface/datasets/issues/2212
[]
false
2023-10-03T16:09:19Z
2023-10-03T16:09:18Z
null
[ "Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available", "I saw this on their website when we request to download the dataset:\r\n![image](https://user-images.githubusercontent.com/19718818/114879600-fa458680-9e1e-11eb-9e05-f0963d68ff0f.png)\r\n\r\nCan we still request them link for the dataset and make a PR? @lhoestq @yjernite ", "I've contacted Martin (first author of the fquad paper) regarding a possible new url. Hopefully we can get one soon !", "They now made a website to force people who want to use the dataset for commercial purposes to seek a commercial license from them ...", "The script has been adopted to support manual download from the website, so I'm closing this issue." ]
completed
[]
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2212/timeline
I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running: ```Python fquad = load_dataset("fquad") ``` which produces the following error: ``` Using custom data configuration default Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-48-a2721797e23b> in <module>() ----> 1 fquad = load_dataset("fquad") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 614 raise FileNotFoundError("Couldn't find file at {}".format(url)) 615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") --> 616 raise ConnectionError("Couldn't reach {}".format(url)) 617 618 # Try a second time ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip ``` Does anyone know why that is and how to fix it?
https://api.github.com/repos/huggingface/datasets
null
855,999,133
https://api.github.com/repos/huggingface/datasets/issues/2212/comments
MDU6SXNzdWU4NTU5OTkxMzM=
null
2,212
https://api.github.com/repos/huggingface/datasets/issues/2212/events
false
closed
2021-04-12T13:38:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/2211
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4", "events_url": "https://api.github.com/users/hanss0n/events{/privacy}", "followers_url": "https://api.github.com/users/hanss0n/followers", "following_url": "https://api.github.com/users/hanss0n/following{/other_user}", "gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hanss0n", "id": 21348833, "login": "hanss0n", "node_id": "MDQ6VXNlcjIxMzQ4ODMz", "organizations_url": "https://api.github.com/users/hanss0n/orgs", "received_events_url": "https://api.github.com/users/hanss0n/received_events", "repos_url": "https://api.github.com/users/hanss0n/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions", "type": "User", "url": "https://api.github.com/users/hanss0n" }
https://github.com/huggingface/datasets/issues/2211
[]
false
2021-04-14T13:42:25Z
2021-04-14T13:42:25Z
null
[ "Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n", "Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you! " ]
completed
[]
Getting checksum error when trying to load lc_quad dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2211/timeline
I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running: ```Python lc_quad = load_dataset("lc_quad") ``` which is giving me the following error: ``` Using custom data configuration default Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7... --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-42-404ace83f73c> in <module>() ----> 1 lc_quad = load_dataset("lc_quad") 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip'] ``` Does anyone know why this could be and how I fix it?
https://api.github.com/repos/huggingface/datasets
null
855,988,410
https://api.github.com/repos/huggingface/datasets/issues/2211/comments
MDU6SXNzdWU4NTU5ODg0MTA=
null
2,211
https://api.github.com/repos/huggingface/datasets/issues/2211/events
false
closed
2021-04-12T08:33:02Z
null
https://api.github.com/repos/huggingface/datasets/issues/2210
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen" }
https://github.com/huggingface/datasets/issues/2210
[]
false
2021-04-13T02:03:05Z
2021-04-13T02:03:05Z
null
[ "Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.", "Hi, thank you for your answer. I did not realize that my issue stems from the same problem. " ]
completed
[]
dataloading slow when using HUGE dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2210/timeline
Hi, When I use datasets with 600GB data, the dataloading speed increases significantly. I am experimenting with two datasets, and one is about 60GB and the other 600GB. Simply speaking, my code uses `datasets.set_format("torch")` function and let pytorch-lightning handle ddp training. When looking at the pytorch-lightning supported profile of two different runs, I see that fetching a batch(`get_train_batch`) consumes an unreasonable amount of time when data is large. What could be the cause? * 60GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 200.33 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 71.994 |1 | 71.994 | 35.937 | run_training_batch | 0.64373 |100 | 64.373 | 32.133 | optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 | training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 | model_backward | 0.37552 |100 | 37.552 | 18.745 | model_forward | 0.22813 |100 | 22.813 | 11.387 | training_step | 0.22759 |100 | 22.759 | 11.361 | get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 | ``` * 600GB data ``` Action | Mean duration (s) |Num calls | Total time (s) | Percentage % | ------------------------------------------------------------------------------------------------------------------------------------ Total | - |_ | 3285.6 | 100 % | ------------------------------------------------------------------------------------------------------------------------------------ run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 | run_training_batch | 7.2596 |100 | 725.96 | 22.095 | optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 | training_step_and_backward | 7.223 |100 | 722.3 | 21.984 | model_backward | 6.9662 |100 | 696.62 | 21.202 | get_train_batch | 6.322 |100 | 632.2 | 19.241 | model_forward | 0.24902 |100 | 24.902 | 0.75789 | training_step | 0.2485 |100 | 24.85 | 0.75633 | ```
https://api.github.com/repos/huggingface/datasets
null
855,709,400
https://api.github.com/repos/huggingface/datasets/issues/2210/comments
MDU6SXNzdWU4NTU3MDk0MDA=
null
2,210
https://api.github.com/repos/huggingface/datasets/issues/2210/events
false
closed
2021-04-12T07:16:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/2209
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2209/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2209
[]
false
2021-04-12T17:55:52Z
2021-04-12T17:55:52Z
null
[]
null
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
Add code of conduct to the project
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2209/timeline
Add code of conduct to the project and link it from README and CONTRIBUTING. This was already done in `transformers`.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2209.diff", "html_url": "https://github.com/huggingface/datasets/pull/2209", "merged_at": "2021-04-12T17:55:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/2209.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2209" }
855,638,232
https://api.github.com/repos/huggingface/datasets/issues/2209/comments
MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2
null
2,209
https://api.github.com/repos/huggingface/datasets/issues/2209/events
true
closed
2021-04-11T16:08:03Z
null
https://api.github.com/repos/huggingface/datasets/issues/2208
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2208/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2208/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2208
[]
false
2021-04-14T22:05:36Z
2021-04-14T13:40:51Z
null
[ "merging since the CI is fixed on master" ]
null
[]
Remove Python2 leftovers
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2208/timeline
This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2208.diff", "html_url": "https://github.com/huggingface/datasets/pull/2208", "merged_at": "2021-04-14T13:40:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/2208.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2208" }
855,343,835
https://api.github.com/repos/huggingface/datasets/issues/2208/comments
MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw
null
2,208
https://api.github.com/repos/huggingface/datasets/issues/2208/events
true
closed
2021-04-11T10:03:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/2207
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2207
[]
false
2022-06-01T16:23:08Z
2022-06-01T16:21:10Z
null
[ "Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features['label'].int2str(i)`.\r\n", "Hi! You can also easily reorder the label with the [`Dataset.align_labels_with_mapping`](https://huggingface.co/docs/datasets/master/en/process#align) method." ]
completed
[]
making labels consistent across the datasets
NONE
https://api.github.com/repos/huggingface/datasets/issues/2207/timeline
Hi For accessing the labels one can type ``` >>> a.features['label'] ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None) ``` The labels however are not consistent with the actual labels sometimes, for instance in case of XNLI, the actual labels are 0,1,2, but if one try to access as above they are entailment, neutral,contradiction, it would be great to have the labels consistent. thanks
https://api.github.com/repos/huggingface/datasets
null
855,267,383
https://api.github.com/repos/huggingface/datasets/issues/2207/comments
MDU6SXNzdWU4NTUyNjczODM=
null
2,207
https://api.github.com/repos/huggingface/datasets/issues/2207/events
false
closed
2021-04-11T08:40:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/2206
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4", "events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}", "followers_url": "https://api.github.com/users/yana-xuyan/followers", "following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}", "gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yana-xuyan", "id": 38536635, "login": "yana-xuyan", "node_id": "MDQ6VXNlcjM4NTM2NjM1", "organizations_url": "https://api.github.com/users/yana-xuyan/orgs", "received_events_url": "https://api.github.com/users/yana-xuyan/received_events", "repos_url": "https://api.github.com/users/yana-xuyan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions", "type": "User", "url": "https://api.github.com/users/yana-xuyan" }
https://github.com/huggingface/datasets/issues/2206
[]
false
2021-11-10T12:18:30Z
2021-11-10T12:04:28Z
null
[ "Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assumed range.\r\nCan you please provide a minimal reproducible example for more help?", "Hi @yana-xuyan, thanks for reporting.\r\n\r\nAs clearly @mariosasko explained, `datasets` performs some optimizations in order to reduce the size of the dataset cache files. And one of them is storing the field `special_tokens_mask` as `int8`, which means that this field can only contain integers between `-128` to `127`. As your message error states, one of the values of this field is `50259`, and therefore it cannot be stored as an `int8`.\r\n\r\nMaybe we could implement a way to disable this optimization and allow using any integer value; although the size of the cache files would be much larger.", "I'm facing same issue @mariosasko @albertvillanova \r\n\r\n```\r\nArrowInvalid: Integer value 50260 not in range: -128 to 127\r\n```\r\n\r\nTo reproduce:\r\n```python\r\nSPECIAL_TOKENS = ['<bos>','<eos>','<speaker1>','<speaker2>','<pad>']\r\nATTR_TO_SPECIAL_TOKEN = {\r\n 'bos_token': '<bos>', \r\n 'eos_token': '<eos>', \r\n 'pad_token': '<pad>',\r\n 'additional_special_tokens': ['<speaker1>', '<speaker2>']\r\n }\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(\"gpt2\", use_fast=False)\r\nnum_added_tokens =tokenizer.add_special_tokens(ATTR_TO_SPECIAL_TOKEN)\r\nvocab_size = len(self.tokenizer.encoder) + num_added_tokens\r\nvocab =tokenizer.get_vocab()\r\n\r\npad_index = tokenizer.pad_token_id\r\neos_index = tokenizer.eos_token_id\r\nbos_index = tokenizer.bos_token_id\r\nspeaker1_index = vocab[\"<speaker1>\"]\r\nspeaker2_index = vocab[\"<speaker2>\"]\r\n```\r\n\r\n```python\r\ntokenizer.decode(['50260'])\r\n'<speaker1>'\r\n```", "@mariosasko \r\nI am hitting this bug in the Bert tokenizer too. I see that @albertvillanova labeled this as a bug back in April. Has there been a fix released yet?\r\nWhat I did for now is to just disable the optimization in the HF library. @yana-xuyan and @thomas-happify, is that what you did and did that work for you?\r\n\r\n", "Hi @gregg-ADP, \r\n\r\nThis is still a bug.\r\n\r\nAs @albertvillanova has suggested, maybe it's indeed worth adding a variable to `config.py` to have a way to disable this behavior.\r\n\r\nIn the meantime, this forced optimization can be disabled by specifying `features` (of the returned examples) in the `map` call:\r\n```python\r\nfrom datasets import *\r\n... # dataset init\r\nds.map(process_example, features=Features({\"special_tokens_mask\": Sequence(Value(\"int32\")), ... rest of the features}) \r\n```\r\n\r\ncc @lhoestq so he is also aware of this issue", "Thanks for the quick reply @mariosasko. What I did was to changed the optimizer to use int32 instead of int8. \r\nWhat you're suggesting specifies the type for each feature explicitly without changing the HF code. This is definitely a better option. However, we are hitting a new error later:\r\n```\r\n File \"/Users/ccccc/PycharmProjects/aaaa-ml/venv-source/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\nTypeError: forward() got an unexpected keyword argument 'pos'\r\n\r\n```\r\nWhere 'pos' is the name of a new feature we added. 
Do you agree that your way of fixing the optimizer issue will not fix our new issue? If not, I will continue with this optimizer fix until we resolve our other issue.\r\n", "Hi @gwc4github,\r\n\r\nthe fix was merged a few minutes ago, and it doesn't require any changes on the user side (e.g. no need for specifying `features`). If you find time, feel free to install `datasets` from master with:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```\r\nand let us know if it works for your use case! " ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer
NONE
https://api.github.com/repos/huggingface/datasets/issues/2206/timeline
I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below: Traceback (most recent call last): File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single writer.write(example) File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write self.write_on_file() File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__ out = out.cast(pa.list_(self.optimized_int_type)) File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127 Do you have any idea about it?
https://api.github.com/repos/huggingface/datasets
null
855,252,415
https://api.github.com/repos/huggingface/datasets/issues/2206/comments
MDU6SXNzdWU4NTUyNTI0MTU=
null
2,206
https://api.github.com/repos/huggingface/datasets/issues/2206/events
false
closed
2021-04-11T03:18:05Z
null
https://api.github.com/repos/huggingface/datasets/issues/2205
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2205/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2205/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "events_url": "https://api.github.com/users/gaguilar/events{/privacy}", "followers_url": "https://api.github.com/users/gaguilar/followers", "following_url": "https://api.github.com/users/gaguilar/following{/other_user}", "gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gaguilar", "id": 5833357, "login": "gaguilar", "node_id": "MDQ6VXNlcjU4MzMzNTc=", "organizations_url": "https://api.github.com/users/gaguilar/orgs", "received_events_url": "https://api.github.com/users/gaguilar/received_events", "repos_url": "https://api.github.com/users/gaguilar/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions", "type": "User", "url": "https://api.github.com/users/gaguilar" }
https://github.com/huggingface/datasets/pull/2205
[]
false
2021-04-12T17:53:34Z
2021-04-12T17:53:34Z
null
[]
null
[]
Updating citation information on LinCE readme
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2205/timeline
Hi! I just updated the citation information in this PR. It had an additional bibtex from one of the datasets used in LinCE and then the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset. Thanks!
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2205.diff", "html_url": "https://github.com/huggingface/datasets/pull/2205", "merged_at": "2021-04-12T17:53:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2205.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2205" }
855,207,605
https://api.github.com/repos/huggingface/datasets/issues/2205/comments
MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw
null
2,205
https://api.github.com/repos/huggingface/datasets/issues/2205/events
true
closed
2021-04-10T19:58:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2204
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marrodion", "id": 44571847, "login": "marrodion", "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "organizations_url": "https://api.github.com/users/marrodion/orgs", "received_events_url": "https://api.github.com/users/marrodion/received_events", "repos_url": "https://api.github.com/users/marrodion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "type": "User", "url": "https://api.github.com/users/marrodion" }
https://github.com/huggingface/datasets/pull/2204
[]
false
2021-04-15T13:49:46Z
2021-04-15T13:49:46Z
null
[]
null
[]
Add configurable options to `seqeval` metric
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2204/timeline
Fixes #2148 Adds options to use strict mode, different schemes of evaluation, sample weight and adjust zero_division behavior, if encountered. `seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea).
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2204.diff", "html_url": "https://github.com/huggingface/datasets/pull/2204", "merged_at": "2021-04-15T13:49:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/2204.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2204" }
855,144,431
https://api.github.com/repos/huggingface/datasets/issues/2204/comments
MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2
null
2,204
https://api.github.com/repos/huggingface/datasets/issues/2204/events
true
closed
2021-04-10T12:10:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/2203
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2203/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4", "events_url": "https://api.github.com/users/hsali/events{/privacy}", "followers_url": "https://api.github.com/users/hsali/followers", "following_url": "https://api.github.com/users/hsali/following{/other_user}", "gists_url": "https://api.github.com/users/hsali/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hsali", "id": 6765330, "login": "hsali", "node_id": "MDQ6VXNlcjY3NjUzMzA=", "organizations_url": "https://api.github.com/users/hsali/orgs", "received_events_url": "https://api.github.com/users/hsali/received_events", "repos_url": "https://api.github.com/users/hsali/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hsali/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hsali/subscriptions", "type": "User", "url": "https://api.github.com/users/hsali" }
https://github.com/huggingface/datasets/pull/2203
[]
false
2021-04-23T14:33:39Z
2021-04-23T14:33:39Z
null
[ "Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ?", "Closing for inactivity. Feel free to re-open if you want to push this change" ]
null
[]
updated banking77 train and test data
NONE
https://api.github.com/repos/huggingface/datasets/issues/2203/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2203.diff", "html_url": "https://github.com/huggingface/datasets/pull/2203", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2203.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2203" }
855,053,595
https://api.github.com/repos/huggingface/datasets/issues/2203/comments
MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5
null
2,203
https://api.github.com/repos/huggingface/datasets/issues/2203/events
true
closed
2021-04-09T12:58:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2202
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2202/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2202
[]
false
2021-04-12T17:58:00Z
2021-04-12T17:57:59Z
null
[]
null
[]
Add classes GenerateMode, DownloadConfig and Version to the documentation
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2202/timeline
Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`. Update the docstring of `load_dataset` to create cross-reference links to the classes. Related to #2187.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2202.diff", "html_url": "https://github.com/huggingface/datasets/pull/2202", "merged_at": "2021-04-12T17:57:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2202.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2202" }
854,501,109
https://api.github.com/repos/huggingface/datasets/issues/2202/comments
MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx
null
2,202
https://api.github.com/repos/huggingface/datasets/issues/2202/events
true
closed
2021-04-09T12:56:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2201
null
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2201
[]
false
2021-04-12T13:32:17Z
2021-04-12T13:32:16Z
null
[]
null
[]
Fix ArrowWriter overwriting features in ArrowBasedBuilder
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2201/timeline
This should fix the issues with CSV loading experienced in #2153 and #2200. The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data. The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the features from the arrow data, discarding the features passed by the user. I fixed that and I updated the tests
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2201.diff", "html_url": "https://github.com/huggingface/datasets/pull/2201", "merged_at": "2021-04-12T13:32:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/2201.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2201" }
854,499,563
https://api.github.com/repos/huggingface/datasets/issues/2201/comments
MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3
null
2,201
https://api.github.com/repos/huggingface/datasets/issues/2201/events
true
closed
2021-04-09T11:47:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2200
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4", "events_url": "https://api.github.com/users/Gforky/events{/privacy}", "followers_url": "https://api.github.com/users/Gforky/followers", "following_url": "https://api.github.com/users/Gforky/following{/other_user}", "gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Gforky", "id": 4157614, "login": "Gforky", "node_id": "MDQ6VXNlcjQxNTc2MTQ=", "organizations_url": "https://api.github.com/users/Gforky/orgs", "received_events_url": "https://api.github.com/users/Gforky/received_events", "repos_url": "https://api.github.com/users/Gforky/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gforky/subscriptions", "type": "User", "url": "https://api.github.com/users/Gforky" }
https://github.com/huggingface/datasets/issues/2200
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-06-04T10:37:35Z
2021-06-04T10:37:35Z
null
[ "Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201", "> Hi ! This might be related to #2153\r\n> \r\n> You're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\n> I'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n> \r\n> EDIT: opened #2201\r\n\r\nGlad to hear that! Thank you for your fix, I'm new to huggingface, it's a fantastic project 😁" ]
completed
[]
_prepare_split will overwrite DatasetBuilder.info.features
NONE
https://api.github.com/repos/huggingface/datasets/issues/2200/timeline
Hi, here is my issue: I initialized a Csv datasetbuilder with specific features: ``` def get_dataset_features(data_args): features = {} if data_args.text_features: features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")}) if data_args.num_features: features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")}) if data_args.label_classes: features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(",")) else: features["label"] = hf_features.Value("float32") return hf_features.Features(features) datasets = load_dataset(extension, data_files=data_files, sep=data_args.delimiter, header=data_args.header, column_names=data_args.column_names.split(",") if data_args.column_names else None, features=get_dataset_features(data_args=data_args)) ``` The `features` is printout as below before `builder_instance.as_dataset` is called: ``` {'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ```` But after the `builder_instance.as_dataset` is called for Csv dataset builder, the `features` is changed to: ``` {'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)} ``` After digged into the code, I releazed that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info's features will be overwrited by `ArrowWriter`'s `_features`. But `ArrowWriter` is initailized without passing `features`. So my concern is: It's this overwrite must be done, or, should it be an option to pass features in `_prepare_split` function?
https://api.github.com/repos/huggingface/datasets
null
854,449,656
https://api.github.com/repos/huggingface/datasets/issues/2200/comments
MDU6SXNzdWU4NTQ0NDk2NTY=
null
2,200
https://api.github.com/repos/huggingface/datasets/issues/2200/events
false
closed
2021-04-09T11:01:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/2199
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2199/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2199
[]
false
2021-04-09T15:57:05Z
2021-04-09T15:57:05Z
null
[ "Hi @lhoestq, could you please check if this makes sense? Thanks.", "What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?\r\nThis way future datasets can also be reloaded from older versions of the lib\r\n\r\n`_indices_files` was introduced in a recent PR and was not released", "Yes, I have seen it is not released yet...\r\n\r\nYou are right! It was your awesome PR on Tables which renamed this. If there is no particular reason for this renaming, yes, we could switch it back to the previous `_indices_data_files`. ;)" ]
null
[]
Fix backward compatibility in Dataset.load_from_disk
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2199/timeline
Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files". Related to #2195.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2199.diff", "html_url": "https://github.com/huggingface/datasets/pull/2199", "merged_at": "2021-04-09T15:57:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2199.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2199" }
854,417,318
https://api.github.com/repos/huggingface/datasets/issues/2199/comments
MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3
null
2,199
https://api.github.com/repos/huggingface/datasets/issues/2199/events
true
closed
2021-04-09T09:39:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/2198
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2198/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2198/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://github.com/huggingface/datasets/pull/2198
[]
false
2021-04-16T14:11:46Z
2021-04-16T14:11:46Z
null
[ "From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https://github.com/huggingface/transformers/pull/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evolve.\r\n\r\n@bhavitvyamalik I'm closing the PR for now if you don't mind" ]
null
[]
added file_permission in load_dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2198/timeline
As discussed in #2065 I've added `file_permission` argument in `load_dataset`. Added mainly 2 things here: 1) Permission of downloaded datasets when converted to .arrow files can be changed with argument `file_permission` argument in `load_dataset` (default is 0o644 only) 2) Incase the user uses `map` later on to generate another cache file of dataset, it ensures the permissions of newly generated file are similar to that of` *-train.arrow` file inside cache_dir for that dataset.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2198.diff", "html_url": "https://github.com/huggingface/datasets/pull/2198", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2198.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2198" }
854,357,481
https://api.github.com/repos/huggingface/datasets/issues/2198/comments
MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz
null
2,198
https://api.github.com/repos/huggingface/datasets/issues/2198/events
true
closed
2021-04-09T09:37:57Z
null
https://api.github.com/repos/huggingface/datasets/issues/2197
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2197
[]
false
2021-04-09T09:54:40Z
2021-04-09T09:54:39Z
null
[]
null
[]
fix missing indices_files in load_form_disk
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2197/timeline
This should fix #2195 `load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2197.diff", "html_url": "https://github.com/huggingface/datasets/pull/2197", "merged_at": "2021-04-09T09:54:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/2197.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2197" }
854,356,559
https://api.github.com/repos/huggingface/datasets/issues/2197/comments
MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw
null
2,197
https://api.github.com/repos/huggingface/datasets/issues/2197/events
true
closed
2021-04-09T03:49:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2196
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen" }
https://github.com/huggingface/datasets/issues/2196
[]
false
2021-04-12T05:25:29Z
2021-04-12T05:25:29Z
null
[ "Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid having to load the dataset in RAM, even after many transforms", "Thanks @lhoestq! Hmm.. that's strange because I specifically turned off auto caching, and saved mapped result, using `save_to_disk`, to another location. At this location, the following file is created:`355G\tcache-ed205e500a7dc44c.arrow`\r\n\r\nTo my observation, both `load_dataset` and `map` creates `cache-*` files, and I wonder what the `cache-*` file from `load_dataset` is for (as I believe the same information is stored in `json-train.arrow`.", "This is a wrong report -- `cache-*` files are created only my `map`, not by `load_dataset`. " ]
completed
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
`load_dataset` caches two arrow files?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2196/timeline
Hi, I am using datasets to load large json file of 587G. I checked the cached folder and found that there are two arrow files created: * `cache-ed205e500a7dc44c.arrow` - 355G * `json-train.arrow` - 582G Why is the first file created? If I delete it, would I still be able to `load_from_disk`?
https://api.github.com/repos/huggingface/datasets
null
854,126,114
https://api.github.com/repos/huggingface/datasets/issues/2196/comments
MDU6SXNzdWU4NTQxMjYxMTQ=
null
2,196
https://api.github.com/repos/huggingface/datasets/issues/2196/events
false
closed
2021-04-09T01:37:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/2195
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr" }
https://github.com/huggingface/datasets/issues/2195
[]
false
2021-04-09T09:55:09Z
2021-04-09T09:54:39Z
null
[ "Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...", "Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
KeyError: '_indices_files' in `arrow_dataset.py`
NONE
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset. Trace: ``` Traceback (most recent call last): File "load_data.py", line 11, in <module> dataset = load_from_disk(SRC) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory) File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk if state["_indices_files"]: KeyError: '_indices_files' ``` I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions: https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634 May I suggest using `state.get()` instead of directly indexing the dictionary? @lhoestq
https://api.github.com/repos/huggingface/datasets
null
854,070,194
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
MDU6SXNzdWU4NTQwNzAxOTQ=
null
2,195
https://api.github.com/repos/huggingface/datasets/issues/2195/events
false
closed
2021-04-08T21:02:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/2194
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
https://github.com/huggingface/datasets/issues/2194
[]
false
2021-04-09T16:56:50Z
2021-04-09T01:52:57Z
null
[ "\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n" ]
completed
[]
py3.7: TypeError: can't pickle _LazyModule objects
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2194/timeline
While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install: ``` git clone https://github.com/huggingface/transformers cd transformers pip install -e .[testing] export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \ examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \ --per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \ --fp16 ``` ``` Traceback (most recent call last): File "examples/language-modeling/run_clm.py", line 453, in <module> main() File "examples/language-modeling/run_clm.py", line 336, in main load_from_cache_file=not data_args.overwrite_cache, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp> for k, dataset in self.items() File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map update_data=update_data, File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps dump(obj, file) File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump Pickler(file, recurse=True).dump(obj) File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function obj=obj, File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save rv = reduce(self.proto) TypeError: can't pickle _LazyModule objects ``` ``` $ python --version Python 3.7.4 $ python -m torch.utils.collect_env Collecting environment information... PyTorch version: 1.8.0.dev20210110+cu110 Is debug build: False CUDA used to build PyTorch: 11.0 ROCM used to build PyTorch: N/A OS: Ubuntu 20.04.2 LTS (x86_64) GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0 Clang version: 10.0.0-4ubuntu1 CMake version: version 3.16.3 ``` Thanks.
https://api.github.com/repos/huggingface/datasets
null
853,909,452
https://api.github.com/repos/huggingface/datasets/issues/2194/comments
MDU6SXNzdWU4NTM5MDk0NTI=
null
2,194
https://api.github.com/repos/huggingface/datasets/issues/2194/events
false
closed
2021-04-08T18:16:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/2193
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/norabelrose", "id": 39116809, "login": "norabelrose", "node_id": "MDQ6VXNlcjM5MTE2ODA5", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "repos_url": "https://api.github.com/users/norabelrose/repos", "site_admin": false, "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "type": "User", "url": "https://api.github.com/users/norabelrose" }
https://github.com/huggingface/datasets/issues/2193
[]
false
2021-04-26T16:13:59Z
2021-04-26T16:13:59Z
null
[ "Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoiding many arrow<->python conversions especially during writing.\r\n\r\nI'll let you know how it goes !", "@lhoestq Thanks for the response— it's great to hear that we'll be getting a much faster `filter` method soon. However, my use case does also involve using `map` over a single column in order to pre-compute roughly uniformly sized batches, and right now that is also very slow. Is there any plan to make `map` faster for single column operations?\r\n\r\nIf that's not a priority for the maintainers right now, I could try my hand at adding the feature, but I can't guarantee I would do a good job given my lack of familiarity with pyarrow.", "Currently the optimal setup for single-column computations is probably to do something like\r\n```python\r\nresult = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n```\r\nThis has two advantages:\r\n- input_columns=\"my_col\" allows to only read the column \"my_col\"\r\n- remove_columns=dataset.column_names makes `map` only keep the output of your function `f`, and it drops the other columns of the dataset instead of keeping them.\r\n\r\nLet me know if it improves speed on your side.\r\n\r\nYou can also get more speed by using `batched=True` and setting `num_proc=` for multiprocessing", "Hi @lhoestq ,\r\n\r\nI'm hijacking this issue, because I'm currently trying to do the approach you recommend:\r\n\r\n> Currently the optimal setup for single-column computations is probably to do something like\r\n> \r\n> ```python\r\n> result = dataset.map(f, input_columns=\"my_col\", remove_columns=dataset.column_names)\r\n> ```\r\n\r\nHere is my code: (see edit, in which I added a simplified version\r\n\r\n```\r\nThis is the error:\r\n```bash\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 8964 but got length 1000\r\n```\r\nI wonder why this error occurs, when I delete every column? Can you give me a hint?\r\n\r\n### Edit:\r\nI preprocessed my dataset before (using map with the features argument) and saved it to disk. May this be part of the error? I can iterate over the\r\ncomplete dataset and print every sample before calling map. There seems to be no other problem with the dataset.\r\n\r\nI tried to simplify the code that crashes:\r\n\r\n```python\r\n# works\r\nlog.debug(dataset.column_names)\r\nlog.debug(dataset)\r\nfor i, sample in enumerate(dataset):\r\n log.debug(i, sample)\r\n\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n)\r\n```\r\n\r\n```\r\npyarrow.lib.ArrowInvalid: Column 1 named tokens expected length 20 but got length 1000\r\n```\r\n\r\nEdit2: \r\n\r\nMay this be a problem with a schema I set when preprocessing the dataset before? 
I tried to add the `features` argument to the function and then I get a new error:\r\n\r\n```python\r\n# crashes\r\ncounted_dataset = dataset.map(\r\n lambda x: {\"a\": list(range(20))},\r\n input_columns=column,\r\n remove_columns=dataset.column_names,\r\n load_from_cache_file=False,\r\n num_proc=num_workers,\r\n batched=True,\r\n features=datasets.Features(\r\n {\r\n \"a\": datasets.Sequence(datasets.Value(\"int32\"))\r\n }\r\n )\r\n)\r\n```\r\n\r\n```\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1704, in _map_single\r\n writer.write_batch(batch)\r\n File \"env/lib/python3.8/site-packages/datasets/arrow_writer.py\", line 312, in write_batch\r\n col_type = schema.field(col).type if schema is not None else None\r\n File \"pyarrow/types.pxi\", line 1341, in pyarrow.lib.Schema.field\r\nKeyError: 'Column tokens does not exist in schema'\r\n```", "Hi ! Can you open a separate issue for that ?\r\nAlso if you could provide a google colab or a sample code to reproduce this issue that would be helpful.\r\nOn my side I was not able to reproduce this error.", "@lhoestq Sorry I'm just responding now. I'm currently using your recommendation for the map on a single column, and I've gotten it to be fast enough to sort of work for my use case by just setting `num_proc=10`, although it's still quite slow. It's clear that it is still loading the entirety of each row into memory and then discarding everything except the selected column, instead of exploiting the columnar data format to only load the selected column.\r\n\r\nMy code is like this:\r\n```\r\n self.dataset = self.dataset.sort('num_tokens')\r\n batch_dataset = self.dataset.map(\r\n\tcompute_uniform_sized_batches,\r\n\tbatched=True, batch_size=10_000, num_proc=10, input_columns=['num_tokens'],\r\n\tremove_columns=get_columns_all_equal(self.dataset),\r\n\twith_indices=True,\r\n\tfn_kwargs=dict(max_size=tokens_per_batch)\r\n)\r\nself.batches = {\r\n\tname: list(zip(split['start'], split['length']))\r\n\tfor name, split in batch_dataset.items()\r\n}\r\n```\r\nI find that the processes with higher IDs take significantly longer to complete, presumably because the dataset is sorted by article length and they're loading the entire article text into memory, instead of just the 'num_tokens' column.\r\n\r\nI should note that my batching procedure would work best if I just used `batch_size=None` and loaded the whole column into memory at once, but I found that this was intolerably slow and gave me no progress information, so I'm using the less than ideal `batch_size=10_000`.", "Hi @norabelrose ! I'm glad you managed to make this work on your side.\r\nRegarding memory usage, you can try to drop the columns that you don't want to use for your `map` for now.\r\n\r\nIn the future we'll try to find a way to not load unnecessary columns in memory in `map`. Currently the way it works is that it gets the batch as a python dict, then it updates it using the output of your mapping function, and finally it removes columns from `remove_columns`. 
Therefore for a moment some columns are loaded in memory even if you remove them or don't use them for your mapping function.\r\n\r\nIt would be nice to have a way to optimize memory for cases such as yours !", "@lhoestq After looking through the source code, it looks like the following solution has at least some chance of working:\r\n- refactor `Dataset.map()` so that the `input_columns` parameter is implemented by using the `self.formatted_as()` context manager with `columns=input_columns`\r\n- change `Dataset._getitem()` so that it passes `self._data.drop(drop_columns)` to the `query_table()` function whenever `format_columns` is non-None and `output_all_columns` is False, instead of `self._data` itself", "Looks like a great direction :)\r\nNote that `query_table` doesn't bring data into memory. Only `format_table` does.\r\nAlso the dataset may already have a format with `columns=` already defined so we would need to define the formatted `input_dataset` like:\r\n```python\r\n# before the `map` main for loop\r\ninput_columns = input_columns if input_columns is not None else self.column_names\r\nif not self._output_all_columns:\r\n columns = [col for col in input_columns if self._format_columns is None or col in self._format_columns]\r\n input_dataset = self.with_format(\r\n type=self._format_type,\r\n columns=columns\r\n )\r\nelse:\r\n # in this case we could find a way to filter both format_columns and unformatted columns eventually\r\n input_dataset = self\r\n# then input_dataset can be used in the main for loop of `map`\r\n```\r\n\r\nEDIT: oh and regarding streaming format versus file format for arrow, we plan to start using the file format #1933 at one point (though I'm not sure if it would improve performance)", "Good to know about `query_table` not bringing anything into memory. I was under the impression that it did because a while back I looked at my `map` operation in pdb and it looked like it was spending forever in line 93 of formatting.py, `return pa.concat_tables(....)`, although that was before the `fast_slice` interpolation search was implemented, so it may have had more to do with the slow ChunkedArray slice implementation than anything else.\r\n\r\nIf `query_table` is I/O free then the fix may be as simple as just adding this to line 1779 of arrow_dataset.py:\r\n```python\r\n# Only load the columns we actually need\r\nif input_columns:\r\n stack.enter_context(self.formatted_as(\r\n self._format_type,\r\n columns=input_columns,\r\n output_all_columns=False,\r\n **self._format_kwargs\r\n ))\r\n```\r\nIt's not clear to me why the `[col for col in input_columns if self._format_columns is None or col in self._format_columns]` check would be necessary— it seems like either `input_columns` should simply temporarily override the `_format_columns` within the `map` operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within `map`, but maybe I'm just missing it.", "`query_table` simply slices/concatenates parts of the table. 
The actual data inside the table is not brought in memory.\r\nAlso I'm more in favor of declaring `input_dataset = self.with_format(...)` since `formatted_as` may update the dataset fingerprint of `self`, which is not expected when someone runs `map`.\r\n\r\n> It's not clear to me why the [col for col in input_columns if self._format_columns is None or col in self._format_columns] check would be necessary— it seems like either input_columns should simply temporarily override the _format_columns within the map operation, or we should throw an error if there are any conflicts. Currently it doesn't look like this case is checked for at all within map, but maybe I'm just missing it.\r\n\r\nActually yes we can just use input_columns. And we do need to add a check to make sure there are not conflicts or this could lead to confusing errors.", "That sounds good to me! I just submitted a PR (#2246) implementing your approach. I also changed how `_query_table` handles Iterable keys since it still seemed like `pa.concat_tables` was taking a long time to create the table for each batch. Now my whole `map()` operation takes 1 min 46 seconds where it used to take somewhere on the order of 10 minutes." ]
completed
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
Filtering/mapping on one column is very slow
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2193/timeline
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation. I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API. I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset. PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
https://api.github.com/repos/huggingface/datasets
null
853,725,707
https://api.github.com/repos/huggingface/datasets/issues/2193/comments
MDU6SXNzdWU4NTM3MjU3MDc=
null
2,193
https://api.github.com/repos/huggingface/datasets/issues/2193/events
false
closed
2021-04-08T14:42:24Z
null
https://api.github.com/repos/huggingface/datasets/issues/2192
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2192/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2192/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik" }
https://github.com/huggingface/datasets/pull/2192
[]
false
2021-04-08T15:47:41Z
2021-04-08T15:47:40Z
null
[]
null
[]
Fix typo in huggingface hub
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2192/timeline
pip knows how to resolve to `huggingface_hub`, but conda doesn't! The `packaging` dependency is also required for the build to complete.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2192.diff", "html_url": "https://github.com/huggingface/datasets/pull/2192", "merged_at": "2021-04-08T15:47:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/2192.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2192" }
853,547,910
https://api.github.com/repos/huggingface/datasets/issues/2192/comments
MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0
null
2,192
https://api.github.com/repos/huggingface/datasets/issues/2192/events
true
closed
2021-04-08T11:21:04Z
null
https://api.github.com/repos/huggingface/datasets/issues/2191
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2191/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2191
[]
false
2021-04-19T07:53:11Z
2021-04-19T07:53:10Z
{ "closed_at": "2021-04-20T16:50:46Z", "closed_issues": 4, "created_at": "2021-04-09T13:07:51Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-04-16T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/1", "id": 6644198, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "open_issues": 0, "state": "closed", "title": "1.6", "updated_at": "2021-04-20T16:50:46Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/1" }
[ "I find very interesting that idea of using a fixture instead!\r\n\r\nLet me rework a little bit this PR, @lhoestq.", "@lhoestq, as this is a big refactoring, I had many problems to solve the conflicts with the master branch...\r\n\r\nTherefore, I think it is better to merge this as it is, and then to make other PRs with additional refactorings, before I get conflicts again with the master branch...", "There are still some conflicts that prevent merging.\r\nMoreover I noticed that you added one fixture per method of the Dataset object to be mocked. The code of all these fixtures is pretty much the same, feel free to factorize them into one fixture.\r\n\r\nAlso feel free to create another branch from `master` if you don't want to fix the conflicts of this branch.\r\nLet me know if I can help you on this", "@lhoestq, yes, the new conflicts appeared after today merge commits on master...\r\n\r\nI am definitely going to split this PR into smaller ones in order to avoid having to resolve many conflicts after each commit on master. There are lots of conflicts and these are painful to resolve." ]
null
[ { "color": "B67A40", "default": false, "description": "Restructuring existing code without changing its external behavior", "id": 2851292821, "name": "refactoring", "node_id": "MDU6TGFiZWwyODUxMjkyODIx", "url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring" } ]
Refactorize tests to use Dataset as context manager
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2191/timeline
Refactorize Dataset tests to use Dataset as context manager.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2191.diff", "html_url": "https://github.com/huggingface/datasets/pull/2191", "merged_at": "2021-04-19T07:53:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2191.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2191" }
853,364,204
https://api.github.com/repos/huggingface/datasets/issues/2191/comments
MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0
null
2,191
https://api.github.com/repos/huggingface/datasets/issues/2191/events
true
closed
2021-04-08T07:53:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/2190
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anassalamah", "id": 8571003, "login": "anassalamah", "node_id": "MDQ6VXNlcjg1NzEwMDM=", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "repos_url": "https://api.github.com/users/anassalamah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "type": "User", "url": "https://api.github.com/users/anassalamah" }
https://github.com/huggingface/datasets/issues/2190
[]
false
2021-05-24T10:03:55Z
2021-05-24T10:03:55Z
null
[ "Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```", "Hello @albertvillanova, \r\n\r\nThanks for the suggestion. I didn't know you could do that. however, it didn't resolve the issue\r\n\r\n![image](https://user-images.githubusercontent.com/8571003/114169966-ec819400-993a-11eb-8a67-930f9a9b2290.png)\r\n" ]
completed
[]
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
NONE
https://api.github.com/repos/huggingface/datasets/issues/2190/timeline
I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True) ``` * I'm fairly new to using datasets so I might be doing something wrong
https://api.github.com/repos/huggingface/datasets
null
853,181,564
https://api.github.com/repos/huggingface/datasets/issues/2190/comments
MDU6SXNzdWU4NTMxODE1NjQ=
null
2,190
https://api.github.com/repos/huggingface/datasets/issues/2190/events
false
closed
2021-04-08T04:42:53Z
null
https://api.github.com/repos/huggingface/datasets/issues/2189
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://github.com/huggingface/datasets/issues/2189
[]
false
2022-06-01T16:32:15Z
2022-06-01T16:32:15Z
null
[ "Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon" ]
completed
[]
save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
NONE
https://api.github.com/repos/huggingface/datasets/issues/2189/timeline
As you can see, it saves the entire dataset. @lhoestq You can check by going through the following example, ``` from datasets import load_from_disk,concatenate_datasets loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset') n=20 kb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)] final_dataset=concatenate_datasets([kb_list[1],kb_list[2]]) final_dataset.save_to_disk('/home/gsir059/haha/k.arrow') ```
https://api.github.com/repos/huggingface/datasets
null
853,052,891
https://api.github.com/repos/huggingface/datasets/issues/2189/comments
MDU6SXNzdWU4NTMwNTI4OTE=
null
2,189
https://api.github.com/repos/huggingface/datasets/issues/2189/events
false
closed
2021-04-08T04:21:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2188
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4", "events_url": "https://api.github.com/users/thanh-p/events{/privacy}", "followers_url": "https://api.github.com/users/thanh-p/followers", "following_url": "https://api.github.com/users/thanh-p/following{/other_user}", "gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thanh-p", "id": 78190188, "login": "thanh-p", "node_id": "MDQ6VXNlcjc4MTkwMTg4", "organizations_url": "https://api.github.com/users/thanh-p/orgs", "received_events_url": "https://api.github.com/users/thanh-p/received_events", "repos_url": "https://api.github.com/users/thanh-p/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions", "type": "User", "url": "https://api.github.com/users/thanh-p" }
https://github.com/huggingface/datasets/issues/2188
[]
false
2021-04-08T12:13:19Z
2021-04-08T12:13:19Z
null
[ "Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```", "Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n" ]
completed
[]
Duplicate data in Timit dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2188/timeline
I ran a simple code to list all texts in Timit dataset and the texts were all the same. Is this dataset corrupted? **Code:** timit = load_dataset("timit_asr") print(*timit['train']['text'], sep='\n') **Result:** Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? Would such an act of refusal be useful? ... ... Would such an act of refusal be useful?
https://api.github.com/repos/huggingface/datasets
null
853,044,166
https://api.github.com/repos/huggingface/datasets/issues/2188/comments
MDU6SXNzdWU4NTMwNDQxNjY=
null
2,188
https://api.github.com/repos/huggingface/datasets/issues/2188/events
false
open
2021-04-08T00:16:28Z
null
https://api.github.com/repos/huggingface/datasets/issues/2187
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ioana-blue", "id": 17202292, "login": "ioana-blue", "node_id": "MDQ6VXNlcjE3MjAyMjky", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "repos_url": "https://api.github.com/users/ioana-blue/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "type": "User", "url": "https://api.github.com/users/ioana-blue" }
https://github.com/huggingface/datasets/issues/2187
[]
false
2023-01-03T18:30:38Z
null
null
[ "An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out loud here...)\r\n\r\nIf this is the case, it may be ok for my use case (have to think about it more), still a bit surprising given that datasets caching is disabled (or so I hope) by the lines I pasted above. ", "Hi ! Currently disabling the caching means that all the dataset transform like `map`, `filter` etc. ignore the cache: it doesn't write nor read processed cache files.\r\nHowever `load_dataset` reuses datasets that have already been prepared: it does reload prepared dataset files.\r\n\r\nIndeed from the documentation:\r\n> datasets.set_caching_enabled(boolean: bool)\r\n\r\n> When applying transforms on a dataset, the data are stored in cache files. The caching mechanism allows to reload an existing cache file if it’s already been computed.\r\n> Reloading a dataset is possible since the cache files are named using the dataset fingerprint, which is updated after each transform.\r\n> If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. More precisely, if the caching is disabled:\r\n> - cache files are always recreated\r\n> - cache files are written to a temporary directory that is deleted when session closes\r\n> - cache files are named using a random hash instead of the dataset fingerprint - use datasets.Dataset.save_to_disk() to save a transformed dataset or it will be deleted when session closes\r\n> - caching doesn’t affect datasets.load_dataset(). If you want to regenerate a dataset from scratch you should use the download_mode parameter in datasets.load_dataset().", "Thank you for the clarification. \r\n\r\nThis is a bit confusing. On one hand, it says that cache files are always recreated and written to a temporary directory that is removed; on the other hand the last bullet point makes me think that since the default according to the docs for `download_mode (Optional datasets.GenerateMode) – select the download/generate mode - Default to REUSE_DATASET_IF_EXISTS` => it almost sounds that it could reload prepared dataset files. Where are these files stored? I guess not in the temporary directory that is removed... \r\n\r\nI find this type of api design error-prone. When I see as a programmer `datasets.set_caching_enabled(False)` I expect no reuse of anything in the cache. ", "It would be nice if the documentation elaborated on all the possible values for `download_mode` and/or a link to `datasets.GenerateMode`. \r\nThis info here:\r\n```\r\n \"\"\"`Enum` for how to treat pre-existing downloads and data.\r\n The default mode is `REUSE_DATASET_IF_EXISTS`, which will reuse both\r\n raw downloads and the prepared dataset if they exist.\r\n The generations modes:\r\n | | Downloads | Dataset |\r\n | -----------------------------------|-----------|---------|\r\n | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse |\r\n | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh |\r\n | `FORCE_REDOWNLOAD` | Fresh | Fresh |\r\n```", "I have another question. Assuming that I understood correctly and there is reuse of datasets files when caching is disabled (!), I'm guessing there is a directory that is created based on some information on the dataset file. 
I'm interested in the situation where I'm loading a (custom) dataset from local disk. What information is used to create the directory/filenames where the files are stored?\r\n\r\nI'm concerned about the following scenario: if I have a file, let's say `train.csv` at path `the_path`, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate `train.csv` at the same path `the_path`. Is there enough information in the temporary name/hash to *not* reload the *old* prepared dataset (e.g., timestamp of the file)? Or is it going to reload the *old* prepared file? ", "Thanks for the feedback, we'll work in improving this aspect of the documentation.\r\n\r\n> Where are these files stored? I guess not in the temporary directory that is removed...\r\n\r\nWe're using the Arrow file format to load datasets. Therefore each time you load a dataset, it is prepared as an arrow file on your disk. By default the file is located in the ~/.cache/huggingface/datasets/<dataset_name>/<config_id>/<version> directory.\r\n\r\n> What information is used to create the directory/filenames where the files are stored?\r\n\r\nThe config_id contains a hash that takes into account:\r\n- the dataset loader used and its source code (e.g. the \"csv\" loader)\r\n- the arguments passed to the loader (e.g. the csv delimiter)\r\n- metadata of the local data files if any (e.g. their timestamps)\r\n\r\n> I'm concerned about the following scenario: if I have a file, let's say train.csv at path the_path, run once, the dataset is prepared, some models are run, etc. Now let's say there is an issue and I recreate train.csv at the same path the_path. Is there enough information in the temporary name/hash to not reload the old prepared dataset (e.g., timestamp of the file)? Or is it going to reload the old prepared file?\r\n\r\nYes the timestamp of the local csv file is taken into account. If you edit your csv file, the config_id will change and loading the dataset will create a new arrow file.", "Thank you for all your clarifications, really helpful! \r\n\r\nIf you have the bandwidth, please do revisit the api wrt cache disabling. Anywhere in the computer stack (hardware included) where you disable the cache, one assumes there is no caching that happens. ", "That makes total sense indeed !\r\nI think we can do the change", "I have another question about caching, this time in the case where FORCE_REDOWNLOAD is used to load the dataset, the datasets cache is one directory as defined by HF_HOME and there are multiple concurrent jobs running in a cluster using the same local dataset (i.e., same local files in the cluster). Does anything in the naming convention and/or file access/locking that you're using prevent race conditions between the concurrent jobs on the caching of the local dataset they all use?\r\n\r\nI noticed some errors (can provide more details if helpful) in load_dataset/prepare_split that lead to my question above. \r\n\r\nLet me know if my question is clear, I can elaborate more if needed @lhoestq Thank you!", "I got another error that convinces me there is a race condition (one of the test files had zero samples at prediction time). I think it comes down to the fact that the `config_id` above (used in the naming for the cache) has no information on who's touching the data. If I have 2 concurrent jobs, both loading the same dataset and forcing redownload, they may step on each other foot/caching of the dataset. 
", "We're using a locking mechanism to prevent two processes from writing at the same time. The locking is based on the `filelock` module.\r\nAlso directories that are being written use a suffix \".incomplete\" so that reading is not possible on a dataset being written.\r\n\r\nDo you think you could provide a simple code to reproduce the race condition you experienced ?", "I can provide details about the code I'm running (it's really-really close to some official samples from the huggingface transformers examples, I can point to the exact sample file, I kept a record of that). I can also describe in which conditions this race occurs (I'm convinced it has to do with forcing the redownloading of the dataset, I've been running hundreds of experiments before and didn't have a problem before I forced the redownload). I also can provide samples of the different stack errors I get and some details about the level of concurrency of jobs I was running. I can also try to imagine how the race manifests (I'm fairly sure that it's a combo of one job cleaning up and another job being in the middle of the run).\r\n\r\nHowever, I have to cleanup all this to make sure I'm no spilling any info I shouldn't be spilling. I'll try to do it by the end of the week, if you think all this is helpful. \r\n\r\nFor now, I have a workaround. Don't use forcing redownloading. And to be ultra careful (although I don't think this is a problem), I run a series of jobs that will prepare the datasets and I know there is no concurrency wrt the dataset. Once that's done (and I believe even having multiple jobs loading the datasets at the same time doesn't create problems, as long as REUSE_DATASET_IF_EXISTS is the policy for loading the dataset, so the filelock mechanism you're using is working in that scenario), the prepared datasets will be reused, no race possible in any way. \r\n\r\nThanks for all the details you provided, it helped me understand the underlying implementation and coming up with workarounds when I ran into issues. ", "Hi! I have the same challenge with caching, where the **.cache** folder is required even though it isn't possible for me.\r\n\r\nI'd like to run transformers in Snowflake, using Snowpark for Python, this would mean I could provide configurable transformers in real-time for business users without having data leave an environment (for security reasons). With no need for data transfer,n the compute is faster. It is a large use case - is it possible to entirely disable caching in certain scenarios?\r\n@lhoestq ?\r\n", "You can try to change the location of the cache folder using the `HF_CACHE_HOME` environment variable, and set a location where you have read/write access.", "Thanks @lhoestq \r\n\r\nI wanted to do that, however, snowflake does not allow it to write at all. I'm asking around to see if they can help me out with that issue 😅" ]
null
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
Question (potential issue?) related to datasets caching
NONE
https://api.github.com/repos/huggingface/datasets/issues/2187/timeline
I thought I had disabled datasets caching in my code, as follows: ``` from datasets import set_caching_enabled ... def main(): # disable caching in datasets set_caching_enabled(False) ``` However, in my log files I see messages like the following: ``` 04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877 04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93 ``` Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you!
https://api.github.com/repos/huggingface/datasets
null
852,939,736
https://api.github.com/repos/huggingface/datasets/issues/2187/comments
MDU6SXNzdWU4NTI5Mzk3MzY=
null
2,187
https://api.github.com/repos/huggingface/datasets/issues/2187/events
false
closed
2021-04-07T21:39:07Z
null
https://api.github.com/repos/huggingface/datasets/issues/2186
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2186/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://github.com/huggingface/datasets/pull/2186
[]
false
2021-04-07T21:56:35Z
2021-04-07T21:56:35Z
null
[ "cc @sebastiangehrmann" ]
null
[]
GEM: new challenge sets
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2186/timeline
This PR updates the GEM dataset to: - remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source - add context and services to Schema Guided Dialog - Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2186.diff", "html_url": "https://github.com/huggingface/datasets/pull/2186", "merged_at": "2021-04-07T21:56:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/2186.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2186" }
852,840,819
https://api.github.com/repos/huggingface/datasets/issues/2186/comments
MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0
null
2,186
https://api.github.com/repos/huggingface/datasets/issues/2186/events
true
closed
2021-04-07T18:22:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/2185
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh" }
https://github.com/huggingface/datasets/issues/2185
[]
false
2021-10-23T07:11:15Z
2021-04-09T15:38:31Z
null
[ "Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seems to be slower at the moment (#1992), hope this helps you.", "Thanks @hwijeen for the workaround, feels a bit prototypical but it works! (it seems files are written twice then though)\r\n\r\n(I haven't observed slowness using multiprocessed map function but I could be wrong)", "To my understanding, files are written twice anyhow(one after load_dataset, another aftet map). It's just that you now have it at the location where you can see, whereas it was secretlely saved at caching folder(.cache/huggingface/datasets by default)! Correct me if I'm wrong!", "Slowness in multiprocessing has been observed in certain environments but not others. We're investigating ;)", "So to answer my initial question, I was just doing something stupid as I was not re-giving the `preprocessing_num_workers` arguments when launching the distributed training (and it was then set to `None`). I initially thought the hash was computed only with the `tokenize_function` but it's all arguments. Thanks @lhoestq for clarifying!", "This cache process isn't really consistent. I just changed `per_device_train_batch_size` of training script and now it rebuilding the dataset cache!!!! Why?", "Hi ! A `map` function is recomputed if the code changes or if any of the variables it uses changes. Can you check that your function doesn't use `per_device_train_batch_size` or any variable that contains `per_device_train_batch_size` ?", "My code is actually a transformer's example for training t5, I modified a bit:\r\n\r\nhttps://github.com/puraminy/transformers/blob/4b40877132eedb566043f83de8f1d29a84d71430/examples/flax/language-modeling/run_t5_mlm_flax.py#L614\r\n\r\nNo, it doesn't use `per_device_train_batch_size`. I remember it worked for several times and then for no reason or various reasons like the above it started to build the cache again, as if it had an expiration date (maybe), or maybe I had changed the code! \r\n\r\nSo, to get rid of these problems I saved cache with a name (was forced to not use multiple_processes, because otherwise it generates multiple files) and then I load it from this cache file. " ]
completed
[]
.map() and distributed training
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2185/timeline
Hi, I have a question regarding distributed training and the `.map` call on a dataset. I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`. `dataset` is then tokenized: ```python datasets = load_from_disk(dataset_path=my_path) [...] def tokenize_function(examples): return tokenizer(examples[text_column_name]) logger.info("Mapping dataset to tokenized dataset.") tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=True, ) ``` I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split). When I relaunch the script, the map is tokenization is skipped in favor of loading the 31 previously cached files, and that's perfect. Everything so far was done by launching a **single process script**. I now launch the same training script in **distributed mode** (`pytorch -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files. I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it. **My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training. - I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case) - I am using 1.5.0 version of datasets if that matters.
https://api.github.com/repos/huggingface/datasets
null
852,684,395
https://api.github.com/repos/huggingface/datasets/issues/2185/comments
MDU6SXNzdWU4NTI2ODQzOTU=
null
2,185
https://api.github.com/repos/huggingface/datasets/issues/2185/events
false
closed
2021-04-07T16:47:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/2184
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2184/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2184/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" }
https://github.com/huggingface/datasets/pull/2184
[]
false
2021-04-16T11:44:37Z
2021-04-16T11:26:59Z
null
[ "Made the required changes @lhoestq , sorry it took so much time!" ]
null
[]
Implementation of class_encode_column
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2184/timeline
Addresses #2176 I'm happy to discuss the API and internals!
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2184.diff", "html_url": "https://github.com/huggingface/datasets/pull/2184", "merged_at": "2021-04-16T11:26:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2184.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2184" }
852,597,258
https://api.github.com/repos/huggingface/datasets/issues/2184/comments
MDExOlB1bGxSZXF1ZXN0NjEwODIxMTc0
null
2,184
https://api.github.com/repos/huggingface/datasets/issues/2184/events
true
closed
2021-04-07T15:17:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/2183
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2183/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2183
[]
false
2021-04-08T08:54:45Z
2021-04-08T08:54:44Z
null
[]
null
[]
Fix s3fs tests for py36 and py37+
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2183/timeline
Recently several changes happened: 1. latest versions of `fsspec` require python>3.7 for async features 2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`. cc @philschmid
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2183.diff", "html_url": "https://github.com/huggingface/datasets/pull/2183", "merged_at": "2021-04-08T08:54:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/2183.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2183" }
852,518,411
https://api.github.com/repos/huggingface/datasets/issues/2183/comments
MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz
null
2,183
https://api.github.com/repos/huggingface/datasets/issues/2183/events
true
closed
2021-04-07T13:00:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/2182
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2182/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2182/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2182
[]
false
2021-04-20T14:20:12Z
2021-04-20T10:04:04Z
{ "closed_at": "2021-04-20T16:50:46Z", "closed_issues": 4, "created_at": "2021-04-09T13:07:51Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-04-16T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/1", "id": 6644198, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "open_issues": 0, "state": "closed", "title": "1.6", "updated_at": "2021-04-20T16:50:46Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/1" }
[ "I ping @krandiash to keep him up to date.", "TODO:\r\n- [x] Add a section in the docs about this.\r\n- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~", "@lhoestq I have a question, regarding:\r\n> Also maybe we should add a warning if someone tries to specify cache_file_name= in map, filter etc. on a dataset that is in memory, since the computation is not going to be cached in this case.\r\n\r\n- It might be the case that the user has an in-memory dataset and might want to use `map` and cache it, by passing `cache_file_name=`\r\n- This is indeed allowed by the library and works as expected: the dataset is cached.\r\n\r\nWhy adding a warning?", "Yes right, I meant if `load_from_cache_file` is set to True and `cache_file_name ` is None. my bad :p" ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Set default in-memory value depending on the dataset size
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2182/timeline
Set a default value for `in_memory` depending on the size of the dataset to be loaded. Close #2179. TODO: - [x] Add a section in the docs about this. - ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2182.diff", "html_url": "https://github.com/huggingface/datasets/pull/2182", "merged_at": "2021-04-20T10:04:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/2182.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2182" }
852,384,872
https://api.github.com/repos/huggingface/datasets/issues/2182/comments
MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy
null
2,182
https://api.github.com/repos/huggingface/datasets/issues/2182/events
true
closed
2021-04-07T10:26:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/2181
null
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen" }
https://github.com/huggingface/datasets/issues/2181
[]
false
2021-04-12T07:15:55Z
2021-04-12T07:15:55Z
null
[ "Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well as the size of individual chunks in the dataset.\r\n\r\nYou can also try with bigger block sizes if needed", "Hi @lhoestq! Thank you for your prompt reply.\r\nI have experimented with (10<<20, 10<<28, 10<<30, 10<<33, 10<<34), since my machine has 192G of memory, but it's either the above-mentioned error or processed killed because of OOM.\r\n\r\nCould you give me a bit of background on why block size needs to be exactly calibrated?\r\nTo my understanding, small block sized should run just fine despite its slowness..\r\n\r\n\r\n", "We're using the JSON loader of pyarrow. It parses the file chunk by chunk to load the dataset.\r\nThis issue happens when there's no delimiter in one chunk of data. For json line, the delimiter is the end of line.\r\nSo with a big value for chunk_size this should have worked unless you have one extremely long line in your file.\r\n\r\nAlso what version of pyarrow are you using ?\r\n\r\nFInally I wonder if it could be an issue on pyarrow's side when using big json files. (I haven't tested big json files like yours)", "I'm using `pyarrow==3.0.0` with `datasets==1.5.0`.\r\n\r\nYour point totally makes sense. I will check if my jsonl file contains an extremely long file and let you know. \r\n\r\nHere are some different error messages that I got when tweaking `block_size`. I also suspect that this is related to the pyarrow... but I guess it would be wonderful if datasesets could give a clear guide on how to play with large datasets! (I am suddenly experiencing various issue when working with large datasets.. e.g. #1992 )\r\n```python\r\n return paj.ReadOptions(use_threads=self.use_threads, block_size=self.block_size)\r\n File \"pyarrow/_json.pyx\", line 56, in pyarrow._json.ReadOptions.__init__\r\n File \"pyarrow/_json.pyx\", line 81, in pyarrow._json.ReadOptions.block_size.__set__\r\nOverflowError: value too large to convert to int32_t\r\n```\r\n\r\n```python\r\n\r\nline 83, in _generate_tables\r\n parse_options=self.config.pa_parse_options,\r\n File \"pyarrow/_json.pyx\", line 247, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 84, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```", "I am getting the same error. When I tweak the block_size, I also find:\r\n`OverflowError: value too large to convert to int32_t`\r\nand \r\n`pyarrow.lib.ArrowInvalid: Exceeded maximum rows`\r\n", "I made more tests. I used a smaller dataset and I was getting the same error, which means that it was not necessarily linked to the dataset size. To make both my smaller and larger datasets work, I got rid of lists with the json file. 
I had the following data format:\r\n```python\r\n[\r\n {'key': \"a\", 'value': ['one', 'two', 'three']},\r\n {'key': \"b\", 'value': ['four', 'five', 'six']}\r\n]\r\n```\r\nI changed to:\r\n\r\n```python\r\n {'key': \"a\", 'value': 'one\\ntwo\\nthree'},\r\n {'key': \"b\", 'value': 'four\\nfive\\nsix']}\r\n```\r\nand that worked!\r\n\r\nI used the following to reformat my json file:\r\n```python\r\nwith open(file_name, \"w\", encoding=\"utf-8\") as f:\r\n for item in list_:\r\n f.write(json.dumps(item) + \"\\n\")\r\n```\r\nThis works with `block_size_10MB = 10 << 20` or without specifying `block_size`.", "Thanks @hwijeen for reporting and thanks @jpilaul for pointing this out.\r\n\r\nIndeed, those are different JSON-like formats:\r\n- the first one is the **standard JSON** format: all the file content is JSON-valid, thus all content is either a JSON object (between curly brackets `{...}`) or a JSON array (between square brackets `[...]`)\r\n- the second one is called **JSON Lines**: the entire file content is not JSON-valid, but only every line (newline-delimited) is JSON-valid\r\n\r\nCurrently PyArrow only supports **JSON Lines** format: \r\n- https://arrow.apache.org/docs/python/generated/pyarrow.json.read_json.html\r\n > Currently only the line-delimited JSON format is supported.\r\n- https://arrow.apache.org/docs/python/json.html\r\n > Arrow supports reading columnar data from line-delimited JSON files.", "Thanks @albertvillanova for your explanation, it is helpful to know (maybe add to docs?)!\r\nHowever, the problem I described above happened when I was dealing with jsonl files 😿\r\nAlthough I did not thoroughly inspect, I suspect the cause was the one extremely long document in my case.", "I see... I guess there is another problem going one then, related to the size." ]
completed
[]
Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
NONE
https://api.github.com/repos/huggingface/datasets/issues/2181/timeline
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project. When loading a huge json file of 500GB, pyarrow complains as follows: ``` Traceback (most recent call last): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir yield tmp_dir File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose): File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` When using only a small portion of the sample file, say first 100 lines, it works perfectly well.. I see that it is the error from pyarrow, but could you give me a hint or possible solutions? #369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance!
https://api.github.com/repos/huggingface/datasets
null
852,261,607
https://api.github.com/repos/huggingface/datasets/issues/2181/comments
MDU6SXNzdWU4NTIyNjE2MDc=
null
2,181
https://api.github.com/repos/huggingface/datasets/issues/2181/events
false
closed
2021-04-07T10:23:15Z
null
https://api.github.com/repos/huggingface/datasets/issues/2180
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2180/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2180
[]
false
2021-04-07T15:50:35Z
2021-04-07T15:50:34Z
null
[]
null
[]
Add tel to xtreme tatoeba
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2180/timeline
This should fix issue #2149
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2180.diff", "html_url": "https://github.com/huggingface/datasets/pull/2180", "merged_at": "2021-04-07T15:50:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/2180.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2180" }
852,258,635
https://api.github.com/repos/huggingface/datasets/issues/2180/comments
MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2
null
2,180
https://api.github.com/repos/huggingface/datasets/issues/2180/events
true
closed
2021-04-07T09:58:16Z
null
https://api.github.com/repos/huggingface/datasets/issues/2179
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/issues/2179
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
false
2021-04-20T10:04:04Z
2021-04-20T10:04:03Z
null
[]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
Load small datasets in-memory instead of using memory map
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2179/timeline
Currently all datasets are loaded using memory mapping by default in `load_dataset`. However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and: - its memory footprint would be small so it's ok - in-memory computations/queries would be faster - the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk) - but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed. Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
https://api.github.com/repos/huggingface/datasets
null
852,237,957
https://api.github.com/repos/huggingface/datasets/issues/2179/comments
MDU6SXNzdWU4NTIyMzc5NTc=
null
2,179
https://api.github.com/repos/huggingface/datasets/issues/2179/events
false
closed
2021-04-07T09:30:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/2178
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2178/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2178
[]
false
2021-04-20T14:20:44Z
2021-04-13T09:28:16Z
{ "closed_at": "2021-04-20T16:50:46Z", "closed_issues": 4, "created_at": "2021-04-09T13:07:51Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-04-16T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/1", "id": 6644198, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "open_issues": 0, "state": "closed", "title": "1.6", "updated_at": "2021-04-20T16:50:46Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/1" }
[ "I addressed your comments about the docstrings and the output validation :)", "I updated the bleurt mocking method and bleurt test is passing now.\r\nI also ran the slow tests and they are passing for bleurt.", "Thanks @lhoestq and @albertvillanova !" ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Fix cast memory usage by using map on subtables
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2178/timeline
The `cast` operation on a pyarrow Table may create new arrays in memory. This is an issue since users expect memory mapped datasets to not fill up the RAM. To fix that I used `map` to write a new arrow file on disk when cast is used. To make things more convenient I introduced the `arrow` formatting of a dataset, to make it return pyarrow tables instead of python dicts. This way one can use pyarrow transforms directly when using `map`. edit: we'll use the same mechanism for `filter`
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2178.diff", "html_url": "https://github.com/huggingface/datasets/pull/2178", "merged_at": "2021-04-13T09:28:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/2178.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2178" }
852,215,058
https://api.github.com/repos/huggingface/datasets/issues/2178/comments
MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1
null
2,178
https://api.github.com/repos/huggingface/datasets/issues/2178/events
true
closed
2021-04-07T06:40:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/2177
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2177/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2177/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/philschmid", "id": 32632186, "login": "philschmid", "node_id": "MDQ6VXNlcjMyNjMyMTg2", "organizations_url": "https://api.github.com/users/philschmid/orgs", "received_events_url": "https://api.github.com/users/philschmid/received_events", "repos_url": "https://api.github.com/users/philschmid/repos", "site_admin": false, "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "type": "User", "url": "https://api.github.com/users/philschmid" }
https://github.com/huggingface/datasets/pull/2177
[]
false
2021-04-07T08:16:01Z
2021-04-07T08:16:01Z
null
[]
null
[]
add social thumbnial
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2177/timeline
# What does this PR do? I added OpenGraph/ Twitter Card support to the docs to create nice social thumbnails. ![Bildschirmfoto 2021-04-07 um 08 36 50](https://user-images.githubusercontent.com/32632186/113821698-bac2ce80-977c-11eb-81aa-d8f16355857e.png) To be able to add these I needed to install `sphinxext-opengraph`. I came across this [issue](https://github.com/readthedocs/readthedocs.org/issues/1758) on the readthedocs repo saying that since someone has built this plugin they are not integrating and providing documentation to it. That's why I added it for creating the documentation. The repository can be found [here](https://github.com/wpilibsuite/sphinxext-opengraph/tree/main). P.S. It seemed that `make style` never ran for `docs/` i hope the changes are okay otherwise I'll revert it.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2177.diff", "html_url": "https://github.com/huggingface/datasets/pull/2177", "merged_at": "2021-04-07T08:16:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2177.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2177" }
852,065,307
https://api.github.com/repos/huggingface/datasets/issues/2177/comments
MDExOlB1bGxSZXF1ZXN0NjEwMzc5MDYx
null
2,177
https://api.github.com/repos/huggingface/datasets/issues/2177/events
true
closed
2021-04-06T22:54:16Z
null
https://api.github.com/repos/huggingface/datasets/issues/2176
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4", "events_url": "https://api.github.com/users/nelson-liu/events{/privacy}", "followers_url": "https://api.github.com/users/nelson-liu/followers", "following_url": "https://api.github.com/users/nelson-liu/following{/other_user}", "gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nelson-liu", "id": 7272031, "login": "nelson-liu", "node_id": "MDQ6VXNlcjcyNzIwMzE=", "organizations_url": "https://api.github.com/users/nelson-liu/orgs", "received_events_url": "https://api.github.com/users/nelson-liu/received_events", "repos_url": "https://api.github.com/users/nelson-liu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions", "type": "User", "url": "https://api.github.com/users/nelson-liu" }
https://github.com/huggingface/datasets/issues/2176
[]
false
2022-06-01T16:31:49Z
2022-06-01T16:31:49Z
null
[ "Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class_names))\r\ndset = dset.map(lambda str_value: {col_name: class_feature.str2int(str_value)}, input_columns=col_name)\r\n\r\ndset = dset.cast(features.Features({\r\n ...\r\n col_name: class_feature\r\n})\r\n```\r\n", "Hi! You can use `Dataset.class_encode_column` for this. And in the next release of `datasets` (this feature is only available on `master`), you'll also be able to use `cast` to do the conversion. \r\n\r\nAn example of conversion via `cast`: \r\n```python\r\nfrom datasets import Dataset, Features, ClassLabel\r\nd = Dataset.from_dict({\"a\": [\"no\", \"yes\", \"no\"]})\r\nd = d.cast(Features({\"a\": ClassLabel(names=[\"yes\", \"no\"])}))\r\n```" ]
completed
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Converting a Value to a ClassLabel
NONE
https://api.github.com/repos/huggingface/datasets/issues/2176/timeline
Hi! In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.` Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
https://api.github.com/repos/huggingface/datasets
null
851,865,795
https://api.github.com/repos/huggingface/datasets/issues/2176/comments
MDU6SXNzdWU4NTE4NjU3OTU=
null
2,176
https://api.github.com/repos/huggingface/datasets/issues/2176/events
false
closed
2021-04-06T21:50:49Z
null
https://api.github.com/repos/huggingface/datasets/issues/2175
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez" }
https://github.com/huggingface/datasets/issues/2175
[]
false
2021-04-16T12:21:16Z
2021-04-16T12:21:15Z
null
[ "Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.", "@lhoestq @patrickvonplaten \r\n\r\nI also found another short bug in the retrieval part. Especially, when retrieving documents. If Faiss returns the -1 as the index, the retriever will always use the last element in the dataset.\r\n\r\nplease check [def get_doc_dicts function](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L222)\r\n\r\n\r\nDoes the use of the HNSW guarantee to retrieve valid indexes always? \r\n\r\n", "Hi !\r\nNo it happens sometimes to return -1, especially if your dataset is small.\r\nIf your dataset is big enough it shouldn't happen in my experience.\r\n\r\nIdeally we should ignore all the -1 that are returned. It should be possible to change that in RAG's code ", "I also checked with some indexes it returns more -1s. Specially with IVF\nwhen nprobr is very low. It doesn't happen when using HNSW though. But at\nthe moment if it happens, dataset will always return the last element.\nMaybe we should change it to repeat the most last valid retrieved doc id.\nWhat do you think?\n\nOn Wed, Apr 7, 2021, 21:09 Quentin Lhoest ***@***.***> wrote:\n\n> Hi !\n> No it happens sometimes to return -1, especially if your dataset is small.\n> If your dataset is big enough it shouldn't happen.\n>\n> Ideally we should ignore all the -1 that are returned. It should be\n> possible to change that in RAG's code\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814746509>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGTENOTLBEZTXEO2RS3THQOMPANCNFSM42PRVYDA>\n> .\n>\n", "That would be an easy way to workaround this issue. Feel free to open a PR on `transformers` and ping me ! :)", "Sure. Will push everything together with RAG end to end. :) thanks a lot.\n\nOn Wed, Apr 7, 2021, 21:16 Quentin Lhoest ***@***.***> wrote:\n\n> That would be an easy way to workaround this issue. Feel free to open a PR\n> on transformers and ping me ! :)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2175#issuecomment-814752589>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AEA4FGWLROCGARKN7WOJYSTTHQPH5ANCNFSM42PRVYDA>\n> .\n>\n" ]
completed
[]
dataset.search_batch() function outputs all -1 indices sometime.
NONE
https://api.github.com/repos/huggingface/datasets/issues/2175/timeline
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**. During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231) an error issue when all retrieved indices are -1. Please refer to the screenshot of a PID worker. ![image](https://user-images.githubusercontent.com/16892570/113782387-37a67600-9786-11eb-9c29-acad661a9648.png) Here, my retrieve batch size is 2 and n_docs is 5. I can solve this by working around np. stack, but I want to ask, why we get an output index of -1. Do you have any idea :) ? Is this a problem of the index, where the faiss can't find any similar vector? Is there documentation on the output index being -1? @lhoestq
https://api.github.com/repos/huggingface/datasets
null
851,836,096
https://api.github.com/repos/huggingface/datasets/issues/2175/comments
MDU6SXNzdWU4NTE4MzYwOTY=
null
2,175
https://api.github.com/repos/huggingface/datasets/issues/2175/events
false
closed
2021-04-06T12:40:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/2174
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2174/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
https://github.com/huggingface/datasets/pull/2174
[]
false
2021-04-06T12:55:53Z
2021-04-06T12:55:53Z
null
[]
null
[]
Pin docutils for better doc
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2174/timeline
The latest release of docutils make the navbar in the documentation weird and the Markdown wrongly interpreted: ![image](https://user-images.githubusercontent.com/35901082/113711773-5be55280-96b3-11eb-9b3b-9794f17709aa.png) We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx). You can see the version after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2174.diff", "html_url": "https://github.com/huggingface/datasets/pull/2174", "merged_at": "2021-04-06T12:55:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2174.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2174" }
851,383,675
https://api.github.com/repos/huggingface/datasets/issues/2174/comments
MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2
null
2,174
https://api.github.com/repos/huggingface/datasets/issues/2174/events
true
closed
2021-04-06T12:08:34Z
null
https://api.github.com/repos/huggingface/datasets/issues/2173
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2173/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2173/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cahya-wirawan", "id": 7669893, "login": "cahya-wirawan", "node_id": "MDQ6VXNlcjc2Njk4OTM=", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "type": "User", "url": "https://api.github.com/users/cahya-wirawan" }
https://github.com/huggingface/datasets/pull/2173
[]
false
2021-04-12T16:54:46Z
2021-04-12T16:54:46Z
null
[]
null
[]
Add OpenSLR dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2173/timeline
OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed in OpenSLR, currently this PR includes only 9 speech datasets SLR41, SLR42, SLR43, SLR44, SLR63, SLR64, SLR65, SLR66 and SLR69 (Javanese, Khmer, Nepali and Sundanese, Malayalam, Marathi, Tamil, Telugu and Catalan). I can add other speech datasets gradually next time.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2173.diff", "html_url": "https://github.com/huggingface/datasets/pull/2173", "merged_at": "2021-04-12T16:54:45Z", "patch_url": "https://github.com/huggingface/datasets/pull/2173.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2173" }
851,359,284
https://api.github.com/repos/huggingface/datasets/issues/2173/comments
MDExOlB1bGxSZXF1ZXN0NjA5Nzk2NzI2
null
2,173
https://api.github.com/repos/huggingface/datasets/issues/2173/events
true
closed
2021-04-06T09:19:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/2172
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2172/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2172/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2172
[]
false
2021-04-06T09:49:27Z
2021-04-06T09:49:26Z
null
[]
null
[]
Pin fsspec lower than 0.9.0
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2172/timeline
Today's release of `fsspec` 0.9.0 implied a new release of `s3fs` 0.6.0 but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example) I'm pinning `fsspec` until this has been resolved
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2172.diff", "html_url": "https://github.com/huggingface/datasets/pull/2172", "merged_at": "2021-04-06T09:49:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/2172.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2172" }
851,229,399
https://api.github.com/repos/huggingface/datasets/issues/2172/comments
MDExOlB1bGxSZXF1ZXN0NjA5Njg4ODgx
null
2,172
https://api.github.com/repos/huggingface/datasets/issues/2172/events
true
closed
2021-04-06T07:13:11Z
null
https://api.github.com/repos/huggingface/datasets/issues/2171
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2171/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2171/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "events_url": "https://api.github.com/users/mounicam/events{/privacy}", "followers_url": "https://api.github.com/users/mounicam/followers", "following_url": "https://api.github.com/users/mounicam/following{/other_user}", "gists_url": "https://api.github.com/users/mounicam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mounicam", "id": 11708999, "login": "mounicam", "node_id": "MDQ6VXNlcjExNzA4OTk5", "organizations_url": "https://api.github.com/users/mounicam/orgs", "received_events_url": "https://api.github.com/users/mounicam/received_events", "repos_url": "https://api.github.com/users/mounicam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mounicam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mounicam/subscriptions", "type": "User", "url": "https://api.github.com/users/mounicam" }
https://github.com/huggingface/datasets/pull/2171
[]
false
2021-04-06T16:05:42Z
2021-04-06T16:05:09Z
null
[ "Also you can ignore the CI failing on `docs`, this has been fixed on master :)", "@lhoestq I need to update other stuff on GEM later today too, so will merge this one and remove columns in the next PR!", "Ok !" ]
null
[]
Fixed the link to wikiauto training data.
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2171/timeline
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2171.diff", "html_url": "https://github.com/huggingface/datasets/pull/2171", "merged_at": "2021-04-06T16:05:09Z", "patch_url": "https://github.com/huggingface/datasets/pull/2171.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2171" }
851,090,662
https://api.github.com/repos/huggingface/datasets/issues/2171/comments
MDExOlB1bGxSZXF1ZXN0NjA5NTY4MDcw
null
2,171
https://api.github.com/repos/huggingface/datasets/issues/2171/events
true
open
2021-04-06T03:13:18Z
null
https://api.github.com/repos/huggingface/datasets/issues/2170
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4", "events_url": "https://api.github.com/users/leezu/events{/privacy}", "followers_url": "https://api.github.com/users/leezu/followers", "following_url": "https://api.github.com/users/leezu/following{/other_user}", "gists_url": "https://api.github.com/users/leezu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leezu", "id": 946903, "login": "leezu", "node_id": "MDQ6VXNlcjk0NjkwMw==", "organizations_url": "https://api.github.com/users/leezu/orgs", "received_events_url": "https://api.github.com/users/leezu/received_events", "repos_url": "https://api.github.com/users/leezu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leezu/subscriptions", "type": "User", "url": "https://api.github.com/users/leezu" }
https://github.com/huggingface/datasets/issues/2170
[]
false
2021-06-16T01:10:50Z
null
null
[ "It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https://dumps.wikimedia.org/enwiki/).\r\n\r\nThis is not a proper fix however as all the files will still have '20200501' in their file names." ]
null
[]
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
NONE
https://api.github.com/repos/huggingface/datasets/issues/2170/timeline
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides ``` 20201220/ 02-Feb-2021 01:36 - 20210101/ 21-Feb-2021 01:26 - 20210120/ 02-Mar-2021 01:25 - 20210201/ 21-Mar-2021 01:26 - 20210220/ 02-Apr-2021 01:26 - 20210301/ 03-Mar-2021 08:10 - 20210320/ 21-Mar-2021 18:13 - 20210401/ 03-Apr-2021 10:08 - latest/ 03-Apr-2021 10:08 - ``` However, the wikipedia dataset provided in the library, only supports the following configs, none of which are applicable anymore when disregarding the cached datasets: ``` ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` The cached datasets: ``` % aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/ PRE 20200501.de/ PRE 20200501.en/ PRE 20200501.fr/ PRE 20200501.frr/ PRE 20200501.it/ PRE 20200501.simple/ ```
https://api.github.com/repos/huggingface/datasets
null
850,913,228
https://api.github.com/repos/huggingface/datasets/issues/2170/comments
MDU6SXNzdWU4NTA5MTMyMjg=
null
2,170
https://api.github.com/repos/huggingface/datasets/issues/2170/events
false
closed
2021-04-05T15:43:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/2169
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2169/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2169/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4", "events_url": "https://api.github.com/users/diego-fustes/events{/privacy}", "followers_url": "https://api.github.com/users/diego-fustes/followers", "following_url": "https://api.github.com/users/diego-fustes/following{/other_user}", "gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/diego-fustes", "id": 5707233, "login": "diego-fustes", "node_id": "MDQ6VXNlcjU3MDcyMzM=", "organizations_url": "https://api.github.com/users/diego-fustes/orgs", "received_events_url": "https://api.github.com/users/diego-fustes/received_events", "repos_url": "https://api.github.com/users/diego-fustes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions", "type": "User", "url": "https://api.github.com/users/diego-fustes" }
https://github.com/huggingface/datasets/pull/2169
[]
false
2021-04-06T15:02:58Z
2021-04-06T15:02:58Z
null
[ "Hi ! Thanks for suggesting this fix \r\nUnfortunately it looks like it's already been fixed by #2111 \r\n\r\nFeel free to share your thoughts about this PR !\r\n\r\nI'm closing this one if you don't mind." ]
null
[]
Updated WER metric implementation to avoid memory issues
NONE
https://api.github.com/repos/huggingface/datasets/issues/2169/timeline
This is in order to fix this issue: https://github.com/huggingface/datasets/issues/2078
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2169.diff", "html_url": "https://github.com/huggingface/datasets/pull/2169", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2169.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2169" }
850,456,180
https://api.github.com/repos/huggingface/datasets/issues/2169/comments
MDExOlB1bGxSZXF1ZXN0NjA5MDI2ODUz
null
2,169
https://api.github.com/repos/huggingface/datasets/issues/2169/events
true
closed
2021-04-04T20:46:21Z
null
https://api.github.com/repos/huggingface/datasets/issues/2168
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2168/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2168/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2168
[]
false
2021-04-19T10:57:05Z
2021-04-19T09:08:55Z
null
[ "Thanks for diving into this !\r\n\r\nBefore going further, I just want to make sure if using `eval` is the right solution\r\nPersonally I'm not a big fan of `eval` since it has many security concerns. Also storing string representations of python objects in the json files is not ideal either IMO, so maybe it's possible to change this aspect instead.\r\n\r\nMaybe it would be better to convert the `_RelativeInstruction` to a string (or \"specs\") ?\r\nIt looks like `ReadInstruction.from_spec` already exists, but not the other way around.\r\nThe specs are the string representation of instructions. For example: `train+validation[:50%]`.\r\n\r\nLet me know what you think ! And thanks again, this issue has been here for a while now ^^", "@lhoestq Yes, before going with `eval`, I thought about this approach with the \"spec\". The only issue with this approach is that we have to come up with a represenation for the `rounding` arg.\r\n\r\nWhat do you think about this (maybe too verbose)?\r\n```python\r\n>>> print(ReadInstruction(\"train\", rounding=\"pct1_dropremainder\", from_=10, to=30).to_spec())\r\ntrain[10:30](pct1_dropremainder)", "Good idea !\r\n\r\nFirst we must note that the rounding is only used for percentage instructions.\r\nFor absolute instructions there's no rounding ambiguity.\r\n\r\nBy default the rounding is set to `closest`. For example if you have a train set of 999 examples and if you provide an instruction spec `\"train[:1%]\"`, you're going to get the first ten examples (while the `pct1_dropremainder ` rounding would return 9 examples).\r\n\r\nCurrently there's no way to get an instruction with a `pct1_dropremainder` rounding strategy from an instruction spec.\r\nSo we can either drop the support of `pct1_dropremainder` or define a way to use this strategy from a spec.\r\nI don't think dropping `pct1_dropremainder` would be a good idea since it allows to load each percent to all have the same number of examples (even the last one). Therefore I think your suggestion makes total sense and we should add a representation of this rounding strategy.\r\n\r\nI like what you suggested `train[10%:30%](pct1_dropremainder)` is fine, and it seems compatible with the regex that parses the instructions specs.", "@lhoestq I've made the changes as you suggested. Ready for the review.", "@lhoestq I've added a test and addressed the comments.\r\n\r\nAdditionally, `ReadInstruction` is converted to its spec form in `builder.py` to avoid a circular import that would happen if this logic was in `arrow_reader.py`. If you think it's better to have this logic in `arrow_reader.py`, the import can be delayed by putting it inside a function." ]
null
[]
Preserve split type when realoding dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2168/timeline
Fixes #2167 Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO. In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module: ```python from . import arrow_reader # gives us access to ReadInstruction and _RelativeInstruction from . import splits # gives us access to NamedSplit ``` and then define the `eval` globals as follows: ```python {**arrow_reader.__dict__, **splits.__dict__} ```
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2168.diff", "html_url": "https://github.com/huggingface/datasets/pull/2168", "merged_at": "2021-04-19T09:08:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2168.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2168" }
849,957,941
https://api.github.com/repos/huggingface/datasets/issues/2168/comments
MDExOlB1bGxSZXF1ZXN0NjA4NjA4Nzg5
null
2,168
https://api.github.com/repos/huggingface/datasets/issues/2168/events
true
closed
2021-04-04T19:29:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2167
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/issues/2167
[]
false
2021-04-19T09:08:55Z
2021-04-19T09:08:55Z
null
[]
completed
[]
Split type not preserved when reloading the dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2167/timeline
A minimal reproducible example: ```python >>> from datasets import load_dataset, Dataset >>> dset = load_dataset("sst", split="train") >>> dset.save_to_disk("sst") >>> type(dset.split) <class 'datasets.splits.NamedSplit'> >>> dset = Dataset.load_from_disk("sst") >>> type(dset.split) # NamedSplit expected <class 'str'> ``` It seems like this bug was introduced in #2025.
https://api.github.com/repos/huggingface/datasets
null
849,944,891
https://api.github.com/repos/huggingface/datasets/issues/2167/comments
MDU6SXNzdWU4NDk5NDQ4OTE=
null
2,167
https://api.github.com/repos/huggingface/datasets/issues/2167/events
false
closed
2021-04-04T02:02:45Z
null
https://api.github.com/repos/huggingface/datasets/issues/2166
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4", "events_url": "https://api.github.com/users/vyraun/events{/privacy}", "followers_url": "https://api.github.com/users/vyraun/followers", "following_url": "https://api.github.com/users/vyraun/following{/other_user}", "gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vyraun", "id": 17217068, "login": "vyraun", "node_id": "MDQ6VXNlcjE3MjE3MDY4", "organizations_url": "https://api.github.com/users/vyraun/orgs", "received_events_url": "https://api.github.com/users/vyraun/received_events", "repos_url": "https://api.github.com/users/vyraun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vyraun/subscriptions", "type": "User", "url": "https://api.github.com/users/vyraun" }
https://github.com/huggingface/datasets/issues/2166
[]
false
2021-04-06T08:13:12Z
2021-04-06T08:13:12Z
null
[ "Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of the test sets but shouldn't really be used for benchmark submissions)\r\n\r\ncc @sebastiangehrmann", "Oh okay, thanks @yjernite ! " ]
completed
[ { "color": "72f99f", "default": false, "description": "Discussions on the datasets", "id": 2067401494, "name": "Dataset discussion", "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion" } ]
Regarding Test Sets for the GEM datasets
NONE
https://api.github.com/repos/huggingface/datasets/issues/2166/timeline
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)? e.g. ``` from datasets import load_dataset DATASET_NAME="common_gen" data = load_dataset("gem", DATASET_NAME) ``` The test set doesn't have the target or references. ``` data['test'][0] {'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''} ```
https://api.github.com/repos/huggingface/datasets
null
849,778,545
https://api.github.com/repos/huggingface/datasets/issues/2166/comments
MDU6SXNzdWU4NDk3Nzg1NDU=
null
2,166
https://api.github.com/repos/huggingface/datasets/issues/2166/events
false
closed
2021-04-04T01:01:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/2165
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4", "events_url": "https://api.github.com/users/y-rokutan/events{/privacy}", "followers_url": "https://api.github.com/users/y-rokutan/followers", "following_url": "https://api.github.com/users/y-rokutan/following{/other_user}", "gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/y-rokutan", "id": 24562381, "login": "y-rokutan", "node_id": "MDQ6VXNlcjI0NTYyMzgx", "organizations_url": "https://api.github.com/users/y-rokutan/orgs", "received_events_url": "https://api.github.com/users/y-rokutan/received_events", "repos_url": "https://api.github.com/users/y-rokutan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions", "type": "User", "url": "https://api.github.com/users/y-rokutan" }
https://github.com/huggingface/datasets/issues/2165
[]
false
2021-08-24T15:55:35Z
2021-04-07T15:06:04Z
null
[ "Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r\n\r\n def __len__(self):\r\n return len(self.dset)\r\n\r\ntrain_ds = HFDataset(train_ds)\r\n```\r\n@lhoestq Since the Arrow Dataset already provides `__getitem__` and `__len__`, I think we could use the [virtual subclass](https://docs.python.org/3/library/abc.html#abc.ABCMeta.register) mechanism from the `abc` module to elegantly solve this issue. This mechanism would allow the Arrow Dataset to be used in place of the Torch Dataset because the `isinstance(instance of Arrow Dataset, TorchDataset)` check would return True (DeepSpeed has this check [here](https://github.com/microsoft/DeepSpeed/blob/ab5534fc4c0f8ca21ada321f9730d723aa31288b/deepspeed/runtime/engine.py#L823)).\r\n\r\nAnd it requires a minimal change in the `arrow_dataset.py` file:\r\n```python\r\nif config.TORCH_AVAILABLE:\r\n from torch.utils.data import Dataset as TorchDataset\r\n TorchDataset.register(Dataset)\r\n```", "Interesting ! Thanks for sharing this @mariosasko . I like the idea\r\nThis looks like something we should add IMO", "@mariosasko \r\nThx for your code!\r\nIt perfectly works with a small modification for HF NLP dataset:\r\n```\r\noriginal_ds = nlp.load_dataset('scientific_papers', 'arxiv')\r\ntrain_ds = HFDataset(train_ds['train']) # needs splitting\r\n```", "@lhoestq Sadly, from Python 3.7 onwards `torch.utils.data.Dataset` doesn't support the virtual subclass mechanism due to `typing.Generic` type no longer having `abc.ABCMeta` as its metaclass.\r\n\r\nWith that in mind, another option is to remove a direct type check (`isinstance(dataset, torch.utils.data.Dataset)`) in `deepspeed.initalize` and to rewrite the checks in a manner similar to `torch.utils.data.DataLoader` ([link](https://github.com/pytorch/pytorch/blob/b80c6f863f2327c712c478f67c248b94d66b65ac/torch/utils/data/dataloader.py#L197-L239)). This is exactly why the `DataLoader` works with arbitrary objects that provide `__getitem__` and `__len__` (and in our case, the `ArrowDataset`). By doing so, their code wouldn't be any stricter in comparison to the `DataLoader`.\r\n\r\nSo if you agree, I can open an issue in their repo and fix this if they like the idea.", "That makes sense ! Feel free to open an issue on their repo and discuss this idea", "@y-rokutan Hi, now if you install `deepspeed` from master (this feature will be available in the next official release), the code should work without subclassing. Let us know if you still have any issues.", "Worth mentioning that any function that expects a `torch..Dataset` (like `torch..DataLoader`) will fail a mypy-esque typecheck if a `datasets.Dataset` is passed, even though it implements the interface correctly (I think). The virtual subclass idea was a good one- I wonder if there's another workaround given the Generic issue. What we're really talking about is something similar to the structural subtyping semantics that `typing.Protocol` defines. If `torch..DataLoader` accepted anything that supports `__getitem__` and `__len__` methods this would be much easier. Not sure if there's a way to do this without the wrapper from the perspective of `datasets`." ]
completed
[]
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2165/timeline
Hi, I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( args=args, model=model, model_parameters=[p for p in model.parameters() if p.requires_grad], training_data=train_ds) ``` but deepspeed.initialize accepts torch.utils.data.Dataset only. How can I convert an HF-style dataset to a torch-style dataset?
https://api.github.com/repos/huggingface/datasets
null
849,771,665
https://api.github.com/repos/huggingface/datasets/issues/2165/comments
MDU6SXNzdWU4NDk3NzE2NjU=
null
2,165
https://api.github.com/repos/huggingface/datasets/issues/2165/events
false
closed
2021-04-03T21:07:02Z
null
https://api.github.com/repos/huggingface/datasets/issues/2164
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2164/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2164
[]
false
2021-04-06T14:41:09Z
2021-04-06T14:41:08Z
null
[]
null
[]
Replace assertTrue(isinstance with assertIsInstance in tests
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2164/timeline
Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2164.diff", "html_url": "https://github.com/huggingface/datasets/pull/2164", "merged_at": "2021-04-06T14:41:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2164.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2164" }
849,739,759
https://api.github.com/repos/huggingface/datasets/issues/2164/comments
MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3
null
2,164
https://api.github.com/repos/huggingface/datasets/issues/2164/events
true
closed
2021-04-03T14:31:30Z
null
https://api.github.com/repos/huggingface/datasets/issues/2163
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2163/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2163/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
https://github.com/huggingface/datasets/pull/2163
[]
false
2021-04-06T14:40:00Z
2021-04-06T14:39:59Z
null
[ "Hi @mariosasko,\r\nJust came across this PR and I was wondering if we can use\r\n`description = \"\\n\\n\".join(OrderedDict.fromkeys([info.description for info in dataset_infos]))`\r\n\r\nThis will obviate the need for `unique` and is almost as fast as `set`. We could have used `dict` inplace of `OrderedDict` but it's available 3.7+ onwards", "Hi,\r\n\r\nlet's see what @lhoestq thinks. Although my approach adds more code, it's more readable IMO.", "Yeah, that's true. Your approach is more readable." ]
null
[]
Concat only unique fields in DatasetInfo.from_merge
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2163/timeline
I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case. Fixes #2103
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2163.diff", "html_url": "https://github.com/huggingface/datasets/pull/2163", "merged_at": "2021-04-06T14:39:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/2163.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2163" }
849,669,366
https://api.github.com/repos/huggingface/datasets/issues/2163/comments
MDExOlB1bGxSZXF1ZXN0NjA4Mzk0NDMz
null
2,163
https://api.github.com/repos/huggingface/datasets/issues/2163/events
true
closed
2021-04-02T10:11:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2162
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2162
[]
false
2022-10-05T13:20:24Z
2022-10-05T13:20:24Z
null
[ "This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?", "Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself but not sure\n> Did you try loading cc100 on your machine ?\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2162#issuecomment-814793809>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMRUO33JSOYGT6RETWLTHQWNLANCNFSM42IUOR6Q>\n> .\n>\n", "Hi! This visualization tool is deprecated now. The viewer at https://huggingface.co/datasets/cc100 works fine, so I'm closing this issue." ]
completed
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
visualization for cc100 is broken
NONE
https://api.github.com/repos/huggingface/datasets/issues/2162/timeline
Hi visualization through dataset viewer for cc100 is broken https://huggingface.co/datasets/viewer/ thanks a lot
https://api.github.com/repos/huggingface/datasets
null
849,129,201
https://api.github.com/repos/huggingface/datasets/issues/2162/comments
MDU6SXNzdWU4NDkxMjkyMDE=
null
2,162
https://api.github.com/repos/huggingface/datasets/issues/2162/events
false
closed
2021-04-02T10:06:46Z
null
https://api.github.com/repos/huggingface/datasets/issues/2161
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2161
[]
false
2022-10-05T13:26:51Z
2022-10-05T13:26:51Z
null
[ "Not yet but it’s on the short/mid-term roadmap (requested by many indeed).", "oh, great, really awesome feature to have, thank you very much for the great, fabulous work", "We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)", "thanks a lot Quentin, this would be really really a great feature to have\n\nOn Wed, Apr 7, 2021 at 12:14 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> We'll work on dataset streaming soon. This should allow you to only load\n> the examples you need ;)\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2161#issuecomment-814791922>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMROD62QAKIJMAKWISTTHQWBVANCNFSM42IUI5JQ>\n> .\n>\n", "Is streaming completed? On the 1.8.0 docs it is mentioned (https://huggingface.co/docs/datasets/dataset_streaming.html), but when following the example I get the following error:\r\n\r\n```\r\n>>> dataset2 = load_dataset(\"amazon_us_reviews\", \"Pet_Products_v1_00\", split='train', streaming=True)\r\n\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-21-1eedab26cff1> in <module>()\r\n----> 1 en_dataset = load_dataset('oscar', \"unshuffled_deduplicated_en\", split='train', streaming=True)\r\n\r\n3 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)\r\n 339 if value is not None:\r\n 340 if not hasattr(builder_config, key):\r\n--> 341 raise ValueError(f\"BuilderConfig {builder_config} doesn't have a '{key}' key.\")\r\n 342 setattr(builder_config, key, value)\r\n 343 \r\n\r\nValueError: BuilderConfig OscarConfig(name='unshuffled_deduplicated_en', version=1.0.0, data_dir=None, data_files=None, description='Unshuffled and deduplicated, English OSCAR dataset') doesn't have a 'streaming' key.\r\n```\r\n\r\nUPDATE: Managed to get streaming working by building from source and installing the additional `datasets[streaming]` package:\r\n\r\n```\r\n!pip install git+https://github.com/huggingface/datasets.git\r\n!pip install datasets[streaming]\r\n```", "Hi ! Streaming is available on `master` only right now. We'll make a new release 1.9.0 on Monday :)" ]
completed
[]
any possibility to download part of large datasets only?
NONE
https://api.github.com/repos/huggingface/datasets/issues/2161/timeline
Hi Some of the datasets I need like cc100 are very large, and then I wonder if I can download first X samples of the shuffled/unshuffled data without going through first downloading the whole data then sampling? thanks
https://api.github.com/repos/huggingface/datasets
null
849,127,041
https://api.github.com/repos/huggingface/datasets/issues/2161/comments
MDU6SXNzdWU4NDkxMjcwNDE=
null
2,161
https://api.github.com/repos/huggingface/datasets/issues/2161/events
false
closed
2021-04-02T07:56:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2160
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2160
[]
false
2021-04-02T10:14:32Z
2021-04-02T10:14:31Z
null
[ "Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ | 172/1583 [00:46<06:21, 3.70ba/s]\r\n#4: 9%|█████████████▏ | 143/1583 [00:46<07:46, 3.09ba/s]\r\n#7: 6%|█████████ | 98/1583 [00:45<11:34, 2.14ba/s]\r\n#5: 8%|███████████▍ | 124/1583 [00:46<09:03, 2.68ba/s]\r\n#6: 7%|██████████▏ \r\n```", "closing since I cannot reproduce it again, thanks " ]
completed
[]
data_args.preprocessing_num_workers almost freezes
NONE
https://api.github.com/repos/huggingface/datasets/issues/2160/timeline
Hi @lhoestq I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py to speed up tokenization, since I am running on multiple datasets, I am using data_args.preprocessing_num_workers = 4 with opus100 corpus but this moves on till a point and then this freezes almost for sometime during tokenization steps and then this is back again, overall to me taking more time than normal case, I appreciate your advice on how I can use this option properly to speed up. thanks
https://api.github.com/repos/huggingface/datasets
null
849,052,921
https://api.github.com/repos/huggingface/datasets/issues/2160/comments
MDU6SXNzdWU4NDkwNTI5MjE=
null
2,160
https://api.github.com/repos/huggingface/datasets/issues/2160/events
false
closed
2021-04-01T23:28:36Z
null
https://api.github.com/repos/huggingface/datasets/issues/2159
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2159
[]
false
2021-04-02T10:05:19Z
2021-04-02T10:05:19Z
null
[ "closing since I think this is cc100, just the name has been changed. thanks " ]
completed
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
adding ccnet dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2159/timeline
## Adding a Dataset - **Name:** ccnet - **Description:** Common Crawl - **Paper:** https://arxiv.org/abs/1911.00359 - **Data:** https://github.com/facebookresearch/cc_net - **Motivation:** this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite important for cross-lingual reseach Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). thanks
https://api.github.com/repos/huggingface/datasets
null
848,851,962
https://api.github.com/repos/huggingface/datasets/issues/2159/comments
MDU6SXNzdWU4NDg4NTE5NjI=
null
2,159
https://api.github.com/repos/huggingface/datasets/issues/2159/events
false
closed
2021-04-01T14:13:20Z
null
https://api.github.com/repos/huggingface/datasets/issues/2158
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4", "events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}", "followers_url": "https://api.github.com/users/emanuelevivoli/followers", "following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}", "gists_url": "https://api.github.com/users/emanuelevivoli/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emanuelevivoli", "id": 9447991, "login": "emanuelevivoli", "node_id": "MDQ6VXNlcjk0NDc5OTE=", "organizations_url": "https://api.github.com/users/emanuelevivoli/orgs", "received_events_url": "https://api.github.com/users/emanuelevivoli/received_events", "repos_url": "https://api.github.com/users/emanuelevivoli/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emanuelevivoli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emanuelevivoli/subscriptions", "type": "User", "url": "https://api.github.com/users/emanuelevivoli" }
https://github.com/huggingface/datasets/issues/2158
[]
false
2022-10-05T13:22:02Z
2022-10-05T13:22:02Z
null
[ "Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly", "This viewer tool is deprecated now and the new viewer at https://huggingface.co/datasets/fake_news_english works fine, so I'm closing this issue" ]
completed
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
viewer "fake_news_english" error
NONE
https://api.github.com/repos/huggingface/datasets/issues/2158/timeline
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error: > ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance' as well as the error Traceback.
https://api.github.com/repos/huggingface/datasets
null
848,506,746
https://api.github.com/repos/huggingface/datasets/issues/2158/comments
MDU6SXNzdWU4NDg1MDY3NDY=
null
2,158
https://api.github.com/repos/huggingface/datasets/issues/2158/events
false
closed
2021-03-31T19:38:29Z
null
https://api.github.com/repos/huggingface/datasets/issues/2157
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2157/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2157/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://github.com/huggingface/datasets/pull/2157
[]
false
2021-04-06T07:19:19Z
2021-04-06T07:19:19Z
null
[]
null
[]
updated user permissions based on umask
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2157/timeline
Updated user permissions based on running user's umask (#2065). Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2157.diff", "html_url": "https://github.com/huggingface/datasets/pull/2157", "merged_at": "2021-04-06T07:19:19Z", "patch_url": "https://github.com/huggingface/datasets/pull/2157.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2157" }
847,205,239
https://api.github.com/repos/huggingface/datasets/issues/2157/comments
MDExOlB1bGxSZXF1ZXN0NjA2MjM1NjUx
null
2,157
https://api.github.com/repos/huggingface/datasets/issues/2157/events
true
closed
2021-03-31T19:33:48Z
null
https://api.github.com/repos/huggingface/datasets/issues/2156
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2156/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2156/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhavitvyamalik", "id": 19718818, "login": "bhavitvyamalik", "node_id": "MDQ6VXNlcjE5NzE4ODE4", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "type": "User", "url": "https://api.github.com/users/bhavitvyamalik" }
https://github.com/huggingface/datasets/pull/2156
[]
false
2021-03-31T19:34:24Z
2021-03-31T19:34:24Z
null
[]
null
[]
User permissions
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2156/timeline
Updated user permissions based on running user's umask. Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well)
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2156.diff", "html_url": "https://github.com/huggingface/datasets/pull/2156", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2156.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2156" }
847,198,295
https://api.github.com/repos/huggingface/datasets/issues/2156/comments
MDExOlB1bGxSZXF1ZXN0NjA2MjI5MTky
null
2,156
https://api.github.com/repos/huggingface/datasets/issues/2156/events
true
closed
2021-03-31T14:36:10Z
null
https://api.github.com/repos/huggingface/datasets/issues/2155
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2155/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2155/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2155
[]
false
2021-04-01T16:46:30Z
2021-03-31T15:42:08Z
null
[ "Just note that docstrings injected from PyArrow do not follow the same convention for formatting types in `Args` or `Returns` as we do... Not a big problem, anyway! 😄 " ]
null
[]
Add table classes to the documentation
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2155/timeline
Following #2025 , I added the table classes to the documentation cc @albertvillanova
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2155.diff", "html_url": "https://github.com/huggingface/datasets/pull/2155", "merged_at": "2021-03-31T15:42:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2155.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2155" }
846,786,897
https://api.github.com/repos/huggingface/datasets/issues/2155/comments
MDExOlB1bGxSZXF1ZXN0NjA1ODU3MTU4
null
2,155
https://api.github.com/repos/huggingface/datasets/issues/2155/events
true
closed
2021-03-31T14:22:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/2154
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2154/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2154/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "events_url": "https://api.github.com/users/versae/events{/privacy}", "followers_url": "https://api.github.com/users/versae/followers", "following_url": "https://api.github.com/users/versae/following{/other_user}", "gists_url": "https://api.github.com/users/versae/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/versae", "id": 173537, "login": "versae", "node_id": "MDQ6VXNlcjE3MzUzNw==", "organizations_url": "https://api.github.com/users/versae/orgs", "received_events_url": "https://api.github.com/users/versae/received_events", "repos_url": "https://api.github.com/users/versae/repos", "site_admin": false, "starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versae/subscriptions", "type": "User", "url": "https://api.github.com/users/versae" }
https://github.com/huggingface/datasets/pull/2154
[]
false
2021-04-01T09:27:00Z
2021-04-01T09:16:08Z
null
[ "Awesome!" ]
null
[]
Adding the NorNE dataset for Norwegian POS and NER
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2154/timeline
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names. See #1720.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2154.diff", "html_url": "https://github.com/huggingface/datasets/pull/2154", "merged_at": "2021-04-01T09:16:08Z", "patch_url": "https://github.com/huggingface/datasets/pull/2154.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2154" }
846,763,960
https://api.github.com/repos/huggingface/datasets/issues/2154/comments
MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1
null
2,154
https://api.github.com/repos/huggingface/datasets/issues/2154/events
true
closed
2021-03-31T08:30:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/2153
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GuillemGSubies", "id": 37592763, "login": "GuillemGSubies", "node_id": "MDQ6VXNlcjM3NTkyNzYz", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "type": "User", "url": "https://api.github.com/users/GuillemGSubies" }
https://github.com/huggingface/datasets/issues/2153
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2022-10-05T13:29:12Z
2022-10-05T13:29:12Z
null
[ "Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201", "Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.", "Hi :) We're indeed working on tutorials that we will add to the docs !" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
load_dataset ignoring features
NONE
https://api.github.com/repos/huggingface/datasets/issues/2153/timeline
First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything. I'm using datasets 1.5.0 ![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png) As you can see, when I load the dataset, the ClassLabels are ignored, I have to cast the dataset in order to make it work. Code to reproduce: ```python import datasets data_location = "/data/prueba_multiclase" features = datasets.Features( {"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])} ) dataset = datasets.load_dataset( "csv", data_files=data_location, delimiter="\t", features=features ) ``` Dataset I used: [prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped) Thank you! ❤️
https://api.github.com/repos/huggingface/datasets
null
846,181,502
https://api.github.com/repos/huggingface/datasets/issues/2153/comments
MDU6SXNzdWU4NDYxODE1MDI=
null
2,153
https://api.github.com/repos/huggingface/datasets/issues/2153/events
false
closed
2021-03-31T03:21:19Z
null
https://api.github.com/repos/huggingface/datasets/issues/2152
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2152/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2152/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4", "events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}", "followers_url": "https://api.github.com/users/JieyuZhao/followers", "following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}", "gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JieyuZhao", "id": 22306304, "login": "JieyuZhao", "node_id": "MDQ6VXNlcjIyMzA2MzA0", "organizations_url": "https://api.github.com/users/JieyuZhao/orgs", "received_events_url": "https://api.github.com/users/JieyuZhao/received_events", "repos_url": "https://api.github.com/users/JieyuZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/JieyuZhao" }
https://github.com/huggingface/datasets/pull/2152
[]
false
2021-04-01T10:20:37Z
2021-04-01T10:20:36Z
null
[]
null
[]
Update README.md
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2152/timeline
Updated some descriptions of Wino_Bias dataset.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2152.diff", "html_url": "https://github.com/huggingface/datasets/pull/2152", "merged_at": "2021-04-01T10:20:36Z", "patch_url": "https://github.com/huggingface/datasets/pull/2152.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2152" }
845,751,273
https://api.github.com/repos/huggingface/datasets/issues/2152/comments
MDExOlB1bGxSZXF1ZXN0NjA0ODk0MDkz
null
2,152
https://api.github.com/repos/huggingface/datasets/issues/2152/events
true
closed
2021-03-30T16:58:44Z
null
https://api.github.com/repos/huggingface/datasets/issues/2151
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2151/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2151/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2151
[]
false
2021-06-23T17:41:02Z
2021-04-19T16:07:18Z
{ "closed_at": "2021-04-20T16:50:46Z", "closed_issues": 4, "created_at": "2021-04-09T13:07:51Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-04-16T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/1", "id": 6644198, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==", "number": 1, "open_issues": 0, "state": "closed", "title": "1.6", "updated_at": "2021-04-20T16:50:46Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/1" }
[ "@lhoestq I am going to implement the consolidation step you mentioned in #1870.", "@lhoestq I was thinking that the order of the TableBlocks is not relevant, isn't it?\r\n\r\nI mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:\r\n```\r\nblocks = [in_memory_1, memory_mapped, in_memory_2]\r\n```\r\nI could reorder the list:\r\n```\r\nblocks = [in_memory_1, in_memory_2, memory_mapped]\r\n```\r\nso that the first 2 can be consolidated into a single one:\r\n```\r\nblocks = [in_memory_3, memory_mapped]\r\n```", "I think the order is important, users won't expect the dataset to be \"shuffled\" when they add a new item", "> I think the order is important, users won't expect the dataset to be \"shuffled\" when they add a new item\r\n\r\nOK, therefore I leave `_consolidate_blocks` as it is, which currently keeps the order of the blocks (no shuffling).", "Thank you guys for implementing this. Minor thing I noticed in the [documentation](https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.concatenate_datasets): it says \"Converts a list of Dataset with **the same schema** into a single Dataset\". With the addition of the axis parameter, perhaps this should be reworded, no?" ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Add support for axis in concatenate datasets
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2151/timeline
Add support for `axis` (0 or 1) in `concatenate_datasets`. Close #853.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2151.diff", "html_url": "https://github.com/huggingface/datasets/pull/2151", "merged_at": "2021-04-19T16:07:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/2151.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2151" }
844,886,081
https://api.github.com/repos/huggingface/datasets/issues/2151/comments
MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw
null
2,151
https://api.github.com/repos/huggingface/datasets/issues/2151/events
true
closed
2021-03-30T15:51:56Z
null
https://api.github.com/repos/huggingface/datasets/issues/2150
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2150/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2150/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2150
[]
false
2021-03-31T10:37:15Z
2021-03-31T10:37:14Z
null
[]
null
[]
Allow pickling of big in-memory tables
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2150/timeline
This should fix issue #2134 Pickling is limited to <4GiB objects, it's not possible to pickle a big arrow table (for multiprocessing for example). For big tables, we have to write them on disk and only pickle the path to the table.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2150.diff", "html_url": "https://github.com/huggingface/datasets/pull/2150", "merged_at": "2021-03-31T10:37:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/2150.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2150" }
844,776,448
https://api.github.com/repos/huggingface/datasets/issues/2150/comments
MDExOlB1bGxSZXF1ZXN0NjAzOTg3OTcx
null
2,150
https://api.github.com/repos/huggingface/datasets/issues/2150/events
true
closed
2021-03-30T15:26:34Z
null
https://api.github.com/repos/huggingface/datasets/issues/2149
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "events_url": "https://api.github.com/users/jerryIsHere/events{/privacy}", "followers_url": "https://api.github.com/users/jerryIsHere/followers", "following_url": "https://api.github.com/users/jerryIsHere/following{/other_user}", "gists_url": "https://api.github.com/users/jerryIsHere/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jerryIsHere", "id": 50871412, "login": "jerryIsHere", "node_id": "MDQ6VXNlcjUwODcxNDEy", "organizations_url": "https://api.github.com/users/jerryIsHere/orgs", "received_events_url": "https://api.github.com/users/jerryIsHere/received_events", "repos_url": "https://api.github.com/users/jerryIsHere/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jerryIsHere/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jerryIsHere/subscriptions", "type": "User", "url": "https://api.github.com/users/jerryIsHere" }
https://github.com/huggingface/datasets/issues/2149
[]
false
2022-10-05T13:28:30Z
2022-10-05T13:28:30Z
null
[ "Good catch ! Thanks for reporting\r\n\r\nI just opened #2180 to fix this", "Fixed in #2180" ]
completed
[]
Telugu subset missing for xtreme tatoeba dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2149/timeline
from nlp import load_dataset train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation'] ValueError: BuilderConfig tatoeba.tel not found. but language tel is actually included in xtreme: https://github.com/google-research/xtreme/blob/master/utils_preprocess.py def tatoeba_preprocess(args): lang3_dict = { 'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn', 'deu':'de', 'ell':'el', 'spa':'es', 'est':'et', 'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr', 'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id', 'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka', 'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr', 'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw', 'tam':'ta', **_'tel':'te'_**, 'tha':'th', 'tgl':'tl', <----here 'tur':'tr', 'urd':'ur', 'vie':'vi', 'cmn':'zh', 'eng':'en', }
https://api.github.com/repos/huggingface/datasets
null
844,734,076
https://api.github.com/repos/huggingface/datasets/issues/2149/comments
MDU6SXNzdWU4NDQ3MzQwNzY=
null
2,149
https://api.github.com/repos/huggingface/datasets/issues/2149/events
false
closed
2021-03-30T15:04:06Z
null
https://api.github.com/repos/huggingface/datasets/issues/2148
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marrodion", "id": 44571847, "login": "marrodion", "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "organizations_url": "https://api.github.com/users/marrodion/orgs", "received_events_url": "https://api.github.com/users/marrodion/received_events", "repos_url": "https://api.github.com/users/marrodion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "type": "User", "url": "https://api.github.com/users/marrodion" }
https://github.com/huggingface/datasets/issues/2148
[]
false
2021-04-15T13:49:46Z
2021-04-15T13:49:46Z
null
[ "Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `importlib`:\r\n```python\r\nif scheme:\r\n scheme = importlib.import_module(f\"seqeval.scheme.{scheme}\")\r\n```\r\n\r\nFeel free to create a Pull Request to make this contribution." ]
completed
[]
Add configurable options to `seqeval` metric
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2148/timeline
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation). However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute` https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109 Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only full entity match as a true positive and omit partial matches. The only problem I see is that the spirit of `metrics` seems to not require additional imports from user. `seqeval` only supports schemes as objects, without any string aliases. It can be solved naively with mapping like `{"IOB2": seqeval.scheme.IOB2}`. Or just left as is and require user to explicitly import scheme from `seqeval` if he wants to configure it past the default implementation. If that makes sense, I am happy to implement the change.
https://api.github.com/repos/huggingface/datasets
null
844,700,910
https://api.github.com/repos/huggingface/datasets/issues/2148/comments
MDU6SXNzdWU4NDQ3MDA5MTA=
null
2,148
https://api.github.com/repos/huggingface/datasets/issues/2148/events
false
closed
2021-03-30T14:55:43Z
null
https://api.github.com/repos/huggingface/datasets/issues/2147
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2147/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2147/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2147
[]
false
2021-03-31T13:11:05Z
2021-03-31T13:11:05Z
null
[]
null
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
Render docstring return type as inline
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2147/timeline
This documentation setting will avoid having the return type in a separate line under `Return type`. See e.g. current docs for `Dataset.to_csv`.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2147.diff", "html_url": "https://github.com/huggingface/datasets/pull/2147", "merged_at": "2021-03-31T13:11:05Z", "patch_url": "https://github.com/huggingface/datasets/pull/2147.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2147" }
844,687,831
https://api.github.com/repos/huggingface/datasets/issues/2147/comments
MDExOlB1bGxSZXF1ZXN0NjAzOTA3NjM4
null
2,147
https://api.github.com/repos/huggingface/datasets/issues/2147/events
true
open
2021-03-30T14:46:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/2146
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4", "events_url": "https://api.github.com/users/jblemoine/events{/privacy}", "followers_url": "https://api.github.com/users/jblemoine/followers", "following_url": "https://api.github.com/users/jblemoine/following{/other_user}", "gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jblemoine", "id": 22685854, "login": "jblemoine", "node_id": "MDQ6VXNlcjIyNjg1ODU0", "organizations_url": "https://api.github.com/users/jblemoine/orgs", "received_events_url": "https://api.github.com/users/jblemoine/received_events", "repos_url": "https://api.github.com/users/jblemoine/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions", "type": "User", "url": "https://api.github.com/users/jblemoine" }
https://github.com/huggingface/datasets/issues/2146
[]
false
2021-04-16T13:07:02Z
null
null
[ "Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for example). Since these encodings are made for compression, the resulting tfrecord is smaller that the arrow file.\r\n\r\nWe are working on adding a similar feature in `datasets`: the ability to store the encoded data instead of the raw integers for images, but also for audio data. This way, arrow files will have similar sizes as tfrecords for images.", "Thanks for the prompt response. You're right about the encoding, I have the `tfds.features.Image` feature type you mentioned.\r\nHowever, as described in the `dataset_info.json`, my dataset is made of 1479 (224x224x3) images. 1479 x 224 x 224 x 3 = 222630912 bytes which is far from the actual size 520803408 bytes. \r\n\r\nAnyway I look forward to the Image feature type in `datasets`. ", "@lhoestq I changed the data structure so I have a 2D Array feature type instead of a 3D Array by grouping the two last dimensions ( a 224x672 2D Array instead of a 224x224x3 3D Array). The file size is now 223973964 bytes, nearly half the previous size! Which is around of what I would expect.\r\nI found similar behavior in existing `datasets` collection, when comparing black and white vs color image, for example MNIST vs CIFAR. ", "Interesting !\r\nThis may be because of the offsets that are stored with the array data.\r\n\r\nCurrently the offsets are stored even if the `shape` of the arrays is fixed. This was needed because of some issues with pyarrow a few months ago. I think these issues have been addressed now, so we can probably try to remove them to make the file lighter.\r\n\r\nIdeally in your case the floats data should be 220 MB for both Array2D and Array3D", "Yeah for sure, can you be a bit more specific about where the offset is stored in the code base ? And any reference to pyarrow issues if you have some. I would be very interested in contributing to `datasets` by trying to fix this issue. ", "Pyarrow has two types of lists: variable length lists and fixed size lists.\r\nCurrently we store the ArrayXD data as variable length lists. They take more disk space because they must store both actual data and offsets.\r\nIn the `datasets` code this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/dbac87c8a083f806467f5afc4ec9b401a7e4c15c/src/datasets/features.py#L346-L352\r\n\r\nTo use a fixed length list, one should use the `list_size` argument of `pyarrow.list_()`.\r\nI believe this would work directly modulo some changes in the numpy conversion here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/dbac87c8a083f806467f5afc4ec9b401a7e4c15c/src/datasets/features.py#L381-L395" ]
null
[]
Dataset file size on disk is very large with 3D Array
NONE
https://api.github.com/repos/huggingface/datasets/issues/2146/timeline
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D Array with dtype=uint8. The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`. `{ "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "shape": [224, 224, 3], "dtype": "uint8", "id": null, "_type": "Array3D", } }, "post_processed": null, "supervised_keys": null, "builder_name": "shot_type_image_dataset", "config_name": "default", "version": { "version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0, }, "splits": { "train": { "name": "train", "num_bytes": 520803408, "num_examples": 1479, "dataset_name": "shot_type_image_dataset", } }, "download_checksums": { "": { "num_bytes": 16940447118, "checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03", } }, "download_size": 16940447118, "post_processing_size": null, "dataset_size": 520803408, "size_in_bytes": 17461250526, }` I have created the same dataset with tensorflow_dataset and it takes only 125MB on disk. I am wondering, is it normal behavior ? I understand `Datasets` uses Arrow for serialization wheres tf uses TF Records. This might be a problem for large dataset. Thanks for your help.
https://api.github.com/repos/huggingface/datasets
null
844,673,244
https://api.github.com/repos/huggingface/datasets/issues/2146/comments
MDU6SXNzdWU4NDQ2NzMyNDQ=
null
2,146
https://api.github.com/repos/huggingface/datasets/issues/2146/events
false
closed
2021-03-30T14:02:14Z
null
https://api.github.com/repos/huggingface/datasets/issues/2145
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2145/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2145/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
https://github.com/huggingface/datasets/pull/2145
[]
false
2021-04-29T14:50:44Z
2021-04-29T14:50:43Z
{ "closed_at": "2021-05-31T16:20:53Z", "closed_issues": 3, "created_at": "2021-04-09T13:16:31Z", "creator": { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }, "description": "Next minor release", "due_on": "2021-05-14T07:00:00Z", "html_url": "https://github.com/huggingface/datasets/milestone/3", "id": 6644287, "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "open_issues": 0, "state": "closed", "title": "1.7", "updated_at": "2021-05-31T16:20:53Z", "url": "https://api.github.com/repos/huggingface/datasets/milestones/3" }
[ "#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :)" ]
null
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
Implement Dataset add_column
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2145/timeline
Implement `Dataset.add_column`. Close #1954.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2145.diff", "html_url": "https://github.com/huggingface/datasets/pull/2145", "merged_at": "2021-04-29T14:50:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/2145.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2145" }
844,603,518
https://api.github.com/repos/huggingface/datasets/issues/2145/comments
MDExOlB1bGxSZXF1ZXN0NjAzODMxOTE2
null
2,145
https://api.github.com/repos/huggingface/datasets/issues/2145/events
true
open
2021-03-30T10:38:31Z
null
https://api.github.com/repos/huggingface/datasets/issues/2144
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4", "events_url": "https://api.github.com/users/TomPyonsuke/events{/privacy}", "followers_url": "https://api.github.com/users/TomPyonsuke/followers", "following_url": "https://api.github.com/users/TomPyonsuke/following{/other_user}", "gists_url": "https://api.github.com/users/TomPyonsuke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TomPyonsuke", "id": 26637405, "login": "TomPyonsuke", "node_id": "MDQ6VXNlcjI2NjM3NDA1", "organizations_url": "https://api.github.com/users/TomPyonsuke/orgs", "received_events_url": "https://api.github.com/users/TomPyonsuke/received_events", "repos_url": "https://api.github.com/users/TomPyonsuke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TomPyonsuke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomPyonsuke/subscriptions", "type": "User", "url": "https://api.github.com/users/TomPyonsuke" }
https://github.com/huggingface/datasets/issues/2144
[]
false
2021-04-01T09:21:17Z
null
null
[ "That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```", "Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n\r\nCan you take a look and check that it's 18.3GB ?\r\n\r\nIf not, then maybe you need to redownload it:\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache', download_mode=\"force_redownload\")\r\n```", "> Hi ! It looks like the arrow file in the folder\r\n> `/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931` is corrupted.\r\n> \r\n> Can you take a look and check that it's 18.3GB ?\r\n> \r\n> If not, then maybe you need to redownload it:\r\n> \r\n> ```python\r\n> from datasets import load_dataset\r\n> ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache', download_mode=\"force_redownload\")\r\n> ```\r\n\r\nHi Ihoestq, thanks for the reply! Actually i think my issue is i couldn't download the dataset beyond 10.7G. It feels like the whole dataset is split into different volumes and after the first one was downloaded it crashed before proceeding to the next one. I did try 'force_redownload' mode but still got the same issue.", "I just tried on my side and got no issues.\r\nWhen downloading the dataset again, did it crash at 10.7GB as well ?", "> I just tried on my side and got no issues.\r\n> When downloading the dataset again, did it crash at 10.7GB as well ?\r\n\r\nYes i have tried it multiple times on different machines. I am wondering if you could share the screenshot of your dependency versions and i will try to make them the same as yours?", "I tried using `datasets` from `master` on macos with python 3.7.2\r\nI also have `requests==2.23.0` and `tqdm==4.45.0`." ]
null
[]
Loading wikipedia 20200501.en throws pyarrow related error
NONE
https://api.github.com/repos/huggingface/datasets/issues/2144/timeline
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931... Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s] Downloading: 59%|███████████████████████████████████████████████████████████████████████████████████████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s] Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data. Traceback (most recent call last): File "load_wiki.py", line 2, in <module> ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache') File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset map_tuple=True, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table pa_table = f.read_all() File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Expected to be able to read 9176784 bytes for message body, got 4918712 **Detailed version info** datasets==1.5.0 - dataclasses [required: Any, installed: 0.8] - dill [required: Any, installed: 0.3.3] - fsspec [required: Any, installed: 0.8.7] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - huggingface-hub [required: <0.1.0, installed: 0.0.7] - filelock [required: Any, installed: 3.0.12] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - requests [required: Any, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: Any, installed: 4.49.0] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - multiprocess [required: Any, installed: 0.70.11.1] - dill [required: >=0.3.3, installed: 0.3.3] - numpy [required: >=1.17, installed: 1.17.0] - pandas [required: Any, installed: 1.1.5] - numpy [required: >=1.15.4, installed: 1.17.0] - python-dateutil [required: >=2.7.3, installed: 2.8.0] - six [required: >=1.5, installed: 1.15.0] - pytz [required: >=2017.2, installed: 2020.1] - pyarrow [required: >=0.17.1, installed: 3.0.0] - numpy [required: >=1.16.6, installed: 1.17.0] - requests [required: >=2.19.0, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: >=4.27,<4.50.0, installed: 4.49.0] - xxhash [required: Any, installed: 2.0.0]
https://api.github.com/repos/huggingface/datasets
null
844,352,067
https://api.github.com/repos/huggingface/datasets/issues/2144/comments
MDU6SXNzdWU4NDQzNTIwNjc=
null
2,144
https://api.github.com/repos/huggingface/datasets/issues/2144/events
false
closed
2021-03-30T10:00:42Z
null
https://api.github.com/repos/huggingface/datasets/issues/2143
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2143/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2143/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }
https://github.com/huggingface/datasets/pull/2143
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m" }, { "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis" } ]
false
2021-06-11T13:20:41Z
2021-06-11T13:20:36Z
null
[]
null
[]
task casting via load_dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2143/timeline
wip not satisfied with the API, it means as a dataset implementer I need to write a function with boilerplate and write classes for each `<dataset><task>` "facet".
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2143.diff", "html_url": "https://github.com/huggingface/datasets/pull/2143", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2143.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2143" }
844,313,228
https://api.github.com/repos/huggingface/datasets/issues/2143/comments
MDExOlB1bGxSZXF1ZXN0NjAzNTc0NjI0
null
2,143
https://api.github.com/repos/huggingface/datasets/issues/2143/events
true
closed
2021-03-29T23:47:02Z
null
https://api.github.com/repos/huggingface/datasets/issues/2142
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2142/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2142/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
https://github.com/huggingface/datasets/pull/2142
[]
false
2021-03-30T00:10:02Z
2021-03-30T00:10:02Z
null
[]
null
[]
Gem V1.1
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2142/timeline
This branch updates the GEM benchmark to its 1.1 version which includes: - challenge sets for most tasks - detokenized TurkCorpus to match the rest of the text simplification subtasks - fixed inputs for TurkCorpus and ASSET test sets - 18 languages in WikiLingua cc @sebastianGehrmann
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2142.diff", "html_url": "https://github.com/huggingface/datasets/pull/2142", "merged_at": "2021-03-30T00:10:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/2142.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2142" }
843,919,420
https://api.github.com/repos/huggingface/datasets/issues/2142/comments
MDExOlB1bGxSZXF1ZXN0NjAzMjQwMzUy
null
2,142
https://api.github.com/repos/huggingface/datasets/issues/2142/events
true
closed
2021-03-29T23:38:26Z
null
https://api.github.com/repos/huggingface/datasets/issues/2141
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2141/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2141/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
https://github.com/huggingface/datasets/pull/2141
[]
false
2021-03-31T13:27:50Z
2021-03-31T13:27:50Z
null
[ "Hi @lhoestq \r\nThanks a lot for taking time checking it. I update \"dataset_infos.json\", I added description to the function of _generate_samples in wikiann.py but I was not sure about the format to write in README. thanks. ", "Thanks !\r\n\r\nFor the fields description in the dataset card, something like this does the job:\r\n```\r\n- `tokens`: a `list` of `string` features.\r\n- `langs`: a `list` of `string` features that correspond to the language of each token.\r\n- `ner_tags`: a `list` of classification labels, with possible values including `O` (0), `B-PER` (1), `I-PER` (2), `B-ORG` (3), `I-ORG` (4), `B-LOC` (5), `I-LOC` (6).\r\n- `spans`: a `list` of `string` features, that is the list of named entities in the input text formatted as ``<TAG>: <mention>``\r\n```\r\n\r\nAlso for information, I think the trailer of rick and morty season 5 is out now :)", "Hi @lhoestq \r\nthank you! This is updated now, please feel free to let me know if I need to modify something :) thanks " ]
null
[]
added spans field for the wikiann datasets
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2141/timeline
Hi @lhoestq I tried to add spans to the wikiann datasets. Thanks a lot for kindly having a look. This addresses https://github.com/huggingface/datasets/issues/2130. Best regards Rabeeh
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2141.diff", "html_url": "https://github.com/huggingface/datasets/pull/2141", "merged_at": "2021-03-31T13:27:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/2141.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2141" }
843,914,790
https://api.github.com/repos/huggingface/datasets/issues/2141/comments
MDExOlB1bGxSZXF1ZXN0NjAzMjM2MjUw
null
2,141
https://api.github.com/repos/huggingface/datasets/issues/2141/events
true
closed
2021-03-29T21:32:23Z
null
https://api.github.com/repos/huggingface/datasets/issues/2140
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2140/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2140/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4", "events_url": "https://api.github.com/users/dkajtoch/events{/privacy}", "followers_url": "https://api.github.com/users/dkajtoch/followers", "following_url": "https://api.github.com/users/dkajtoch/following{/other_user}", "gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dkajtoch", "id": 32985207, "login": "dkajtoch", "node_id": "MDQ6VXNlcjMyOTg1MjA3", "organizations_url": "https://api.github.com/users/dkajtoch/orgs", "received_events_url": "https://api.github.com/users/dkajtoch/received_events", "repos_url": "https://api.github.com/users/dkajtoch/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions", "type": "User", "url": "https://api.github.com/users/dkajtoch" }
https://github.com/huggingface/datasets/pull/2140
[]
false
2021-04-09T09:32:18Z
2021-04-09T09:32:18Z
null
[ "@lhoestq I updated files" ]
null
[]
add banking77 dataset
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2140/timeline
Intent classification/detection dataset from banking category with 77 unique intents.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2140.diff", "html_url": "https://github.com/huggingface/datasets/pull/2140", "merged_at": "2021-04-09T09:32:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/2140.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2140" }
843,830,451
https://api.github.com/repos/huggingface/datasets/issues/2140/comments
MDExOlB1bGxSZXF1ZXN0NjAzMTYxMjYx
null
2,140
https://api.github.com/repos/huggingface/datasets/issues/2140/events
true
closed
2021-03-29T18:23:54Z
null
https://api.github.com/repos/huggingface/datasets/issues/2139
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4", "events_url": "https://api.github.com/users/PedroMLF/events{/privacy}", "followers_url": "https://api.github.com/users/PedroMLF/followers", "following_url": "https://api.github.com/users/PedroMLF/following{/other_user}", "gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PedroMLF", "id": 22480495, "login": "PedroMLF", "node_id": "MDQ6VXNlcjIyNDgwNDk1", "organizations_url": "https://api.github.com/users/PedroMLF/orgs", "received_events_url": "https://api.github.com/users/PedroMLF/received_events", "repos_url": "https://api.github.com/users/PedroMLF/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions", "type": "User", "url": "https://api.github.com/users/PedroMLF" }
https://github.com/huggingface/datasets/issues/2139
[]
false
2021-03-30T09:12:53Z
2021-03-30T09:12:53Z
null
[ "Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!" ]
completed
[]
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split
NONE
https://api.github.com/repos/huggingface/datasets/issues/2139/timeline
Hi, Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`. Here is the minimal reproducible example: ```python from datasets import load_dataset from datasets import ReadInstruction data_1 = load_dataset( "wikiann", "en", split="validation", ) data_1.save_to_disk("temporary_path_1") print("Save with regular split works.") data_2 = load_dataset( "wikiann", "en", split=ReadInstruction("validation", to=50, unit="%"), ) data_2.save_to_disk("temporary_path_2") ``` and the corresponding output: ``` Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9) Save with regular split works. Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9) Traceback (most recent call last): File "bug.py", line 20, in <module> data_2.save_to_disk("temporary_path_2") File "/xxxxx/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 645, in save_to_disk json.dump(state, state_file, indent=2, sort_keys=True) File "/usr/lib/python3.7/json/__init__.py", line 179, in dump for chunk in iterable: File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode o = _default(o) File "/usr/lib/python3.7/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type ReadInstruction is not JSON serializable ``` Let me know if there is some misuse from my end. Thanks in advance.
https://api.github.com/repos/huggingface/datasets
null
843,662,613
https://api.github.com/repos/huggingface/datasets/issues/2139/comments
MDU6SXNzdWU4NDM2NjI2MTM=
null
2,139
https://api.github.com/repos/huggingface/datasets/issues/2139/events
false
closed
2021-03-29T15:52:27Z
null
https://api.github.com/repos/huggingface/datasets/issues/2138
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2138/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2138/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6931004?v=4", "events_url": "https://api.github.com/users/chutaklee/events{/privacy}", "followers_url": "https://api.github.com/users/chutaklee/followers", "following_url": "https://api.github.com/users/chutaklee/following{/other_user}", "gists_url": "https://api.github.com/users/chutaklee/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chutaklee", "id": 6931004, "login": "chutaklee", "node_id": "MDQ6VXNlcjY5MzEwMDQ=", "organizations_url": "https://api.github.com/users/chutaklee/orgs", "received_events_url": "https://api.github.com/users/chutaklee/received_events", "repos_url": "https://api.github.com/users/chutaklee/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chutaklee/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chutaklee/subscriptions", "type": "User", "url": "https://api.github.com/users/chutaklee" }
https://github.com/huggingface/datasets/pull/2138
[]
false
2021-04-06T16:16:11Z
2021-04-06T07:14:38Z
null
[]
null
[]
Add CER metric
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2138/timeline
Add Character Error Rate (CER) metric that is used in evaluation in ASR. I also have written unittests (hopefully thorough enough) but I'm not sure how to integrate them into the existed codebase. ```python from cer import CER cer = CER() class TestCER(unittest.TestCase): def test_cer_case_senstive(self): refs = ['White House'] preds = ['white house'] # S = 2, D = 0, I = 0, N = 11, CER = 2 / 11 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.1818181818) < 1e-6) def test_cer_whitespace(self): refs = ['were wolf'] preds = ['werewolf'] # S = 0, D = 0, I = 1, N = 9, CER = 1 / 9 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.1111111) < 1e-6) refs = ['werewolf'] preds = ['weae wolf'] # S = 1, D = 1, I = 0, N = 8, CER = 0.25 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.25) < 1e-6) # consecutive whitespaces case 1 refs = ['were wolf'] preds = ['were wolf'] # S = 0, D = 0, I = 0, N = 9, CER = 0 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.0) < 1e-6) # consecutive whitespaces case 2 refs = ['were wolf'] preds = ['were wolf'] # S = 0, D = 0, I = 0, N = 9, CER = 0 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.0) < 1e-6) def test_cer_sub(self): refs = ['werewolf'] preds = ['weaewolf'] # S = 1, D = 0, I = 0, N = 8, CER = 0.125 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.125) < 1e-6) def test_cer_del(self): refs = ['werewolf'] preds = ['wereawolf'] # S = 0, D = 1, I = 0, N = 8, CER = 0.125 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.125) < 1e-6) def test_cer_insert(self): refs = ['werewolf'] preds = ['wereolf'] # S = 0, D = 0, I = 1, N = 8, CER = 0.125 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.125) < 1e-6) def test_cer_equal(self): refs = ['werewolf'] char_error_rate = cer.compute(predictions=refs, references=refs) self.assertEqual(char_error_rate, 0.0) def test_cer_list_of_seqs(self): refs = ['werewolf', 'I am your father'] char_error_rate = cer.compute(predictions=refs, references=refs) self.assertEqual(char_error_rate, 0.0) refs = ['werewolf', 'I am your father', 'doge'] preds = ['werxwolf', 'I am your father', 'doge'] # S = 1, D = 0, I = 0, N = 28, CER = 1 / 28 char_error_rate = cer.compute(predictions=preds, references=refs) self.assertTrue(abs(char_error_rate - 0.03571428) < 1e-6) def test_cer_unicode(self): ref = [u'我能吞下玻璃而不伤身体'] pred = [u' 能吞虾玻璃而 不霜身体啦'] # S = 3, D = 2, I = 0, N = 11 # CER = 5 / 11 char_error_rate = cer.compute(predictions=pred, references=ref) self.assertTrue(abs(char_error_rate - 0.4545454545) < 1e-6) ref = [u'我能吞', u'下玻璃而不伤身体'] pred = [u'我 能 吞 下 玻 璃', u'而不伤身体'] # S = 0, D = 5, I = 0, N = 11 # CER = 5 / 11 char_error_rate = cer.compute(predictions=pred, references=ref) self.assertTrue(abs(char_error_rate - 0.454545454545) < 1e-6) ref = [u'我能吞下玻璃而不伤身体'] char_error_rate = cer.compute(predictions=ref, references=ref) self.assertFalse(char_error_rate, 0.0) def test_cer_empty(self): ref = '' pred = 'Hypothesis' with self.assertRaises(ValueError): char_error_rate = cer.compute(predictions=pred, references=ref) if __name__ == '__main__': unittest.main() ```
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2138.diff", "html_url": "https://github.com/huggingface/datasets/pull/2138", "merged_at": "2021-04-06T07:14:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/2138.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2138" }
843,508,402
https://api.github.com/repos/huggingface/datasets/issues/2138/comments
MDExOlB1bGxSZXF1ZXN0NjAyODc4NzU2
null
2,138
https://api.github.com/repos/huggingface/datasets/issues/2138/events
true
closed
2021-03-29T15:46:12Z
null
https://api.github.com/repos/huggingface/datasets/issues/2137
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2137/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2137/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
https://github.com/huggingface/datasets/pull/2137
[]
false
2021-03-31T10:35:56Z
2021-03-31T10:35:55Z
null
[]
null
[]
Fix missing infos from concurrent dataset loading
MEMBER
https://api.github.com/repos/huggingface/datasets/issues/2137/timeline
This should fix issue #2131 When calling `load_dataset` at the same time from 2 workers, one of the worker could have missing split infos when reloading the dataset from the cache.
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2137.diff", "html_url": "https://github.com/huggingface/datasets/pull/2137", "merged_at": "2021-03-31T10:35:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/2137.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2137" }
843,502,835
https://api.github.com/repos/huggingface/datasets/issues/2137/comments
MDExOlB1bGxSZXF1ZXN0NjAyODc0MDYw
null
2,137
https://api.github.com/repos/huggingface/datasets/issues/2137/events
true
closed
2021-03-29T15:34:13Z
null
https://api.github.com/repos/huggingface/datasets/issues/2136
null
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2136/reactions" }
false
https://api.github.com/repos/huggingface/datasets/issues/2136/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4", "events_url": "https://api.github.com/users/adamlin120/events{/privacy}", "followers_url": "https://api.github.com/users/adamlin120/followers", "following_url": "https://api.github.com/users/adamlin120/following{/other_user}", "gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adamlin120", "id": 31605305, "login": "adamlin120", "node_id": "MDQ6VXNlcjMxNjA1MzA1", "organizations_url": "https://api.github.com/users/adamlin120/orgs", "received_events_url": "https://api.github.com/users/adamlin120/received_events", "repos_url": "https://api.github.com/users/adamlin120/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions", "type": "User", "url": "https://api.github.com/users/adamlin120" }
https://github.com/huggingface/datasets/pull/2136
[]
false
2021-03-31T12:48:02Z
2021-03-31T12:48:01Z
null
[]
null
[]
fix dialogue action slot name and value
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2136/timeline
fix #2128
https://api.github.com/repos/huggingface/datasets
{ "diff_url": "https://github.com/huggingface/datasets/pull/2136.diff", "html_url": "https://github.com/huggingface/datasets/pull/2136", "merged_at": "2021-03-31T12:48:01Z", "patch_url": "https://github.com/huggingface/datasets/pull/2136.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2136" }
843,492,015
https://api.github.com/repos/huggingface/datasets/issues/2136/comments
MDExOlB1bGxSZXF1ZXN0NjAyODY0ODY5
null
2,136
https://api.github.com/repos/huggingface/datasets/issues/2136/events
true
closed
2021-03-29T10:47:50Z
null
https://api.github.com/repos/huggingface/datasets/issues/2135
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
https://github.com/huggingface/datasets/issues/2135
[]
false
2021-03-30T10:20:23Z
2021-03-30T10:20:23Z
null
[ "Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?", "Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, in the link you sent I see split of {split}-context-{context_language}-question-{question_language}.json with context_language=question_language=en, TFDS most probably has extracted english ones from these files as en language files, but translate-train/test do not have en indeed. thanks a lot for the great explanations", "I close the ticket, since I do not see any en existing, they have trained on \"SQuAD V1.1\" instead. Thanks. " ]
completed
[]
en language data from MLQA dataset is missing
CONTRIBUTOR
https://api.github.com/repos/huggingface/datasets/issues/2135/timeline
Hi I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue.
https://api.github.com/repos/huggingface/datasets
null
843,246,344
https://api.github.com/repos/huggingface/datasets/issues/2135/comments
MDU6SXNzdWU4NDMyNDYzNDQ=
null
2,135
https://api.github.com/repos/huggingface/datasets/issues/2135/events
false
closed
2021-03-29T10:43:15Z
null
https://api.github.com/repos/huggingface/datasets/issues/2134
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4", "events_url": "https://api.github.com/users/prokopCerny/events{/privacy}", "followers_url": "https://api.github.com/users/prokopCerny/followers", "following_url": "https://api.github.com/users/prokopCerny/following{/other_user}", "gists_url": "https://api.github.com/users/prokopCerny/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prokopCerny", "id": 5815801, "login": "prokopCerny", "node_id": "MDQ6VXNlcjU4MTU4MDE=", "organizations_url": "https://api.github.com/users/prokopCerny/orgs", "received_events_url": "https://api.github.com/users/prokopCerny/received_events", "repos_url": "https://api.github.com/users/prokopCerny/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prokopCerny/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prokopCerny/subscriptions", "type": "User", "url": "https://api.github.com/users/prokopCerny" }
https://github.com/huggingface/datasets/issues/2134
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-05-03T17:59:21Z
2021-05-03T17:59:21Z
null
[ "Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) // 64))\r\ntable = pa.Table.from_arrays([a], names=[\"foo\"])\r\npickle.dumps(table) # fails with an OverflowError\r\npickle.dumps(table, 4) # works !\r\n```\r\nWe'll do the change to use `protocol=4`.\r\n\r\nMoreover I've also seen other users complain about this error\r\n```\r\nstruct.error: 'I' format requires 0 <= number <= 4294967295\r\n```\r\n\r\nIt looks like something related to the 4GB limit as well but I'm not able to reproduce on my side.\r\nDo you think you can provide a script that reproduces the issue ?\r\nHow big is your dataset ? (number of bytes, number of rows)\r\n\r\n", "Hi!\r\nSo I've managed to created a minimum working (well technically crashing) example for the multiprocessing case, I create a huge list of zeros, like in your example, and then I try to .map(None, num_proc=2) over it, which then crashes, here's the code:\r\n\r\n```python\r\nfrom datasets import Dataset\r\n\r\nif __name__ == '__main__':\r\n ton_of_zeroes = [0] * ((12 * 8 << 30) // 64)\r\n large_dataset = Dataset.from_dict({'col': ton_of_zeroes})\r\n print(\"Start\")\r\n large_dataset.map(function=None, num_proc=2)\r\n print(\"Done - should not print\")\r\n```\r\n\r\nThe amount of zeros could probably be reduced, I haven't tried to minimize it to find the breaking point, I just increased it from your code (which by quick glance I assumed tried to allocate over 4 GiB)\r\n\r\nRunning this results in the following traceback:\r\n\r\n```\r\nParameter 'indices'=[ 0 1 2 ... 805306365 805306366 805306367] of the transform datasets.arrow_dataset.Dataset.select couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. 
Subsequent hashing failures won't be showed.\r\nTraceback (most recent call last):\r\n File \"./crash_multiproc_pickle.py\", line 7, in <module>\r\n large_dataset.map(function=None, num_proc=2)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1485, in map\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1485, in <listcomp>\r\n transformed_shards = [r.get() for r in results]\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py\", line 657, in get\r\n raise self._value\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py\", line 431, in _handle_tasks\r\n put(task)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py\", line 209, in send\r\n self._send_bytes(_ForkingPickler.dumps(obj))\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py\", line 54, in dumps\r\n cls(buf, protocol, *args, **kwds).dump(obj)\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 454, in dump\r\n StockPickler.dump(self, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 437, in dump\r\n self.save(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 662, in save_reduce\r\n save(state)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py\", line 941, in save_module_dict\r\n StockPickler.save_dict(pickler, obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 859, in save_dict\r\n self._batch_setitems(obj.items())\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 885, in _batch_setitems\r\n save(v)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 
504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 846, in _batch_appends\r\n save(tmp[0])\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 789, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 819, in save_list\r\n self._batch_appends(obj)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 843, in _batch_appends\r\n save(x)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 549, in save\r\n self.save_reduce(obj=obj, *rv)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 638, in save_reduce\r\n save(args)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 774, in save_tuple\r\n save(element)\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 504, in save\r\n f(self, obj) # Call unbound method with explicit 
self\r\n File \"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py\", line 732, in save_bytes\r\n self._write_large_bytes(BINBYTES + pack(\"<I\", n), obj)\r\nstruct.error: 'I' format requires 0 <= number <= 4294967295\r\n```\r\n\r\nMy datasets usually have hundreds of thousands to low millions of rows, with each row containing a list of 10 strings and list of vectors of different length (the strings tokenized), which in the worst case have 10\\*512\\*8 = 40960 bytes (but usually it is much smaller, as the vectors tend to be shorter. I need these groups of text lines to create training data for the Inverse Cloze Task.\r\n\r\nAnyway I don't think my particular dataset is relevant, as the tiny script I created also manages to crash.\r\nBut I think the issue is the same as the save_to_disk, from the traceback it seems that in multiprocessing, it tries to use dill to return the result of the map workers, which tries to pickle the data and can't do it, probably because it's again using the older pickle protocol. That's my guess anyway.", "I just merged a fix #2150 that allows to pickle tables bigger than 4GiB\r\nFeel free to try it on the `master` branch !", "awesome! I started getting this error as well when I tried to tokenize with a longer sequence length", "@prokopCerny does this fix work for you? I found that with the latest master, my container with 500GB RAM starts crashing when I try to map a large dataset using `num_proc`.\r\n\r\n@lhoestq would it be possible to implement some logic to keep the individual cache files small (say below 100mb)? I find this helps with loading large datasets, but the \"hack\" I was using (increasing `num_proc` to a large number) doesn't work anymore with the latest master; my container crashes even with `num_proc=200` now", "Closing since the original issue was fixed in #2150 \r\nFeel free to reopen if you are still experiencing it.\r\nFor the other problems, please open separate issues" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
Saving large in-memory datasets with save_to_disk crashes because of pickling
NONE
https://api.github.com/repos/huggingface/datasets/issues/2134/timeline
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so I decided to do these steps completely out of the datasets library. So my workflow is to do several .map() on datasets object, then for the operation which is faster in memory to extract the necessary columns from the dataset and then drop it whole, do the transformation in memory, and then create a fresh Dataset object using .from_dict() or other method. When I then try to call save_to_disk(path) on the dataset, it crashes because of pickling, which appears to be because of using old pickle protocol which doesn't support large files (over 4 GiB). ``` Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 80, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 75, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify contexts_dataset.save_to_disk(chunked_path) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk self = pickle.loads(pickle.dumps(self)) OverflowError: cannot serialize a bytes object larger than 4 GiB ``` From what I've seen this issue may be possibly fixed, as the line `self = pickle.loads(pickle.dumps(self))` does not appear to be present in the current state of the repository. To save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk. Additional issue when working with these large in-memory datasets is when using multiprocessing, is again to do with pickling. I've tried to speed up the mapping with function=None by specifying num_proc to the available cpu count, and I again get issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that. 
``` Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 
504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in 
save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, 
in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File 
"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295 ```
https://api.github.com/repos/huggingface/datasets
null
843,242,849
https://api.github.com/repos/huggingface/datasets/issues/2134/comments
MDU6SXNzdWU4NDMyNDI4NDk=
null
2,134
https://api.github.com/repos/huggingface/datasets/issues/2134/events
false
closed
2021-03-29T09:03:09Z
null
https://api.github.com/repos/huggingface/datasets/issues/2133
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2133
[]
false
2021-03-30T17:40:57Z
2021-03-30T17:40:57Z
null
[ "If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631?\",\r\n... \"\\u0643\\u0645 \\u0645\\u0631\\u0629 \\u064a\\u062a\\u0645 \\u0646\\u0634\\u0631\\u0647\\u0627 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0645\\u0627 \\u0647\\u064a \\u0627\\u0644\\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u064a\\u0648\\u0645\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0643\\u0645 \\u0639\\u062f\\u062f \\u0627\\u0644\\u0627\\u0648\\u0631\\u0627\\u0642 \\u0627\\u0644\\u0627\\u062e\\u0628\\u0627\\u0631\\u064a\\u0629 \\u0644\\u0644\\u0637\\u0644\\u0627\\u0628 \\u0627\\u0644\\u062a\\u064a \\u0648\\u062c\\u062f\\u062a \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\",\r\n... \"\\u0641\\u064a \\u0627\\u064a \\u0633\\u0646\\u0629 \\u0628\\u062f\\u0627\\u062a \\u0648\\u0631\\u0642\\u0629 \\u0627\\u0644\\u0637\\u0627\\u0644\\u0628 \\u0627\\u0644\\u062d\\u0633 \\u0627\\u0644\\u0633\\u0644\\u064a\\u0645 \\u0628\\u0627\\u0644\\u0646\\u0634\\u0631 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u0645?\"\r\n... ]\r\n>>> print(questions)\r\n['متى بدات المجلة المدرسية في نوتردام بالنشر?', 'كم مرة يتم نشرها في نوتردام?', 'ما هي الورقة اليومية للطلاب في نوتردام?', 'كم عدد الاوراق الاخبارية للطلاب التي وجدت في نوتردام?', 'في اي سنة بدات ورقة الطالب الحس السليم بالنشر في نوتردام?']\r\n```\r\nI don't think we can change this", "Hi @dorost1234.\r\n\r\nIn Python 3, strings are sequences of Unicode _code points_. Unicode is a specification that maps all characters (and emoji symbols) with its unique representation in terms of code points. That is what you see: Unicode code points (represented by a \\u escaped sequence of 16-bit hex values).\r\n\r\nCharacters are usually represented (on screen and papers) with a graphical element called _glyph_. That is what you would like to see: glyphs. But Python does not care about glyphs: that is the job of the GUI or the terminal; glyphs are what you get with the `print` function (if your terminal is properly configured to display those glyphs).\r\n\r\nYou have more detailed information about Unicode in the Python documentation: https://docs.python.org/3/howto/unicode.html", "thank you so much for the insightful comments. " ]
completed
[]
bug in mlqa dataset
NONE
https://api.github.com/repos/huggingface/datasets/issues/2133/timeline
Hi Looking into MLQA dataset for langauge "ar": ``` "question": [ "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?", "\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?", "\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?", "\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?", "\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?" ] ``` the questions are in the wrong format, and not readable, could you please have a look? thanks @lhoestq
https://api.github.com/repos/huggingface/datasets
null
843,149,680
https://api.github.com/repos/huggingface/datasets/issues/2133/comments
MDU6SXNzdWU4NDMxNDk2ODA=
null
2,133
https://api.github.com/repos/huggingface/datasets/issues/2133/events
false
open
2021-03-29T08:56:21Z
null
https://api.github.com/repos/huggingface/datasets/issues/2132
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2132
[]
false
2021-04-04T09:57:15Z
null
null
[ "You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\r\n```", "Hi\nthank you very much for the great response, this will be really wonderful\nto have one configuration per language, as one need the dataset in majority\nof case per language for cross-lingual evaluations.\nThis becomes also then more close to TFDS format, which is separated per\nlanguage https://www.tensorflow.org/datasets/catalog/tydi_qa which will be\nreally awesome to have.\nthanks\n\nOn Mon, Mar 29, 2021 at 6:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> You can filter the languages this way:\n>\n> tydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\n>\n> Otherwise maybe we can have one configuration per language ?\n> What do you think of this for example ?\n>\n> load_dataset(\"tydiqa\", \"primary_task.en\")\n>\n> —\n> You are receiving this because you authored the thread.\n> Reply to this email directly, view it on GitHub\n> <https://github.com/huggingface/datasets/issues/2132#issuecomment-809516799>,\n> or unsubscribe\n> <https://github.com/notifications/unsubscribe-auth/AS37NMXPW2PWSQ2RHG73O7TTGCY4LANCNFSM4Z7ER7IA>\n> .\n>\n", "@lhoestq I greatly appreciate any updates on this. thanks a lot" ]
null
[]
TydiQA dataset is mixed and is not split per language
NONE
https://api.github.com/repos/huggingface/datasets/issues/2132/timeline
Hi @lhoestq Currently TydiQA is mixed and user can only access the whole training set of all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa for using this dataset, one need to train/evaluate in each separate language, and having them mixed, makes it hard to use this dataset. This is much convenient for user to have them split and I appreciate your help on this. Meanwhile, till hopefully this is split per language, I greatly appreciate telling me how I can preprocess and get data per language. thanks a lot
https://api.github.com/repos/huggingface/datasets
null
843,142,822
https://api.github.com/repos/huggingface/datasets/issues/2132/comments
MDU6SXNzdWU4NDMxNDI4MjI=
null
2,132
https://api.github.com/repos/huggingface/datasets/issues/2132/events
false
closed
2021-03-29T08:45:58Z
null
https://api.github.com/repos/huggingface/datasets/issues/2131
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
{ "+1": 0, "-1": 0, "confused": 1, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4", "events_url": "https://api.github.com/users/andy-yangz/events{/privacy}", "followers_url": "https://api.github.com/users/andy-yangz/followers", "following_url": "https://api.github.com/users/andy-yangz/following{/other_user}", "gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andy-yangz", "id": 23011317, "login": "andy-yangz", "node_id": "MDQ6VXNlcjIzMDExMzE3", "organizations_url": "https://api.github.com/users/andy-yangz/orgs", "received_events_url": "https://api.github.com/users/andy-yangz/received_events", "repos_url": "https://api.github.com/users/andy-yangz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions", "type": "User", "url": "https://api.github.com/users/andy-yangz" }
https://github.com/huggingface/datasets/issues/2131
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
false
2021-04-10T11:08:55Z
2021-04-10T11:08:55Z
null
[ "Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue", "The PR got merged :)\r\nFeel free to try it out on the `master` branch", "Sorry for the late reply. \r\nNow everything just works well XD" ]
completed
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object
NONE
https://api.github.com/repos/huggingface/datasets/issues/2131/timeline
version: 1.5.0 met a very strange error, I am training large scale language model, and need train on 2 machines(workers). And sometimes I will get this error `TypeError: 'NoneType' object is not iterable` This is traceback ``` 71 |   | Traceback (most recent call last): -- | -- | -- 72 |   | File "run_gpt.py", line 316, in <module> 73 |   | main() 74 |   | File "run_gpt.py", line 222, in main 75 |   | delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"]) 76 |   | File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset 77 |   | use_auth_token=use_auth_token, 78 |   | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare 79 |   | self.download_post_processing_resources(dl_manager) 80 |   | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources 81 |   | for split in self.info.splits: 82 |   | TypeError: 'NoneType' object is not iterable 83 |   | WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2) 84 |   | Traceback (most recent call last): 85 |   | File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main 86 |   | "__main__", mod_spec) 87 |   | File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code 88 |   | exec(code, run_globals) 89 |   | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module> 90 |   | main() 91 |   | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main 92 |   | sigkill_handler(signal.SIGTERM, None) # not coming back 93 |   | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler 94 |   | raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd) ``` On worker 1 it loads the dataset well, however on worker 2 will get this error. And I will meet this error from time to time, sometimes it just goes well.
https://api.github.com/repos/huggingface/datasets
null
843,133,112
https://api.github.com/repos/huggingface/datasets/issues/2131/comments
MDU6SXNzdWU4NDMxMzMxMTI=
null
2,131
https://api.github.com/repos/huggingface/datasets/issues/2131/events
false
closed
2021-03-29T08:23:00Z
null
https://api.github.com/repos/huggingface/datasets/issues/2130
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions" }
null
https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name}
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234" }
https://github.com/huggingface/datasets/issues/2130
[]
false
2021-08-27T14:44:18Z
2021-08-27T14:44:18Z
null
[ "Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ", "Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined here:\r\n\r\nhttps://github.com/tensorflow/datasets/blob/c7096bd38e86ed240b8b2c11ecab9893715a7d55/tensorflow_datasets/text/wikiann/wikiann.py#L81-L126\r\n\r\nIt would be nice to include the `spans` field in this dataset as in TFDS. This could be a good first issue for new contributors !\r\n\r\nThe objective is to use `tags_to_spans` in the `_generate_examples` method [here](https://github.com/huggingface/nlp/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L292-L316) to create he `spans` for each example.", "Hi @lhoestq \r\nthank you very much for the help, it would be very nice to have it included, here is the full code, one need to also convert tags to string first:\r\n\r\n```\r\nimport datasets \r\nfrom datasets import load_dataset\r\n\r\ndef tags_to_spans(tags):\r\n \"\"\"Convert tags to spans.\"\"\"\r\n spans = set()\r\n span_start = 0\r\n span_end = 0\r\n active_conll_tag = None\r\n for index, string_tag in enumerate(tags):\r\n # Actual BIO tag.\r\n bio_tag = string_tag[0]\r\n assert bio_tag in [\"B\", \"I\", \"O\"], \"Invalid Tag\"\r\n conll_tag = string_tag[2:]\r\n if bio_tag == \"O\":\r\n # The span has ended.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = None\r\n # We don't care about tags we are\r\n # told to ignore, so we do nothing.\r\n continue\r\n elif bio_tag == \"B\":\r\n # We are entering a new span; reset indices and active tag to new span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n elif bio_tag == \"I\" and conll_tag == active_conll_tag:\r\n # We're inside a span.\r\n span_end += 1\r\n else:\r\n # This is the case the bio label is an \"I\", but either:\r\n # 1) the span hasn't started - i.e. an ill formed span.\r\n # 2) We have IOB1 tagging scheme.\r\n # We'll process the previous span if it exists, but also include this\r\n # span. 
This is important, because otherwise, a model may get a perfect\r\n # F1 score whilst still including false positive ill-formed spans.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n active_conll_tag = conll_tag\r\n span_start = index\r\n span_end = index\r\n # Last token might have been a part of a valid span.\r\n if active_conll_tag:\r\n spans.add((active_conll_tag, (span_start, span_end)))\r\n # Return sorted list of spans\r\n return sorted(list(spans), key=lambda x: x[1][0])\r\n\r\ndataset = load_dataset('wikiann', 'en', split=\"train\")\r\nner_tags = {\r\n 0:\"O\",\r\n 1:\"B-PER\",\r\n 2:\"I-PER\",\r\n 3:\"B-ORG\",\r\n 4:\"I-ORG\",\r\n 5:\"B-LOC\",\r\n 6:\"I-LOC\"\r\n}\r\n\r\ndef get_spans(tokens, tags):\r\n \"\"\"Convert tags to textspans.\"\"\"\r\n spans = tags_to_spans(tags)\r\n text_spans = [\r\n x[0] + \": \" + \" \".join([tokens[i]\r\n for i in range(x[1][0], x[1][1] + 1)])\r\n for x in spans\r\n ]\r\n if not text_spans:\r\n text_spans = [\"None\"]\r\n return text_spans\r\n\r\n\r\nfor i, d in enumerate(dataset):\r\n tokens = d['tokens']\r\n tags = d['ner_tags']\r\n tags = [ner_tags[i] for i in tags]\r\n spans = get_spans(tokens, tags)\r\n print(\"spans \", spans)\r\n print(d)\r\n if i > 10:\r\n break; \r\n```\r\nI am not sure how to contribute to the repository and how things work, could you let me know how one can access the datasets to be able to contribute to the repository? Maybe I could do it then\r\nthanks \r\n", "Cool ! Let me give you some context:\r\n\r\n#### Contribution guide\r\n\r\nYou can find the contribution guide here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md\r\n\r\nIt explains how to set up your dev environment in a few steps.\r\n\r\n#### Dataset loading\r\n\r\nEach Dataset is defined by a Table that have many rows (one row = one example) and columns (one column = one feature).\r\nTo change how a dataset is constructed, you have to modify its dataset script that you can find here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/wikiann/wikiann.py\r\n\r\nIt includes everything needed to load the WikiANN dataset.\r\nYou can load locally a modified version of `wikiann.py` with `load_dataset(\"path/to/wikiann.py\")`.\r\n\r\n#### Define a new column\r\n\r\nEach column has a name and a type. You can see how the features of WikiANN are defined here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L245-L263\r\n\r\nIdeally we would have one additional feature \"spans\":\r\n```python\r\n \"spans\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\n#### Compute the content of each row\r\n\r\nTo build the WikiANN rows, the _generate_examples method from [here](https://github.com/huggingface/nlp/blob/c98e4b8f23e3770c401c6d9326e243e1ffd599ec/datasets/wikiann/wikiann.py#L292-L316) is used. This function `yield` one python dictionary for each example:\r\n```python\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs}\r\n```\r\n\r\nThe objective would be to return instead something like\r\n```python\r\nspans = spans = get_spans(tokens, tags)\r\nyield guid_index, {\"tokens\": tokens, \"ner_tags\": ner_tags, \"langs\": langs, \"spans\": spans}\r\n```\r\n\r\nLet me know if you have questions !", "The PR was merged. Issue should be closed.\r\n\r\nCC: @lhoestq " ]
completed
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
wikiann dataset is missing columns
NONE
https://api.github.com/repos/huggingface/datasets/issues/2130/timeline
Hi Wikiann dataset needs to have "spans" columns, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets, could you please have a look? thank you @lhoestq
https://api.github.com/repos/huggingface/datasets
null
843,111,936
https://api.github.com/repos/huggingface/datasets/issues/2130/comments
MDU6SXNzdWU4NDMxMTE5MzY=
null
2,130
https://api.github.com/repos/huggingface/datasets/issues/2130/events
false