Dataset columns (name, type, and value statistics):

| column | type | details |
| --- | --- | --- |
| url | stringlengths | 58-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72-75 |
| comments_url | stringlengths | 67-70 |
| events_url | stringlengths | 65-68 |
| html_url | stringlengths | 46-51 |
| id | int64 | 599M-1.62B |
| node_id | stringlengths | 18-32 |
| number | int64 | 1-5.62k |
| title | stringlengths | 1-290 |
| user | dict | |
| labels | list | |
| state | stringclasses | 1 value |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 0-228k |
| reactions | dict | |
| timeline_url | stringlengths | 67-70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 2 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/2416
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2416/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2416/comments
https://api.github.com/repos/huggingface/datasets/issues/2416/events
https://github.com/huggingface/datasets/pull/2416
903,932,299
MDExOlB1bGxSZXF1ZXN0NjU1MTM3NDUy
2,416
Add KLUE dataset
{ "login": "jungwhank", "id": 53588015, "node_id": "MDQ6VXNlcjUzNTg4MDE1", "avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungwhank", "html_url": "https://github.com/jungwhank", "followers_url": "https://api.github.com/users/jungwhank/followers", "following_url": "https://api.github.com/users/jungwhank/following{/other_user}", "gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions", "organizations_url": "https://api.github.com/users/jungwhank/orgs", "repos_url": "https://api.github.com/users/jungwhank/repos", "events_url": "https://api.github.com/users/jungwhank/events{/privacy}", "received_events_url": "https://api.github.com/users/jungwhank/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-27T15:49:51"
"2021-06-09T15:00:02"
"2021-06-04T17:45:15"
CONTRIBUTOR
null
Add the `KLUE (Korean Language Understanding Evaluation)` dataset, recently released; see the [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks). Please let me know if there's anything missing in the code or README. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2416/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2416/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2416", "html_url": "https://github.com/huggingface/datasets/pull/2416", "diff_url": "https://github.com/huggingface/datasets/pull/2416.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2416.patch", "merged_at": "2021-06-04T17:45:15" }
true
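Once a dataset script like this is merged, the dataset becomes loadable by name. A minimal usage sketch, assuming the benchmark is published under the `klue` name with one config per task; `"ynat"` (topic classification) is assumed here based on the KLUE task list:

```python
from datasets import load_dataset

# Assumption: the merged script is available as "klue" with per-task
# configs such as "ynat" (topic classification).
klue_ynat = load_dataset("klue", "ynat")
print(klue_ynat["train"][0])
```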
https://api.github.com/repos/huggingface/datasets/issues/2415
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2415/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2415/comments
https://api.github.com/repos/huggingface/datasets/issues/2415/events
https://github.com/huggingface/datasets/issues/2415
903,923,097
MDU6SXNzdWU5MDM5MjMwOTc=
2,415
Cached dataset not loaded
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "It actually seems to happen all the time in above configuration:\r\n* the function `filter_by_duration` correctly loads cached processed dataset\r\n* the function `prepare_dataset` is always reexecuted\r\n\r\nI end up solving the issue by saving to disk my dataset at the end but I'm still wondering if it's a bug or limitation here.", "Hi ! The hash used for caching `map` results is the fingerprint of the resulting dataset. It is computed using three things:\r\n- the old fingerprint of the dataset\r\n- the hash of the function\r\n- the hash of the other parameters passed to `map`\r\n\r\nYou can compute the hash of your function (or any python object) with\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\n\r\nmy_func = lambda x: x + 1\r\nprint(Hasher.hash(my_func))\r\n```\r\n\r\nIf `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.", "> If `prepare_dataset` is always executed, maybe this is because your `processor` has a different hash each time you want to execute it.\r\n\r\nYes I think that was the issue.\r\n\r\nFor the hash of the function:\r\n* does it consider just the name or the actual code of the function\r\n* does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)", "> does it consider just the name or the actual code of the function\r\n\r\nIt looks at the name and the actual code and all variables such as recursively. It uses `dill` to do so, which is based on `pickle`.\r\nBasically the hash is computed using the pickle bytes of your function (computed using `dill` to support most python objects).\r\n\r\n> does it consider variables that are not passed explicitly as parameters to the functions (such as the processor here)\r\n\r\nYes it does thanks to recursive pickling.", "Thanks for these explanations. I'm closing the issue." ]
"2021-05-27T15:40:06"
"2021-06-02T13:15:47"
"2021-06-02T13:15:47"
CONTRIBUTOR
null
## Describe the bug I have a large dataset (common_voice, english) where I use several map and filter functions. Sometimes my cached datasets after specific functions are not loaded. I always use the same arguments, same functions, no seed… ## Steps to reproduce the bug ```python def filter_by_duration(batch): return ( batch["duration"] <= 10 and batch["duration"] >= 1 and len(batch["target_text"]) > 5 ) def prepare_dataset(batch): batch["input_values"] = processor( batch["speech"], sampling_rate=batch["sampling_rate"][0] ).input_values with processor.as_target_processor(): batch["labels"] = processor(batch["target_text"]).input_ids return batch train_dataset = train_dataset.filter( filter_by_duration, remove_columns=["duration"], num_proc=data_args.preprocessing_num_workers, ) # PROBLEM HERE -> below function is reexecuted and cache is not loaded train_dataset = train_dataset.map( prepare_dataset, remove_columns=train_dataset.column_names, batch_size=training_args.per_device_train_batch_size, batched=True, num_proc=data_args.preprocessing_num_workers, ) # Later in script set_caching_enabled(False) # apply map on trained model to eval/test sets ``` ## Expected results The cached dataset should always be reloaded. ## Actual results The function is reexecuted. I have access to cached files `cache-xxxxx.arrow`. Is there a way I can somehow load manually 2 versions and see how the hash was created for debug purposes (to know if it's an issue with dataset or function)? ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.29 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2415/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2415/timeline
null
completed
null
null
false
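The caching behaviour discussed in this thread can be checked directly with the hashing utility mentioned in the comments. A minimal sketch, where the hypothetical `created_at` attribute stands in for whatever made the real `processor`'s pickled state differ between runs:

```python
import time

from datasets.fingerprint import Hasher


class Processor:
    def __init__(self):
        self.created_at = time.time()  # differs on every run -> unstable pickle bytes


processor = Processor()


def prepare_dataset(batch):
    # References the global `processor`; recursive (dill) pickling pulls
    # it into the function's hash.
    batch["created_at"] = processor.created_at
    return batch


# If this hash differs between two runs, `map` cannot find its cached result.
print(Hasher.hash(prepare_dataset))
```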
https://api.github.com/repos/huggingface/datasets/issues/2414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2414/comments
https://api.github.com/repos/huggingface/datasets/issues/2414/events
https://github.com/huggingface/datasets/pull/2414
903,877,096
MDExOlB1bGxSZXF1ZXN0NjU1MDg5OTIw
2,414
Update README.md
{ "login": "cryoff", "id": 15029054, "node_id": "MDQ6VXNlcjE1MDI5MDU0", "avatar_url": "https://avatars.githubusercontent.com/u/15029054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cryoff", "html_url": "https://github.com/cryoff", "followers_url": "https://api.github.com/users/cryoff/followers", "following_url": "https://api.github.com/users/cryoff/following{/other_user}", "gists_url": "https://api.github.com/users/cryoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/cryoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cryoff/subscriptions", "organizations_url": "https://api.github.com/users/cryoff/orgs", "repos_url": "https://api.github.com/users/cryoff/repos", "events_url": "https://api.github.com/users/cryoff/events{/privacy}", "received_events_url": "https://api.github.com/users/cryoff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-27T14:53:19"
"2021-06-28T13:46:14"
"2021-06-28T13:04:56"
CONTRIBUTOR
null
Provides description of data instances and dataset features
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2414/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2414", "html_url": "https://github.com/huggingface/datasets/pull/2414", "diff_url": "https://github.com/huggingface/datasets/pull/2414.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2414.patch", "merged_at": "2021-06-28T13:04:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/2413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2413/comments
https://api.github.com/repos/huggingface/datasets/issues/2413/events
https://github.com/huggingface/datasets/issues/2413
903,777,557
MDU6SXNzdWU5MDM3Nzc1NTc=
2,413
AttributeError: 'DatasetInfo' object has no attribute 'task_templates'
{ "login": "jungwhank", "id": 53588015, "node_id": "MDQ6VXNlcjUzNTg4MDE1", "avatar_url": "https://avatars.githubusercontent.com/u/53588015?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungwhank", "html_url": "https://github.com/jungwhank", "followers_url": "https://api.github.com/users/jungwhank/followers", "following_url": "https://api.github.com/users/jungwhank/following{/other_user}", "gists_url": "https://api.github.com/users/jungwhank/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungwhank/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungwhank/subscriptions", "organizations_url": "https://api.github.com/users/jungwhank/orgs", "repos_url": "https://api.github.com/users/jungwhank/repos", "events_url": "https://api.github.com/users/jungwhank/events{/privacy}", "received_events_url": "https://api.github.com/users/jungwhank/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Can you try using a more up-to-date version ? We added the task_templates in `datasets` 1.7.0.\r\n\r\nIdeally when you're working on new datasets, you should install and use the local version of your fork of `datasets`. Here I think you tried to run the 1.7.0 tests with the 1.6.2 code" ]
"2021-05-27T13:44:28"
"2021-06-01T01:05:47"
"2021-06-01T01:05:47"
CONTRIBUTOR
null
## Describe the bug Hello, I'm trying to add a dataset and contribute, but the test keeps failing with the CLI below. ` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>` ## Steps to reproduce the bug It seems like a bug, since I see the error with an existing dataset, not the dataset I'm trying to add. ` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<any_dataset>` ## Expected results All tests pass ## Actual results ``` # check that dataset is not empty self.parent.assertListEqual(sorted(dataset_builder.info.splits.keys()), sorted(dataset)) for split in dataset_builder.info.splits.keys(): # check that loaded datset is not empty self.parent.assertTrue(len(dataset[split]) > 0) # check that we can cast features for each task template > task_templates = dataset_builder.info.task_templates E AttributeError: 'DatasetInfo' object has no attribute 'task_templates' tests/test_dataset_common.py:175: AttributeError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Darwin-20.4.0-x86_64-i386-64bit - Python version: 3.7.7 - PyTorch version (GPU?): 1.7.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2413/timeline
null
completed
null
null
false
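The workflow suggested in the reply above, sketched as shell commands; the `<...>` placeholders follow the issue's own convention and are not literal values:

```bash
# Install your fork of `datasets` in editable mode so the tests run
# against the local code rather than an older installed release.
git clone https://github.com/<your-username>/datasets.git
cd datasets
pip install -e ".[dev]"
RUN_SLOW=1 pytest "tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_<my_dataset>"
```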
https://api.github.com/repos/huggingface/datasets/issues/2412
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2412/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2412/comments
https://api.github.com/repos/huggingface/datasets/issues/2412/events
https://github.com/huggingface/datasets/issues/2412
903,769,151
MDU6SXNzdWU5MDM3NjkxNTE=
2,412
Docstring mistake: dataset vs. metric
{ "login": "PhilipMay", "id": 229382, "node_id": "MDQ6VXNlcjIyOTM4Mg==", "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilipMay", "html_url": "https://github.com/PhilipMay", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "repos_url": "https://api.github.com/users/PhilipMay/repos", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> I can provide a PR l8er...\r\n\r\nSee #2425 " ]
"2021-05-27T13:39:11"
"2021-06-01T08:18:04"
"2021-06-01T08:18:04"
CONTRIBUTOR
null
This: https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582 Should better be something like: `a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)` I can provide a PR l8er...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2412/timeline
null
completed
null
null
false
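For reference, the helper named in the proposed docstring wording; a small sketch against the `datasets` API of that era:

```python
import datasets

# List all metric identifiers, analogous to datasets.list_datasets().
metric_ids = datasets.list_metrics()
print(len(metric_ids), metric_ids[:5])

# A metric identifier is passed to load_metric, not load_dataset.
accuracy = datasets.load_metric("accuracy")
```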
https://api.github.com/repos/huggingface/datasets/issues/2411
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2411/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2411/comments
https://api.github.com/repos/huggingface/datasets/issues/2411/events
https://github.com/huggingface/datasets/pull/2411
903,671,778
MDExOlB1bGxSZXF1ZXN0NjU0OTAzNjg2
2,411
Add DOI badge to README
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-27T12:36:47"
"2021-05-27T13:42:54"
"2021-05-27T13:42:54"
MEMBER
null
Once the latest release was published, the DOI badge was automatically generated by Zenodo.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2411/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2411/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2411", "html_url": "https://github.com/huggingface/datasets/pull/2411", "diff_url": "https://github.com/huggingface/datasets/pull/2411.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2411.patch", "merged_at": "2021-05-27T13:42:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/2410
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2410/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2410/comments
https://api.github.com/repos/huggingface/datasets/issues/2410/events
https://github.com/huggingface/datasets/pull/2410
903,613,676
MDExOlB1bGxSZXF1ZXN0NjU0ODUwMjY4
2,410
fix #2391 add original answers in kilt-TriviaQA
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-27T11:54:29"
"2021-06-15T12:35:57"
"2021-06-14T17:29:10"
CONTRIBUTOR
null
cc @yjernite is it ok like this?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2410/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2410/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2410", "html_url": "https://github.com/huggingface/datasets/pull/2410", "diff_url": "https://github.com/huggingface/datasets/pull/2410.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2410.patch", "merged_at": "2021-06-14T17:29:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/2409
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2409/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2409/comments
https://api.github.com/repos/huggingface/datasets/issues/2409/events
https://github.com/huggingface/datasets/pull/2409
903,441,398
MDExOlB1bGxSZXF1ZXN0NjU0Njk3NjA0
2,409
Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-27T09:07:00"
"2021-06-08T16:00:55"
"2021-05-27T09:33:41"
MEMBER
null
As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2409/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2409", "html_url": "https://github.com/huggingface/datasets/pull/2409", "diff_url": "https://github.com/huggingface/datasets/pull/2409.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2409.patch", "merged_at": "2021-05-27T09:33:41" }
true
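A usage sketch; the renamed variable `HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` is inferred from the PR title, and it must be set before `datasets` is imported:

```python
import os

# Datasets smaller than this threshold are loaded in memory rather than
# memory-mapped from disk (the 250 MiB value is an arbitrary example).
os.environ["HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES"] = str(250 * 1024 * 1024)

from datasets import load_dataset

ds = load_dataset("sst", split="train")
```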
https://api.github.com/repos/huggingface/datasets/issues/2408
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2408/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2408/comments
https://api.github.com/repos/huggingface/datasets/issues/2408/events
https://github.com/huggingface/datasets/pull/2408
903,422,648
MDExOlB1bGxSZXF1ZXN0NjU0NjgxMzE4
2,408
Fix head_qa keys
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-27T08:50:19"
"2021-05-27T09:05:37"
"2021-05-27T09:05:36"
MEMBER
null
There were duplicates in the keys, as mentioned in #2382
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2408/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2408", "html_url": "https://github.com/huggingface/datasets/pull/2408", "diff_url": "https://github.com/huggingface/datasets/pull/2408.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2408.patch", "merged_at": "2021-05-27T09:05:36" }
true
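For background on the fix: keys yielded by a dataset script's `_generate_examples` must be unique per split. A minimal sketch of the usual pattern, with a hypothetical JSON-lines file:

```python
import json


# Sketch of a dataset script's example generator (normally a method on a
# GeneratorBasedBuilder subclass); `filepath` is a hypothetical JSON-lines file.
def generate_examples(filepath):
    with open(filepath, encoding="utf-8") as f:
        for idx, line in enumerate(f):
            example = json.loads(line)
            # Yield the enumeration index as the key so keys never collide,
            # even when IDs inside the raw data repeat.
            yield idx, example
```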
https://api.github.com/repos/huggingface/datasets/issues/2407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2407/comments
https://api.github.com/repos/huggingface/datasets/issues/2407/events
https://github.com/huggingface/datasets/issues/2407
903,111,755
MDU6SXNzdWU5MDMxMTE3NTU=
2,407
.map() function got an unexpected keyword argument 'cache_file_name'
{ "login": "cindyxinyiwang", "id": 7390482, "node_id": "MDQ6VXNlcjczOTA0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/7390482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cindyxinyiwang", "html_url": "https://github.com/cindyxinyiwang", "followers_url": "https://api.github.com/users/cindyxinyiwang/followers", "following_url": "https://api.github.com/users/cindyxinyiwang/following{/other_user}", "gists_url": "https://api.github.com/users/cindyxinyiwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/cindyxinyiwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cindyxinyiwang/subscriptions", "organizations_url": "https://api.github.com/users/cindyxinyiwang/orgs", "repos_url": "https://api.github.com/users/cindyxinyiwang/repos", "events_url": "https://api.github.com/users/cindyxinyiwang/events{/privacy}", "received_events_url": "https://api.github.com/users/cindyxinyiwang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558", "Hi ! `cache_file_name` is an argument of the `Dataset.map` method. Can you check that your `dataset` is indeed a `Dataset` object ?\r\n\r\nIf you loaded several splits, then it would actually be a `DatasetDict` (one dataset per split, in a dictionary).\r\nIn this case, since there are several datasets in the dict, the `DatasetDict.map` method requires a `cache_file_names` argument (with an 's'), so that you can provide one file name per split.", "I think you are right. I used cache_file_names={data1: name1, data2: name2} and it works. Thank you!" ]
"2021-05-27T01:54:26"
"2021-05-27T13:46:40"
"2021-05-27T13:46:40"
NONE
null
## Describe the bug I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass the argument 'cache_file_name' to the .map() function, it throws an error saying ".map() function got an unexpected keyword argument 'cache_file_name'". I believe I'm using the latest datasets, 1.6.2. The documentation and the actual code also indicate that there is a 'cache_file_name' argument for the .map() function. Here is the code I use ## Steps to reproduce the bug ```datasets = load_from_disk(dataset_path=my_path) [...] def tokenize_function(examples): return tokenizer(examples[text_column_name]) logger.info("Mapping dataset to tokenized dataset.") tokenized_datasets = datasets.map( tokenize_function, batched=True, num_proc=preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=True, cache_file_name="my_tokenized_file" ) ``` ## Actual results tokenized_datasets = datasets.map( TypeError: map() got an unexpected keyword argument 'cache_file_name' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2407/timeline
null
completed
null
null
false
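The resolution from the thread above, sketched with hypothetical paths and a stub mapping function: `DatasetDict.map` takes `cache_file_names` (plural, one entry per split), while `Dataset.map` takes the singular `cache_file_name`.

```python
from datasets import load_from_disk

datasets = load_from_disk("path/to/dataset")  # a DatasetDict, not a Dataset


def tokenize_function(examples):
    return examples  # stand-in for the tokenizer call in the report


tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    load_from_cache_file=True,
    # One cache file per split; note the plural argument name.
    cache_file_names={split: f"my_tokenized_{split}.arrow" for split in datasets},
)
```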
https://api.github.com/repos/huggingface/datasets/issues/2406
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2406/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2406/comments
https://api.github.com/repos/huggingface/datasets/issues/2406/events
https://github.com/huggingface/datasets/issues/2406
902,643,844
MDU6SXNzdWU5MDI2NDM4NDQ=
2,406
Add guide on using task templates to documentation
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-05-26T16:28:26"
"2022-10-05T17:07:00"
"2022-10-05T17:07:00"
MEMBER
null
Once we have a stable API on the text classification and question answering task templates, add a guide on how to use them in the documentation.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2406/timeline
null
completed
null
null
false
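A hedged sketch of what such a guide would show, assuming the `prepare_for_task` API that shipped alongside task templates and a dataset (here `imdb`) that declares a text-classification template:

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# Cast to the standard schema declared by the task template.
ds = ds.prepare_for_task("text-classification")
print(ds.column_names)  # e.g. ['text', 'labels']
```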
https://api.github.com/repos/huggingface/datasets/issues/2405
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2405/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2405/comments
https://api.github.com/repos/huggingface/datasets/issues/2405/events
https://github.com/huggingface/datasets/pull/2405
901,227,658
MDExOlB1bGxSZXF1ZXN0NjUyNzA2OTk1
2,405
Add dataset tags
{ "login": "OyvindTafjord", "id": 6453366, "node_id": "MDQ6VXNlcjY0NTMzNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/6453366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/OyvindTafjord", "html_url": "https://github.com/OyvindTafjord", "followers_url": "https://api.github.com/users/OyvindTafjord/followers", "following_url": "https://api.github.com/users/OyvindTafjord/following{/other_user}", "gists_url": "https://api.github.com/users/OyvindTafjord/gists{/gist_id}", "starred_url": "https://api.github.com/users/OyvindTafjord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/OyvindTafjord/subscriptions", "organizations_url": "https://api.github.com/users/OyvindTafjord/orgs", "repos_url": "https://api.github.com/users/OyvindTafjord/repos", "events_url": "https://api.github.com/users/OyvindTafjord/events{/privacy}", "received_events_url": "https://api.github.com/users/OyvindTafjord/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-25T18:57:29"
"2021-05-26T16:54:16"
"2021-05-26T16:40:07"
CONTRIBUTOR
null
The dataset tags were provided by Peter Clark following the guide.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2405/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2405/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2405", "html_url": "https://github.com/huggingface/datasets/pull/2405", "diff_url": "https://github.com/huggingface/datasets/pull/2405.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2405.patch", "merged_at": "2021-05-26T16:40:07" }
true
https://api.github.com/repos/huggingface/datasets/issues/2404
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2404/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2404/comments
https://api.github.com/repos/huggingface/datasets/issues/2404/events
https://github.com/huggingface/datasets/pull/2404
901,179,832
MDExOlB1bGxSZXF1ZXN0NjUyNjYzOTcz
2,404
Paperswithcode dataset mapping
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-25T18:14:26"
"2021-05-26T11:21:25"
"2021-05-26T11:17:18"
MEMBER
null
This is a continuation of https://github.com/huggingface/huggingface_hub/pull/43, encoded directly inside dataset cards. As discussed: - `paperswithcode_id: null` when the dataset doesn't exist on paperswithcode's side. - I've added this new key at the end of the yaml instead of ordering all keys alphabetically (pyyaml's default). No strong opinion on that one though
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2404/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2404/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2404", "html_url": "https://github.com/huggingface/datasets/pull/2404", "diff_url": "https://github.com/huggingface/datasets/pull/2404.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2404.patch", "merged_at": "2021-05-26T11:17:18" }
true
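An illustrative dataset-card header with the new key; the tag values are hypothetical, and `paperswithcode_id` is set to `null` when no matching entry exists on Papers with Code:

```yaml
---
languages:
- en
licenses:
- cc-by-4.0
paperswithcode_id: squad  # or `null` when the dataset has no paperswithcode entry
---
```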
https://api.github.com/repos/huggingface/datasets/issues/2403
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2403/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2403/comments
https://api.github.com/repos/huggingface/datasets/issues/2403/events
https://github.com/huggingface/datasets/pull/2403
900,059,014
MDExOlB1bGxSZXF1ZXN0NjUxNjcxMTMw
2,403
Free datasets with cache file in temp dir on exit
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-24T22:15:11"
"2021-05-26T17:25:19"
"2021-05-26T16:39:29"
CONTRIBUTOR
null
This PR properly cleans up the memory-mapped tables that reference the cache files inside the temp dir. Since the built-in `_finalizer` of `TemporaryDirectory` can't be modified, this PR defines its own `TemporaryDirectory` class that accepts a custom clean-up function. Fixes #2402
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2403/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2403/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2403", "html_url": "https://github.com/huggingface/datasets/pull/2403", "diff_url": "https://github.com/huggingface/datasets/pull/2403.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2403.patch", "merged_at": "2021-05-26T16:39:29" }
true
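A minimal sketch of the approach described in the PR, not its actual code: a temp-dir wrapper whose finalizer runs a caller-supplied clean-up function (for example, one that releases memory-mapped Arrow tables) before removing the directory:

```python
import shutil
import tempfile
import weakref


class TemporaryDirectoryWithCleanup:
    """Temporary directory that runs a custom callback before deletion (sketch)."""

    def __init__(self, on_cleanup=None):
        self.name = tempfile.mkdtemp()
        # weakref.finalize must not hold a reference to `self`, so the
        # callback is a staticmethod receiving plain values.
        self._finalizer = weakref.finalize(self, self._cleanup, self.name, on_cleanup)

    def cleanup(self):
        self._finalizer()

    @staticmethod
    def _cleanup(name, on_cleanup):
        if on_cleanup is not None:
            on_cleanup()  # e.g. free memory-mapped tables so Windows can delete the files
        shutil.rmtree(name, ignore_errors=True)
```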
https://api.github.com/repos/huggingface/datasets/issues/2402
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2402/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2402/comments
https://api.github.com/repos/huggingface/datasets/issues/2402/events
https://github.com/huggingface/datasets/issues/2402
900,025,329
MDU6SXNzdWU5MDAwMjUzMjk=
2,402
PermissionError on Windows when using temp dir for caching
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
"2021-05-24T21:22:59"
"2021-05-26T16:39:29"
"2021-05-26T16:39:29"
CONTRIBUTOR
null
Currently, the following code raises a PermissionError on master if working on Windows: ```python # run as a script or call exit() in REPL to initiate the temp dir cleanup from datasets import * d = load_dataset("sst", split="train", keep_in_memory=False) set_caching_enabled(False) d.map(lambda ex: ex) ``` Error stack trace: ``` Traceback (most recent call last): File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 624, in _exitfunc f() File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\weakref.py", line 548, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\tempfile.py", line 799, in _cleanup _shutil.rmtree(name) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 500, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 395, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\shutil.py", line 393, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\Mario\\AppData\\Local\\Temp\\tmp20epyhmq\\cache-87a87ffb5a956e68.arrow' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2402/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2402/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2401
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2401/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2401/comments
https://api.github.com/repos/huggingface/datasets/issues/2401/events
https://github.com/huggingface/datasets/issues/2401
899,910,521
MDU6SXNzdWU4OTk5MTA1MjE=
2,401
load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset"
{ "login": "jonrbates", "id": 15602718, "node_id": "MDQ6VXNlcjE1NjAyNzE4", "avatar_url": "https://avatars.githubusercontent.com/u/15602718?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonrbates", "html_url": "https://github.com/jonrbates", "followers_url": "https://api.github.com/users/jonrbates/followers", "following_url": "https://api.github.com/users/jonrbates/following{/other_user}", "gists_url": "https://api.github.com/users/jonrbates/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonrbates/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonrbates/subscriptions", "organizations_url": "https://api.github.com/users/jonrbates/orgs", "repos_url": "https://api.github.com/users/jonrbates/repos", "events_url": "https://api.github.com/users/jonrbates/events{/privacy}", "received_events_url": "https://api.github.com/users/jonrbates/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I faced the similar problem. Downgrading datasets to 1.5.0 fixed it.", "Thanks for reporting, I'm looking into it", "I just opened #2438 to fix this :)", "Hi ! This has been fixed in the 1.8.0 release of `datasets`" ]
"2021-05-24T18:38:53"
"2021-06-09T09:07:25"
"2021-06-09T09:07:25"
NONE
null
## Describe the bug load_dataset('natural_questions') throws ValueError ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset('natural_questions', split='validation[:10]') ``` ## Expected results Call to load_dataset returns data. ## Actual results ``` Using custom data configuration default Reusing dataset natural_questions (/mnt/d/huggingface/datasets/natural_questions/default/0.0.2/19bc04755018a3ad02ee74f7045cde4ba9b4162cb64450a87030ab786b123b76) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-2-d55ab8a8cc1c> in <module> ----> 1 datasets = load_dataset('natural_questions', split='validation[:10]', cache_dir='/mnt/d/huggingface/datasets') ~/miniconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs) 756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 757 ) --> 758 ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) 759 if save_infos: 760 builder_instance._save_infos() ~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory) 735 736 # Create a dataset for each of the given splits --> 737 datasets = utils.map_nested( 738 partial( 739 self._build_single_dataset, ~/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types) 193 # Singleton 194 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 195 return function(data_struct) 196 197 disable_tqdm = bool(logger.getEffectiveLevel() > INFO) ~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory) 762 763 # Build base dataset --> 764 ds = self._as_dataset( 765 split=split, 766 in_memory=in_memory, ~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory) 838 in_memory=in_memory, 839 ) --> 840 return Dataset(**dataset_kwargs) 841 842 def _post_process(self, dataset: Dataset, resources_paths: Dict[str, str]) -> Optional[Dataset]: ~/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint) 271 assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object" 272 if self.info.features.type != inferred_features.type: --> 273 raise ValueError( 274 "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format( 275 self.info.features, self.info.features.type, inferred_features, inferred_features.type ValueError: External features info don't match the dataset: Got {'id': Value(dtype='string', id=None), 'document': {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'html': Value(dtype='string', id=None), 'tokens': Sequence(feature={'token': Value(dtype='string', id=None), 'is_html': Value(dtype='bool', id=None)}, length=-1, id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': Sequence(feature={'id': Value(dtype='string', id=None), 'long_answer': {'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None)}, 'short_answers': Sequence(feature={'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}, length=-1, id=None), 'yes_no_answer': ClassLabel(num_classes=2, names=['NO', 'YES'], names_file=None, id=None)}, length=-1, id=None)} with type struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<start_token: int64, end_token: int64, start_byte: int64, end_byte: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<title: string, url: string, html: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>>, id: string, question: struct<text: string, tokens: list<item: string>>> but expected something like {'id': Value(dtype='string', id=None), 'document': {'html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'tokens': {'is_html': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'token': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'url': Value(dtype='string', id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'id': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'long_answer': [{'end_byte': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'start_token': Value(dtype='int64', id=None)}], 'short_answers': [{'end_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'end_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}], 'yes_no_answer': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}} with type struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<end_byte: int64, end_token: int64, start_byte: int64, start_token: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<html: string, title: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>, url: string>, id: string, question: struct<text: string, tokens: list<item: string>>> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2 - Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2401/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2401/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2400
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2400/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2400/comments
https://api.github.com/repos/huggingface/datasets/issues/2400/events
https://github.com/huggingface/datasets/issues/2400
899,867,212
MDU6SXNzdWU4OTk4NjcyMTI=
2,400
Concatenate several datasets with removed columns is not working.
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\ndid you fill out the env info section manually or by copy-pasting the output of the `datasets-cli env` command?\r\n\r\nThis code should work without issues on 1.6.2 version (I'm working on master (1.6.2.dev0 version) and can't reproduce this error).", "@mariosasko you are right I was still on `1.5.0`. " ]
"2021-05-24T17:40:15"
"2021-05-25T05:52:01"
"2021-05-25T05:51:59"
MEMBER
null
## Describe the bug You can't concatenate datasets if you have removed columns from them beforehand. ## Steps to reproduce the bug ```python from datasets import load_dataset, concatenate_datasets wikiann = load_dataset("wikiann", "en") wikiann["train"] = wikiann["train"].remove_columns(["langs", "spans"]) wikiann["test"] = wikiann["test"].remove_columns(["langs", "spans"]) assert wikiann["train"].features.type == wikiann["test"].features.type concatenated = concatenate_datasets([wikiann["train"], wikiann["test"]]) ``` ## Expected results A merged dataset. ## Actual results ```python ValueError: External features info don't match the dataset: Got {'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'ner_tags': Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None), 'langs': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'spans': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} with type struct<langs: list<item: string>, ner_tags: list<item: int64>, spans: list<item: string>, tokens: list<item: string>> but expected something like {'ner_tags': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} with type struct<ner_tags: list<item: int64>, tokens: list<item: string>> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: ~1.6.2~ 1.5.0 - Platform: macOS - Python version: 3.8.5 - PyArrow version: 3.0.0
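For reference, a minimal check of the resolution mentioned in the comment thread above; the version threshold comes from that comment and is not verified independently here:

```python
from datasets import load_dataset, concatenate_datasets

# Reported fixed as of datasets 1.6.2 (per the comment thread): upgrade first, e.g.
#   pip install -U "datasets>=1.6.2"
wikiann = load_dataset("wikiann", "en")
train = wikiann["train"].remove_columns(["langs", "spans"])
test = wikiann["test"].remove_columns(["langs", "spans"])
merged = concatenate_datasets([train, test])
assert len(merged) == len(train) + len(test)
```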
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2400/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2400/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2399
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2399/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2399/comments
https://api.github.com/repos/huggingface/datasets/issues/2399/events
https://github.com/huggingface/datasets/pull/2399
899,853,610
MDExOlB1bGxSZXF1ZXN0NjUxNDk0OTc2
2,399
Add env variable for MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-24T17:19:15"
"2021-05-27T09:07:15"
"2021-05-26T16:07:54"
MEMBER
null
Add env variable for `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES`. This makes it possible to turn off the default behavior of loading small datasets in memory (and not caching them). Fix #2387.
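A minimal usage sketch, assuming the variable name matches the PR title, that it is read when `datasets` is imported, and that `0` means "never load in memory" (per the resolution of the #2387 discussion):

```python
import os

# Assumption: must be set before importing datasets, since the config is read at import time.
os.environ["MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES"] = "0"  # 0 = never copy datasets into RAM

from datasets import load_dataset

ds = load_dataset("stas/openwebtext-10k")  # stays memory-mapped on disk, so the cache is reused
```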
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2399/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2399/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2399", "html_url": "https://github.com/huggingface/datasets/pull/2399", "diff_url": "https://github.com/huggingface/datasets/pull/2399.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2399.patch", "merged_at": "2021-05-26T16:07:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/2398
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2398/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2398/comments
https://api.github.com/repos/huggingface/datasets/issues/2398/events
https://github.com/huggingface/datasets/issues/2398
899,511,837
MDU6SXNzdWU4OTk1MTE4Mzc=
2,398
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
{ "login": "anassalamah", "id": 8571003, "node_id": "MDQ6VXNlcjg1NzEwMDM=", "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anassalamah", "html_url": "https://github.com/anassalamah", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "repos_url": "https://api.github.com/users/anassalamah/repos", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "These ranges seem to be valid English. Closing." ]
"2021-05-24T10:03:34"
"2022-10-05T17:13:49"
"2022-10-05T17:13:49"
NONE
null
I used `load_dataset` to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi. ``` from itertools import chain from datasets import load_dataset train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]') val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]') # filtering out examples that are not ar-en translations but ar-hi val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312, 1327), range(1384, 1399), range(1030, 1042)), with_indices=True) ``` * I'm fairly new to using datasets so I might be doing something wrong
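An index-independent alternative to hard-coded ranges, as a sketch; it assumes the optional `langdetect` package and the standard `translation` dict schema of translation datasets:

```python
from langdetect import detect

def looks_arabic(example):
    # Keep only pairs whose "ar" side is actually detected as Arabic.
    try:
        return detect(example["translation"]["ar"]) == "ar"
    except Exception:  # langdetect raises on empty/ambiguous strings
        return False

val_ds = val_ds.filter(looks_arabic)
```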
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2398/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2398/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2397
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2397/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2397/comments
https://api.github.com/repos/huggingface/datasets/issues/2397/events
https://github.com/huggingface/datasets/pull/2397
899,427,378
MDExOlB1bGxSZXF1ZXN0NjUxMTMxMTY0
2,397
Fix number of classes in indic_glue sna.bn dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-24T08:18:55"
"2021-05-25T16:32:16"
"2021-05-25T16:32:16"
MEMBER
null
As reported in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2397/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 1, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2397/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2397", "html_url": "https://github.com/huggingface/datasets/pull/2397", "diff_url": "https://github.com/huggingface/datasets/pull/2397.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2397.patch", "merged_at": "2021-05-25T16:32:16" }
true
https://api.github.com/repos/huggingface/datasets/issues/2395
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2395/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2395/comments
https://api.github.com/repos/huggingface/datasets/issues/2395/events
https://github.com/huggingface/datasets/pull/2395
898,762,730
MDExOlB1bGxSZXF1ZXN0NjUwNTk3NjI0
2,395
`pretty_name` for dataset in YAML tags
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[ { "id": 4564477500, "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution", "name": "dataset contribution", "color": "0e8a16", "default": false, "description": "Contribution to a dataset script" } ]
closed
false
null
[]
null
[]
"2021-05-22T09:24:45"
"2022-09-23T13:29:14"
"2022-09-23T13:29:13"
CONTRIBUTOR
null
I'm updating `pretty_name` for datasets in YAML tags as discussed with @lhoestq. Here are the first 10; please let me know if they look good. If a dataset has 1 config, I've added `pretty_name` as `config_name: full_name_of_dataset`, since config names were `plain_text`, `default`, `squad`, etc. (not so important in this case), whereas when a dataset has >1 configs, I've added `config_name: full_name_of_dataset+config_name` so as to let the user know about the `config` here.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2395/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2395/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2395", "html_url": "https://github.com/huggingface/datasets/pull/2395", "diff_url": "https://github.com/huggingface/datasets/pull/2395.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2395.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2392
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2392/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2392/comments
https://api.github.com/repos/huggingface/datasets/issues/2392/events
https://github.com/huggingface/datasets/pull/2392
898,156,795
MDExOlB1bGxSZXF1ZXN0NjUwMDYxOTE3
2,392
Update text classification template labels in DatasetInfo __post_init__
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-21T15:29:41"
"2021-05-28T11:37:35"
"2021-05-28T11:37:32"
MEMBER
null
This PR implements the idea discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. The main reason for doing so is to avoid duplicating the label definitions in both `DatasetInfo.features` and `DatasetInfo.task_templates`. To avoid storing state in `DatasetInfo.__post_init__`, the current implementation flushes `DatasetInfo.task_templates` before the features are cast in `Dataset.prepare_for_task` (thanks to @mariosasko for this idea!). Here is an example of the current workflow: ```python ds1 = load_dataset("./datasets/emotion/") # cast features and flush templates ds2 = ds1.prepare_for_task("text-classification") assert ds2.info.task_templates is None ``` Note that if users want to pass a `TextClassification` template to `prepare_for_task`, we require them to set `TextClassification.labels` to match the dataset's features corresponding to `label_column`: ```python ds1 = load_dataset("./datasets/emotion/") # TextClassification.labels is None by default => invalid template task = TextClassification(text_column="text", label_column="label") # Raises ValueError ds1.prepare_for_task(task) # Specifying the labels => valid template task = TextClassification(text_column="text", label_column="label", labels=['anger', 'fear', 'joy', 'love', 'sadness', 'surprise']) ds1.prepare_for_task(task) ``` This PR also adds: * New tests + fixed some old tests that weren't testing `assertRaises` properly * A decorator to share docstrings across common functions. This allows us to document `DatasetDict.prepare_for_task` and `Dataset.prepare_for_task` in one place. * Fixes to avoid side-effects from in-place replacements of `DatasetInfo.task_templates` in `DatasetInfo.__post_init__`. Thanks to @lhoestq for figuring this out! * Removal of `FeaturesWithLazyClassLabel` since we now create a new instance of `TextClassification` in `DatasetInfo.__post_init__` and avoid the side-effects first pointed out by @mariosasko ### PR Description from original WIP Hi @yjernite and @lhoestq, here's a first stab at the suggestion discussed in #2389 to update the `labels` of the `TextClassification` template in the `DatasetInfo.__post_init__`. One problem I've spotted is that my current implementation introduces state into the `__post_init__`: * When we call `load_dataset`, `DatasetInfo.features` are the "raw" features without any casting so we can access the column names by the `label_column` specified in `TextClassification` * When we call `Dataset.prepare_for_task` we run into a problem because the `DatasetInfo.features` are first cast into the new schema which triggers a `KeyError` when we update the infos [here](https://github.com/huggingface/datasets/blob/8b2a78520828e0cc13c14a31f413a5395ef25110/src/datasets/arrow_dataset.py#L1959).
Here's an explicit example of what I mean with the stack trace appended below: ```python from datasets import load_dataset # this works ds = load_dataset("emotion") # we can verify the task template is correctly set ds["train"].info.task_templates # returns [TextClassification(labels=('sadness', 'joy', 'love', 'anger', 'fear', 'surprise'), text_column='text', label_column='label')] # but this fails because the _post_init__ is looking for the original column names ds.prepare_for_task("text-classification") ``` ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-4-54a43019b319> in <module> ----> 1 ds.prepare_for_task("text-classification") ~/git/datasets/src/datasets/dataset_dict.py in prepare_for_task(self, task) 807 """ 808 self._check_values_type() --> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()}) ~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0) 807 """ 808 self._check_values_type() --> 809 return DatasetDict({k: dataset.prepare_for_task(task=task) for k, dataset in self.items()}) ~/git/datasets/src/datasets/arrow_dataset.py in prepare_for_task(self, task) 1421 dataset = self.remove_columns(columns_to_drop) 1422 dataset = dataset.rename_columns(column_mapping) -> 1423 dataset = dataset.cast(features=template.features) 1424 return dataset 1425 ~/git/datasets/src/datasets/arrow_dataset.py in cast(self, features, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, num_proc) 970 format = self.format 971 dataset = self.with_format("arrow") --> 972 dataset = dataset.map( 973 lambda t: t.cast(schema), 974 batched=True, ~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1583 1584 if num_proc is None or num_proc == 1: -> 1585 return self._map_single( 1586 function=function, 1587 with_indices=with_indices, ~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 173 } 174 # apply actual function --> 175 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 176 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 177 # re-apply format to the output ~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs) 338 # Call actual function 339 --> 340 out = func(self, *args, **kwargs) 341 342 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset) 1959 if update_data: 1960 # Create new Dataset from buffer or file -> 1961 info = self.info.copy() 1962 info.features = writer._features 1963 if buf_writer is None: ~/git/datasets/src/datasets/info.py in copy(self) 274 275 def copy(self) -> "DatasetInfo": --> 276 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) 277 278 ~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, 
splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes) ~/git/datasets/src/datasets/info.py in __post_init__(self) 174 # The reason is that Dataset.prepare_for_task calls Dataset.cast which converts the 175 # DatasetInfo.features to the new schema and thus template.label_column is no longer a valid key --> 176 object.__setattr__(template, "labels", tuple(self.features[template.label_column].names)) 177 template.label_schema["labels"] = ClassLabel(names=template.labels) 178 self.task_templates[idx] = template KeyError: 'label' ``` What do you think? I did this a bit quickly, so maybe I'm overlooking something obvious :) One thing would be to only update the labels of the task template on load, but this seems a bit hacky IMO
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2392/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2392/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2392", "html_url": "https://github.com/huggingface/datasets/pull/2392", "diff_url": "https://github.com/huggingface/datasets/pull/2392.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2392.patch", "merged_at": "2021-05-28T11:37:32" }
true
https://api.github.com/repos/huggingface/datasets/issues/2391
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2391/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2391/comments
https://api.github.com/repos/huggingface/datasets/issues/2391/events
https://github.com/huggingface/datasets/issues/2391
898,128,099
MDU6SXNzdWU4OTgxMjgwOTk=
2,391
Missing original answers in kilt-TriviaQA
{ "login": "PaulLerner", "id": 25532159, "node_id": "MDQ6VXNlcjI1NTMyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PaulLerner", "html_url": "https://github.com/PaulLerner", "followers_url": "https://api.github.com/users/PaulLerner/followers", "following_url": "https://api.github.com/users/PaulLerner/following{/other_user}", "gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}", "starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions", "organizations_url": "https://api.github.com/users/PaulLerner/orgs", "repos_url": "https://api.github.com/users/PaulLerner/repos", "events_url": "https://api.github.com/users/PaulLerner/events{/privacy}", "received_events_url": "https://api.github.com/users/PaulLerner/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "That could be useful indeed! Feel free to open a PR on the dataset card if you already have some code that runs, otherwise we'll take care of it soon :) ", "I can open a PR but there is 2 details to fix:\r\n- the name for the corresponding key (e.g. `original_answer`)\r\n- how to implement it: I’m not sure what happens when you map `lambda x: {'input': ...}` as it keeps the other keys (e.g. `output`) intact but here since we want to set a nested value (e.g. `x['output']['original_answer']`) I implemented it with a regular function (not lambda), see below\r\n\r\n```py\r\ndef add_original_answer(x, trivia_qa, triviaqa_map):\r\n i = triviaqa_map[x['id']]\r\n x['output']['original_answer'] = trivia_qa['validation'][i]['answer']['value']\r\n return x\r\n```" ]
"2021-05-21T14:57:07"
"2021-06-14T17:29:11"
"2021-06-14T17:29:11"
CONTRIBUTOR
null
I previously opened an issue at https://github.com/facebookresearch/KILT/issues/42, but from @fabiopetroni's answer it seems that the problem comes from HF-datasets. ## Describe the bug The `answer` field in kilt-TriviaQA, e.g. `kilt_tasks['train_triviaqa'][0]['output']['answer']`, contains a list of alternative answers which are accepted for the question. However, it'd be nice to know the original answer to the question (the only fields in `output` are `'answer', 'meta', 'provenance'`). ## How to fix It can be fixed by retrieving the original answer from the original TriviaQA (e.g. `trivia_qa['train'][0]['answer']['value']`), perhaps at the same place as here where one retrieves the questions https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md#loading-the-kilt-knowledge-source-and-task-data cc @yjernite who previously answered an issue about KILT and TriviaQA :)
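Putting the snippet from the comment above together with the README recipe, a sketch of the full fix; it assumes `kilt_tasks`, `trivia_qa` and the `triviaqa_map` id-to-index mapping were built following the kilt_tasks README referenced in this issue:

```python
def add_original_answer(x, trivia_qa, triviaqa_map):
    # Look up the matching TriviaQA row and copy over its canonical answer string.
    i = triviaqa_map[x["id"]]
    x["output"]["original_answer"] = trivia_qa["train"][i]["answer"]["value"]
    return x

kilt_tasks["train_triviaqa"] = kilt_tasks["train_triviaqa"].map(
    add_original_answer,
    fn_kwargs={"trivia_qa": trivia_qa, "triviaqa_map": triviaqa_map},
)
```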
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2391/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2391/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2390
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2390/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2390/comments
https://api.github.com/repos/huggingface/datasets/issues/2390/events
https://github.com/huggingface/datasets/pull/2390
897,903,642
MDExOlB1bGxSZXF1ZXN0NjQ5ODQ0NjQ2
2,390
Add check for task templates on dataset load
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-21T10:16:57"
"2021-05-21T15:49:09"
"2021-05-21T15:49:06"
MEMBER
null
This PR adds a check that the features of a dataset match the schema of each compatible task template.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2390/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2390/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2390", "html_url": "https://github.com/huggingface/datasets/pull/2390", "diff_url": "https://github.com/huggingface/datasets/pull/2390.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2390.patch", "merged_at": "2021-05-21T15:49:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/2389
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2389/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2389/comments
https://api.github.com/repos/huggingface/datasets/issues/2389/events
https://github.com/huggingface/datasets/pull/2389
897,822,270
MDExOlB1bGxSZXF1ZXN0NjQ5Nzc3MDMz
2,389
Insert task templates for text classification
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-21T08:36:26"
"2021-05-28T15:28:58"
"2021-05-28T15:26:28"
MEMBER
null
This PR inserts text-classification templates for datasets with the following properties: * Only one config * At most two features of `(Value, ClassLabel)` type Note that this misses datasets like `sentiment140`, which only have `Value`-type features - these will be handled in a separate PR
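For context, a sketch of what gets inserted into a dataset script (column names are illustrative; the import path follows the task API referenced in #2392):

```python
from datasets.tasks import TextClassification

# For a dataset with one string Value feature and one ClassLabel feature,
# the inserted template pairs the two columns for the text-classification task:
task_templates = [TextClassification(text_column="text", label_column="label")]
```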
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2389/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2389/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2389", "html_url": "https://github.com/huggingface/datasets/pull/2389", "diff_url": "https://github.com/huggingface/datasets/pull/2389.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2389.patch", "merged_at": "2021-05-28T15:26:28" }
true
https://api.github.com/repos/huggingface/datasets/issues/2388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2388/comments
https://api.github.com/repos/huggingface/datasets/issues/2388/events
https://github.com/huggingface/datasets/issues/2388
897,767,470
MDU6SXNzdWU4OTc3Njc0NzA=
2,388
Incorrect URLs for some datasets
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-05-21T07:22:35"
"2021-06-04T17:39:45"
"2021-06-04T17:39:45"
MEMBER
null
## Describe the bug It seems that the URLs for the following datasets are invalid: - [ ] `bn_hate_speech` has been renamed: https://github.com/rezacsedu/Bengali-Hate-Speech-Dataset/commit/c67ecfc4184911e12814f6b36901f9828df8a63a - [ ] `covid_tweets_japanese` has been renamed: http://www.db.info.gifu-u.ac.jp/covid-19-twitter-dataset/ As a result we can no longer load these datasets using `load_dataset`. The simple fix is to rename the URL in the dataset script - will do this asap. ## Steps to reproduce the bug ```python from datasets import load_dataset # pick one of the datasets from the list above ds = load_dataset("bn_hate_speech") ``` ## Expected results Dataset loads without error. ## Actual results ``` Downloading: 3.36kB [00:00, 1.07MB/s] Downloading: 2.03kB [00:00, 678kB/s] Using custom data configuration default Downloading and preparing dataset bn_hate_speech/default (download: 951.48 KiB, generated: 949.84 KiB, post-processed: Unknown size, total: 1.86 MiB) to /Users/lewtun/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/load.py", line 744, in load_dataset builder_instance.download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 574, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/builder.py", line 630, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/lewtun/.cache/huggingface/modules/datasets_modules/datasets/bn_hate_speech/a2dc726e511a2177523301bcad196af05d4d8a2cff30d2769ba8aacc1f5fdb5c/bn_hate_speech.py", line 76, in _split_generators train_path = dl_manager.download_and_extract(_URL) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 287, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 195, in download downloaded_path_or_paths = map_nested( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 218, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 281, in cached_path output_path = get_from_cache( File "/Users/lewtun/miniconda3/envs/hf-hub_eval/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/main/Bengali_%20Hate_Speech_Dataset_Subset.csv ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.6.2.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 3.0.0
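A quick way to confirm a dead link before patching a dataset script; this sketch just probes the URL with `requests`:

```python
import requests

url = (
    "https://raw.githubusercontent.com/rezacsedu/Bengali-Hate-Speech-Dataset/"
    "main/Bengali_%20Hate_Speech_Dataset_Subset.csv"
)
status = requests.head(url, allow_redirects=True).status_code
print(status)  # 404 here confirms the file was moved or renamed upstream
```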
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2388/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2387/comments
https://api.github.com/repos/huggingface/datasets/issues/2387/events
https://github.com/huggingface/datasets/issues/2387
897,566,666
MDU6SXNzdWU4OTc1NjY2NjY=
2,387
datasets 1.6 ignores cache
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:\r\n\r\n> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them)\r\n\r\n", "Hi ! Since `datasets` 1.6.0 we no longer keep small datasets (<250MB) on disk and load them in RAM instead by default. This makes data processing and iterating on data faster. However datasets in RAM currently have no way to reload previous results from the cache (since nothing is written on disk). We are working on making the caching work for datasets in RAM.\r\n\r\nUntil then, I'd recommend passing `keep_in_memory=False` to the calls to `load_dataset` like here:\r\n\r\nhttps://github.com/huggingface/transformers/blob/223943872e8c9c3fc11db3c6e93da07f5177423f/examples/pytorch/language-modeling/run_clm.py#L233\r\n\r\nThis way you say explicitly that you want your dataset to stay on the disk, and it will be able to recover previously computed results from the cache.", "gotcha! thanks Quentin", "OK, It doesn't look like we can use the proposed workaround - see https://github.com/huggingface/transformers/issues/11801\r\n\r\nCould you please add an env var for us to be able to turn off this unwanted in our situation behavior? It is really problematic for dev work, when one needs to restart the training very often and needs a quick startup time. Manual editing of standard scripts is not a practical option when one uses examples.\r\n\r\nThis could also be a problem for tests, which will be slower because of lack of cache, albeit usually we use tiny datasets there. I think we want caching for tests.\r\n\r\nThank you.", "Hi @stas00, \r\n\r\nYou are right: an env variable is needed to turn off this behavior. I am adding it.\r\n\r\nFor the moment there is a config parameter to turn off this behavior: `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None`\r\n\r\nYou can find this info in the docs:\r\n- in the docstring of the parameter `keep_in_memory` of the function [`load_datasets`](https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset):\r\n- in a Note in the docs about [Loading a Dataset](https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub)\r\n\r\n> The default in 🤗Datasets is to memory-map the dataset on drive if its size is larger than datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES (default 250 MiB); otherwise, the dataset is copied in-memory. This behavior can be disabled by setting datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = None, and in this case the dataset is not loaded in memory.", "Yes, but this still requires one to edit the standard example scripts, so if I'm doing that already I just as well can add `keep_in_memory=False`.\r\n\r\nMay be the low hanging fruit is to add `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` env var to match the config, and if the user sets it to 0, then it'll be the same as `keep_in_memory=False` or `datasets.config.MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0`?", "@stas00, however, for the moment, setting the value to `0` is equivalent to the opposite, i.e. `keep_in_memory=True`. 
This means the max size until which I load in memory is 0 bytes.\r\n\r\nTell me if this is logical/convenient, or I should change it.", "In my PR, to turn off current default bahavior, you should set env variable to one of: `{\"\", \"OFF\", \"NO\", \"FALSE\"}`.\r\n\r\nFor example:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=\r\n```", "IMHO, this behaviour is not very intuitive, as 0 is a normal quantity of bytes. So `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` to me reads as don't cache ever.\r\n\r\nAlso \"SIZE_IN_BYTES\" that can take one of `{\"\", \"OFF\", \"NO\", \"FALSE\"}` is also quite odd.\r\n\r\nI think supporting a very simple `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES` that can accept any numerical value to match the name of the variable, requires minimal logic and is very straightforward. \r\n\r\nSo if you could adjust this logic - then `MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=0` is all that's needed to not do in-memory datasets.\r\n\r\nDoes it make sense?", "I understand your point @stas00, as I am not very convinced with current implementation.\r\n\r\nMy concern is: which numerical value should then pass a user who wants `keep_in_memory=True` by default, independently of dataset size? Currently it is `0` for this case.", "That's a good question, and again the normal bytes can be used for that:\r\n```\r\nMAX_IN_MEMORY_DATASET_SIZE_IN_BYTES=1e12 # (~2**40)\r\n```\r\nSince it's unlikely that anybody will have more than 1TB RAM.\r\n\r\nIt's also silly that it uses BYTES and not MBYTES - that level of refinement doesn't seem to be of a practical use in this context.\r\n\r\nNot sure when it was added and if there are back-compat issues here, but perhaps it could be renamed `MAX_IN_MEMORY_DATASET_SIZE` and support 1M, 1G, 1T, etc. \r\n\r\nBut scientific notation is quite intuitive too, as each 000 zeros is the next M, G, T multiplier. Minus the discrepancy of 1024 vs 1000, which adds up. And it is easy to write down `1e12`, as compared to `1099511627776` (2**40). (`1.1e12` is more exact).\r\n", "Great! Thanks, @stas00.\r\n\r\nI am implementing your suggestion to turn off default value when set to `0`.\r\n\r\nFor the other suggestion (allowing different metric prefixes), I will discuss with @lhoestq to agree on its implementation.", "Awesome! Thank you, @albertvillanova!!!\r\n\r\n" ]
"2021-05-21T00:12:58"
"2021-05-26T16:07:54"
"2021-05-26T16:07:54"
MEMBER
null
Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612 Quoting @VictorSanh: > > I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335): > > > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}` > > while the same command with the latest version of datasets (actually starting at `1.6.0`) gives: > > `{'train': [], 'validation': []}` > I also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. the cache is used. To reproduce: ``` USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \ --model_name_or_path gpt2 \ --dataset_name "stas/openwebtext-10k" \ --output_dir output_dir \ --overwrite_output_dir \ --do_train \ --do_eval \ --max_train_samples 1000 \ --max_eval_samples 200 \ --per_device_train_batch_size 4 \ --per_device_eval_batch_size 4 \ --num_train_epochs 1 \ --warmup_steps 8 \ --block_size 64 \ --fp16 \ --report_to none ``` The first time, startup is slow and shows some 5 tqdm bars; it shouldn't do that on subsequent runs. But with `datasets>1.5.0` it rebuilds on every run. @lhoestq
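The workaround suggested in the comments, written out as a minimal sketch (this is the explicit form of the change proposed for `run_clm.py`):

```python
from datasets import load_dataset

# Explicitly keep the dataset on disk so previously computed results are found in the cache:
raw_datasets = load_dataset("stas/openwebtext-10k", keep_in_memory=False)
```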
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2387/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2387/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2386
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2386/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2386/comments
https://api.github.com/repos/huggingface/datasets/issues/2386/events
https://github.com/huggingface/datasets/issues/2386
897,560,049
MDU6SXNzdWU4OTc1NjAwNDk=
2,386
Accessing Arrow dataset cache_files
{ "login": "Mehrad0711", "id": 28717374, "node_id": "MDQ6VXNlcjI4NzE3Mzc0", "avatar_url": "https://avatars.githubusercontent.com/u/28717374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mehrad0711", "html_url": "https://github.com/Mehrad0711", "followers_url": "https://api.github.com/users/Mehrad0711/followers", "following_url": "https://api.github.com/users/Mehrad0711/following{/other_user}", "gists_url": "https://api.github.com/users/Mehrad0711/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mehrad0711/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mehrad0711/subscriptions", "organizations_url": "https://api.github.com/users/Mehrad0711/orgs", "repos_url": "https://api.github.com/users/Mehrad0711/repos", "events_url": "https://api.github.com/users/Mehrad0711/events{/privacy}", "received_events_url": "https://api.github.com/users/Mehrad0711/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks @bhavitvyamalik for referencing the workaround. Setting `keep_in_memory=False` is working." ]
"2021-05-20T23:57:43"
"2021-05-21T19:18:03"
"2021-05-21T19:18:03"
NONE
null
## Describe the bug In datasets 1.5.0 the following code snippet would have printed the cache_files: ``` train_data = load_dataset('conll2003', split='train', cache_dir='data') print(train_data.cache_files[0]['filename']) ``` However, in the newest release (1.6.1), it prints an empty list. I also tried loading the dataset with the `keep_in_memory=True` argument, but `cache_files` is still empty. I was wondering whether this is a bug or whether I need to pass additional arguments to access the cache_files.
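Applying the workaround referenced in the comment above, the original snippet becomes:

```python
from datasets import load_dataset

# keep_in_memory=False pins the dataset to disk, so cache_files is populated again:
train_data = load_dataset("conll2003", split="train", cache_dir="data", keep_in_memory=False)
print(train_data.cache_files[0]["filename"])
```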
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2386/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2386/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2385
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2385/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2385/comments
https://api.github.com/repos/huggingface/datasets/issues/2385/events
https://github.com/huggingface/datasets/pull/2385
897,206,823
MDExOlB1bGxSZXF1ZXN0NjQ5MjM1Mjcy
2,385
update citations
{ "login": "adeepH", "id": 46108405, "node_id": "MDQ6VXNlcjQ2MTA4NDA1", "avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4", "gravatar_id": "", "url": "https://api.github.com/users/adeepH", "html_url": "https://github.com/adeepH", "followers_url": "https://api.github.com/users/adeepH/followers", "following_url": "https://api.github.com/users/adeepH/following{/other_user}", "gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}", "starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adeepH/subscriptions", "organizations_url": "https://api.github.com/users/adeepH/orgs", "repos_url": "https://api.github.com/users/adeepH/repos", "events_url": "https://api.github.com/users/adeepH/events{/privacy}", "received_events_url": "https://api.github.com/users/adeepH/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-20T17:54:08"
"2021-05-21T12:38:18"
"2021-05-21T12:38:18"
CONTRIBUTOR
null
To update citations for [Offenseval_dravidian](https://huggingface.co/datasets/offenseval_dravidian)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2385/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2385/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2385", "html_url": "https://github.com/huggingface/datasets/pull/2385", "diff_url": "https://github.com/huggingface/datasets/pull/2385.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2385.patch", "merged_at": "2021-05-21T12:38:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/2384
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2384/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2384/comments
https://api.github.com/repos/huggingface/datasets/issues/2384/events
https://github.com/huggingface/datasets/pull/2384
896,866,461
MDExOlB1bGxSZXF1ZXN0NjQ4OTI4NTQ0
2,384
Add args description to DatasetInfo
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-20T13:53:10"
"2021-05-22T09:26:16"
"2021-05-22T09:26:14"
MEMBER
null
Closes #2354 I am not sure what `post_processed` and `post_processing_size` correspond to, so have left them empty for now. I also took a guess at some of the other fields like `dataset_size` vs `size_in_bytes`, so might have misunderstood their meaning.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2384/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2384/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2384", "html_url": "https://github.com/huggingface/datasets/pull/2384", "diff_url": "https://github.com/huggingface/datasets/pull/2384.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2384.patch", "merged_at": "2021-05-22T09:26:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/2383
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2383/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2383/comments
https://api.github.com/repos/huggingface/datasets/issues/2383/events
https://github.com/huggingface/datasets/pull/2383
895,779,723
MDExOlB1bGxSZXF1ZXN0NjQ3OTU4MTQ0
2,383
Improve example in rounding docs
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-19T18:59:23"
"2021-05-21T12:53:22"
"2021-05-21T12:36:29"
CONTRIBUTOR
null
Improves the example in the rounding subsection of the Split API docs. With this change, it should be clearer what the difference is between the `closest` and the `pct1_dropremainder` rounding.
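As a quick illustration of the two rounding modes (a minimal sketch; the dataset name and percentage are placeholders, and `ReadInstruction` is the explicit form of the split string API):

```python
from datasets import load_dataset, ReadInstruction

# "closest" (the default) rounds each percent boundary to the nearest example,
# so consecutive slices cover the whole split with no example dropped
ri_closest = ReadInstruction("train", to=33, unit="%", rounding="closest")

# "pct1_dropremainder" treats the split as 100 equally sized 1% blocks and
# drops the remainder, so every 1% slice has exactly the same number of examples
ri_pct1 = ReadInstruction("train", to=33, unit="%", rounding="pct1_dropremainder")

ds = load_dataset("imdb", split=ri_pct1)
```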
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2383/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2383/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2383", "html_url": "https://github.com/huggingface/datasets/pull/2383", "diff_url": "https://github.com/huggingface/datasets/pull/2383.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2383.patch", "merged_at": "2021-05-21T12:36:29" }
true
https://api.github.com/repos/huggingface/datasets/issues/2382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2382/comments
https://api.github.com/repos/huggingface/datasets/issues/2382/events
https://github.com/huggingface/datasets/issues/2382
895,610,216
MDU6SXNzdWU4OTU2MTAyMTY=
2,382
DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en')
{ "login": "helloworld123-lab", "id": 75953751, "node_id": "MDQ6VXNlcjc1OTUzNzUx", "avatar_url": "https://avatars.githubusercontent.com/u/75953751?v=4", "gravatar_id": "", "url": "https://api.github.com/users/helloworld123-lab", "html_url": "https://github.com/helloworld123-lab", "followers_url": "https://api.github.com/users/helloworld123-lab/followers", "following_url": "https://api.github.com/users/helloworld123-lab/following{/other_user}", "gists_url": "https://api.github.com/users/helloworld123-lab/gists{/gist_id}", "starred_url": "https://api.github.com/users/helloworld123-lab/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/helloworld123-lab/subscriptions", "organizations_url": "https://api.github.com/users/helloworld123-lab/orgs", "repos_url": "https://api.github.com/users/helloworld123-lab/repos", "events_url": "https://api.github.com/users/helloworld123-lab/events{/privacy}", "received_events_url": "https://api.github.com/users/helloworld123-lab/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-19T15:49:48"
"2021-05-30T13:26:16"
"2021-05-30T13:26:16"
NONE
null
Hello everyone, I am trying to use the head_qa dataset from [https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en](url) ``` !pip install datasets from datasets import load_dataset dataset = load_dataset( 'head_qa', 'en') ``` When I run the `load_dataset(...)` call above, it throws the following: ``` DuplicatedKeysError Traceback (most recent call last) <ipython-input-6-ea87002d32f0> in <module>() 2 from datasets import load_dataset 3 dataset = load_dataset( ----> 4 'head_qa', 'en') 5 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self) 347 for hash, key in self.hkey_record: 348 if hash in tmp_record: --> 349 raise DuplicatedKeysError(key) 350 else: 351 tmp_record.add(hash) DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 1 Keys should be unique and deterministic in nature ``` How can I fix the error? Thanks
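For reference, the usual fix on the dataset-script side is to yield keys that are guaranteed unique, e.g. an enumeration index instead of a repeated id field. A minimal sketch (the field names are assumptions for illustration, not the actual head_qa schema):

```python
import json

def generate_examples(filepath):
    """Yield (key, example) pairs; keys must be unique and deterministic."""
    with open(filepath, encoding="utf-8") as f:
        data = json.load(f)
    for idx, item in enumerate(data):
        # enumerate() gives a unique key even if the file repeats an id field,
        # which is what triggers the DuplicatedKeysError above
        yield idx, {"question": item["qtext"], "answers": item["answers"]}
```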
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2382/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2381/comments
https://api.github.com/repos/huggingface/datasets/issues/2381/events
https://github.com/huggingface/datasets/pull/2381
895,588,844
MDExOlB1bGxSZXF1ZXN0NjQ3NzkyNDcw
2,381
add dataset card title
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-19T15:30:03"
"2021-05-20T18:51:40"
"2021-05-20T18:51:40"
CONTRIBUTOR
null
A few of them were missed by me earlier; I've added them now.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2381/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2381", "html_url": "https://github.com/huggingface/datasets/pull/2381", "diff_url": "https://github.com/huggingface/datasets/pull/2381.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2381.patch", "merged_at": "2021-05-20T18:51:40" }
true
https://api.github.com/repos/huggingface/datasets/issues/2380
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2380/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2380/comments
https://api.github.com/repos/huggingface/datasets/issues/2380/events
https://github.com/huggingface/datasets/pull/2380
895,367,201
MDExOlB1bGxSZXF1ZXN0NjQ3NTk3NTc3
2,380
maintain YAML structure reading from README
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-19T12:12:07"
"2021-05-19T13:08:38"
"2021-05-19T13:08:38"
CONTRIBUTOR
null
How the YAML used to be loaded into the string earlier (the YAML structure was affected because of this, and the YAML for datasets with multiple configs was not being loaded correctly): ``` annotations_creators: labeled_final: - expert-generated labeled_swap: - expert-generated unlabeled_final: - machine-generated language_creators: - machine-generated languages: - en licenses: - other multilinguality: - monolingual size_categories: labeled_final: - 10K<n<100K labeled_swap: - 10K<n<100K unlabeled_final: - 100K<n<1M source_datasets: - original task_categories: - text-classification - text-scoring task_ids: - semantic-similarity-classification - semantic-similarity-scoring - text-scoring-other-paraphrase-identification ``` How the YAML is loaded into the string now: ``` annotations_creators: labeled_final: - expert-generated labeled_swap: - expert-generated unlabeled_final: - machine-generated language_creators: - machine-generated languages: - en licenses: - other multilinguality: - monolingual size_categories: labeled_final: - 10K<n<100K labeled_swap: - 10K<n<100K unlabeled_final: - 100K<n<1M source_datasets: - original task_categories: - text-classification - text-scoring task_ids: - semantic-similarity-classification - semantic-similarity-scoring - text-scoring-other-paraphrase-identification ```
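A minimal sketch of the underlying idea, reading the README's YAML front matter as one raw block so nested, per-config mappings keep their indentation (the `---` delimiters follow the usual dataset-card convention; this is not necessarily the PR's actual implementation):

```python
import yaml

def read_readme_yaml(path):
    """Return the metadata parsed from the README's leading '---' block."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    # keep the block as raw text so multi-config mappings stay nested
    _, yaml_block, _ = text.split("---", 2)
    return yaml.safe_load(yaml_block)
```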
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2380/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2380/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2380", "html_url": "https://github.com/huggingface/datasets/pull/2380", "diff_url": "https://github.com/huggingface/datasets/pull/2380.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2380.patch", "merged_at": "2021-05-19T13:08:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/2379
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2379/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2379/comments
https://api.github.com/repos/huggingface/datasets/issues/2379/events
https://github.com/huggingface/datasets/pull/2379
895,252,597
MDExOlB1bGxSZXF1ZXN0NjQ3NDk2ODUx
2,379
Disallow duplicate keys in yaml tags
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-19T10:10:07"
"2021-05-19T10:45:32"
"2021-05-19T10:45:31"
MEMBER
null
Make sure that there are no duplicate keys in the YAML tags. I added the check in the YAML tree constructor's method, so that the verification is done at every level of the YAML structure. cc @julien-c
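One common way to implement such a check is a custom `SafeLoader` whose mapping constructor rejects repeated keys at every nesting level; a minimal sketch (not necessarily the exact code in this PR):

```python
import yaml

class NoDuplicateKeyLoader(yaml.SafeLoader):
    def construct_mapping(self, node, deep=False):
        seen = set()
        for key_node, _ in node.value:
            key = self.construct_object(key_node, deep=deep)
            if key in seen:
                # runs for every mapping, so nested duplicates are caught too
                raise ValueError(f"Duplicate key in YAML tags: {key!r}")
            seen.add(key)
        return super().construct_mapping(node, deep)

yaml.load("languages:\n- en\n- fr", Loader=NoDuplicateKeyLoader)  # ok
# yaml.load("a: 1\na: 2", Loader=NoDuplicateKeyLoader)            # raises ValueError
```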
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2379/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2379/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2379", "html_url": "https://github.com/huggingface/datasets/pull/2379", "diff_url": "https://github.com/huggingface/datasets/pull/2379.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2379.patch", "merged_at": "2021-05-19T10:45:31" }
true
https://api.github.com/repos/huggingface/datasets/issues/2376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2376/comments
https://api.github.com/repos/huggingface/datasets/issues/2376/events
https://github.com/huggingface/datasets/pull/2376
894,852,264
MDExOlB1bGxSZXF1ZXN0NjQ3MTU1NDE4
2,376
Improve task api code quality
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-18T23:13:40"
"2021-06-02T20:39:57"
"2021-05-25T15:30:54"
CONTRIBUTOR
null
Improves the code quality of the `TaskTemplate` dataclasses. Changes: * replaces `return NotImplemented` with `raise NotImplementedError` * replaces `sorted` with `len` in the uniqueness check * defines `label2id` and `id2label` in the `TextClassification` template as properties * replaces the `object.__setattr__(self, attr, value)` syntax with the (IMO nicer) `self.__dict__[attr] = value`
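A toy illustration of the last two changes, computed label mappings as properties plus `__dict__` assignment to mutate a frozen dataclass in `__post_init__` (a hypothetical template, not the actual `datasets` code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToyTextClassification:
    labels: tuple = ("neg", "pos")

    def __post_init__(self):
        # frozen dataclasses block normal attribute assignment, but writing
        # to __dict__ directly bypasses the frozen __setattr__
        self.__dict__["labels"] = tuple(sorted(self.labels))

    @property
    def label2id(self):
        return {label: i for i, label in enumerate(self.labels)}

    @property
    def id2label(self):
        return {i: label for i, label in enumerate(self.labels)}

print(ToyTextClassification().label2id)  # {'neg': 0, 'pos': 1}
```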
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2376/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2376", "html_url": "https://github.com/huggingface/datasets/pull/2376", "diff_url": "https://github.com/huggingface/datasets/pull/2376.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2376.patch", "merged_at": "2021-05-25T15:30:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/2375
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2375/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2375/comments
https://api.github.com/repos/huggingface/datasets/issues/2375/events
https://github.com/huggingface/datasets/pull/2375
894,655,157
MDExOlB1bGxSZXF1ZXN0NjQ2OTg2NTcw
2,375
Dataset Streaming
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-18T18:20:00"
"2021-06-23T16:35:02"
"2021-06-23T16:35:01"
MEMBER
null
# Dataset Streaming ## API Current API is ```python from datasets import load_dataset # Load an IterableDataset without downloading data snli = load_dataset("snli", streaming=True) # Access examples by streaming data print(next(iter(snli["train"]))) # {'premise': 'A person on a horse jumps over a broken down airplane.', # 'hypothesis': 'A person is training his horse for a competition.', # 'label': 1} ``` I already implemented a few methods: - IterableDataset.map: apply transforms on-the-fly to the examples - IterableDataset.shuffle: shuffle the data _a la_ TFDS, i.e. with a shuffling buffer - IterableDataset.with_format: set the format to `"torch"` to get a `torch.utils.data.IterableDataset` - merge_datasets: merge two iterable datasets by alternating one or the other (you can specify the probabilities) I would love to have your opinion on the API design :) ## Implementation details ### Streaming Data streaming is done using `fsspec`, which has nice caching features. To make dataset streaming work, I extend the `open` function of dataset scripts to support opening remote files without downloading them entirely. It also works with remote compressed archives (currently only zip is supported): ```python # Get a file-like object by streaming data from a remote file open("https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt") # Get a file-like object by streaming data from a remote compressed archive by using the hop separator "::" open("zip://snli_1.0_train.txt::https://nlp.stanford.edu/projects/snli/snli_1.0.zip") ``` I also extend the `os.path.join` function to support navigation in remote compressed archives, since it has to deal with the `"::"` separator. This separator is used by `fsspec`. Finally, I also added a retry mechanism in case the connection fails during data streaming. ### Transforms An IterableDataset wraps an ExamplesIterable instance. There are different subclasses depending on the transforms we want to apply: - ExamplesIterable: the basic one - MappedExamplesIterable: an iterable with a `map` function applied on the fly - BufferShuffledExamplesIterable: an iterable with a shuffling buffer - CyclingMultiSourcesExamplesIterable: alternates between several ExamplesIterable - RandomlyCyclingMultiSourcesExamplesIterable: randomly alternates between several ExamplesIterable ### DatasetBuilder I use the same builders as usual. I just added a new method `_get_examples_iterable_for_split` to get an ExamplesIterable for a given split. Currently only the GeneratorBasedBuilder and the ArrowBasedBuilder implement it. The BeamBasedBuilder doesn't implement it yet. This means that datasets like wikipedia and natural_questions can't be loaded as IterableDataset for now. ## Other details <s>I may have to make changes in many dataset scripts to use `download` instead of `download_and_extract` when extraction is not needed. This will avoid errors for streaming.</s> EDIT: actually I just check the file extension and do extraction only if needed. EDIT2: It's not possible to stream from .tar.gz files without downloading the file completely. For now I raise an error if one wants to get a streaming dataset based on .tar.gz files. ## TODO usual stuff: - [x] make streaming dependency "aiohttp" optional: `pip install datasets[streaming]` - [x] tests - [x] docs
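Putting the listed methods together, a typical streaming pipeline would look roughly like this (a sketch assuming the API described above lands as-is):

```python
from datasets import load_dataset

train = load_dataset("snli", streaming=True)["train"]

def lowercase(example):
    example["premise"] = example["premise"].lower()
    return example

# transforms are applied lazily, example by example
train = train.map(lowercase)

# approximate shuffling with a fixed-size buffer, TFDS-style
train = train.shuffle(buffer_size=1000, seed=42)

# yields examples usable with a torch DataLoader
train = train.with_format("torch")

print(next(iter(train)))
```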
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2375/reactions", "total_count": 7, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 6, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2375/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2375", "html_url": "https://github.com/huggingface/datasets/pull/2375", "diff_url": "https://github.com/huggingface/datasets/pull/2375.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2375.patch", "merged_at": "2021-06-23T16:35:01" }
true
https://api.github.com/repos/huggingface/datasets/issues/2374
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2374/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2374/comments
https://api.github.com/repos/huggingface/datasets/issues/2374/events
https://github.com/huggingface/datasets/pull/2374
894,579,364
MDExOlB1bGxSZXF1ZXN0NjQ2OTIyMjkw
2,374
add `desc` to `tqdm` in `Dataset.map()`
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-18T16:44:29"
"2021-05-27T15:44:04"
"2021-05-26T14:59:21"
CONTRIBUTOR
null
Fixes #2330. Please let me know if anything else is required for this.
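Usage is simply (a minimal sketch):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

def lowercase(example):
    example["text"] = example["text"].lower()
    return example

# the new `desc` argument labels the tqdm progress bar shown during map()
ds = ds.map(lowercase, desc="Lowercasing the texts")
```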
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2374/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2374/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2374", "html_url": "https://github.com/huggingface/datasets/pull/2374", "diff_url": "https://github.com/huggingface/datasets/pull/2374.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2374.patch", "merged_at": "2021-05-26T14:59:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/2373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2373/comments
https://api.github.com/repos/huggingface/datasets/issues/2373/events
https://github.com/huggingface/datasets/issues/2373
894,499,909
MDU6SXNzdWU4OTQ0OTk5MDk=
2,373
Loading dataset from local path
{ "login": "kolakows", "id": 34172905, "node_id": "MDQ6VXNlcjM0MTcyOTA1", "avatar_url": "https://avatars.githubusercontent.com/u/34172905?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kolakows", "html_url": "https://github.com/kolakows", "followers_url": "https://api.github.com/users/kolakows/followers", "following_url": "https://api.github.com/users/kolakows/following{/other_user}", "gists_url": "https://api.github.com/users/kolakows/gists{/gist_id}", "starred_url": "https://api.github.com/users/kolakows/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kolakows/subscriptions", "organizations_url": "https://api.github.com/users/kolakows/orgs", "repos_url": "https://api.github.com/users/kolakows/repos", "events_url": "https://api.github.com/users/kolakows/events{/privacy}", "received_events_url": "https://api.github.com/users/kolakows/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Version below works, checked again in the docs, and data_files should be a path.\r\n```\r\nds = datasets.load_dataset('my_script.py', \r\n data_files='/data/dir/corpus.txt', \r\n cache_dir='.')\r\n```" ]
"2021-05-18T15:20:50"
"2021-05-18T15:36:36"
"2021-05-18T15:36:35"
NONE
null
I'm trying to load a local dataset with the code below ``` ds = datasets.load_dataset('my_script.py', data_files='corpus.txt', data_dir='/data/dir', cache_dir='.') ``` But internally a BuilderConfig is created, which tries to call getmtime on the data_files string without using data_dir. Is this a bug, or am I not using `load_dataset` correctly? https://github.com/huggingface/datasets/blob/bc61954083f74e6460688202e9f77dde2475319c/src/datasets/builder.py#L153
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2373/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2372
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2372/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2372/comments
https://api.github.com/repos/huggingface/datasets/issues/2372/events
https://github.com/huggingface/datasets/pull/2372
894,496,064
MDExOlB1bGxSZXF1ZXN0NjQ2ODUxODc2
2,372
ConvQuestions benchmark added
{ "login": "PhilippChr", "id": 24608689, "node_id": "MDQ6VXNlcjI0NjA4Njg5", "avatar_url": "https://avatars.githubusercontent.com/u/24608689?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PhilippChr", "html_url": "https://github.com/PhilippChr", "followers_url": "https://api.github.com/users/PhilippChr/followers", "following_url": "https://api.github.com/users/PhilippChr/following{/other_user}", "gists_url": "https://api.github.com/users/PhilippChr/gists{/gist_id}", "starred_url": "https://api.github.com/users/PhilippChr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilippChr/subscriptions", "organizations_url": "https://api.github.com/users/PhilippChr/orgs", "repos_url": "https://api.github.com/users/PhilippChr/repos", "events_url": "https://api.github.com/users/PhilippChr/events{/privacy}", "received_events_url": "https://api.github.com/users/PhilippChr/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-18T15:16:50"
"2021-05-26T10:31:45"
"2021-05-26T10:31:45"
CONTRIBUTOR
null
Hello, I would like to integrate our dataset on conversational QA. The answers are grounded in a knowledge graph (KG). The work was published at CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016). We hope it spurs further research on how to deal with the challenges of factoid conversational QA. Thanks! :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2372/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2372/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2372", "html_url": "https://github.com/huggingface/datasets/pull/2372", "diff_url": "https://github.com/huggingface/datasets/pull/2372.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2372.patch", "merged_at": "2021-05-26T10:31:45" }
true
https://api.github.com/repos/huggingface/datasets/issues/2370
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2370/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2370/comments
https://api.github.com/repos/huggingface/datasets/issues/2370/events
https://github.com/huggingface/datasets/pull/2370
893,606,432
MDExOlB1bGxSZXF1ZXN0NjQ2MDkyNDQy
2,370
Adding HendrycksTest dataset
{ "login": "andyzoujm", "id": 43451571, "node_id": "MDQ6VXNlcjQzNDUxNTcx", "avatar_url": "https://avatars.githubusercontent.com/u/43451571?v=4", "gravatar_id": "", "url": "https://api.github.com/users/andyzoujm", "html_url": "https://github.com/andyzoujm", "followers_url": "https://api.github.com/users/andyzoujm/followers", "following_url": "https://api.github.com/users/andyzoujm/following{/other_user}", "gists_url": "https://api.github.com/users/andyzoujm/gists{/gist_id}", "starred_url": "https://api.github.com/users/andyzoujm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andyzoujm/subscriptions", "organizations_url": "https://api.github.com/users/andyzoujm/orgs", "repos_url": "https://api.github.com/users/andyzoujm/repos", "events_url": "https://api.github.com/users/andyzoujm/events{/privacy}", "received_events_url": "https://api.github.com/users/andyzoujm/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-17T18:53:05"
"2021-05-31T16:37:13"
"2021-05-31T16:37:13"
CONTRIBUTOR
null
Adding the Hendrycks test from https://arxiv.org/abs/2009.03300. I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry is loaded in a row of length 6). The dataset itself loads just fine. Hope you can kindly help! Thank you!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2370/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2370/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2370", "html_url": "https://github.com/huggingface/datasets/pull/2370", "diff_url": "https://github.com/huggingface/datasets/pull/2370.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2370.patch", "merged_at": "2021-05-31T16:37:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/2369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2369/comments
https://api.github.com/repos/huggingface/datasets/issues/2369/events
https://github.com/huggingface/datasets/pull/2369
893,554,153
MDExOlB1bGxSZXF1ZXN0NjQ2MDQ5NDM1
2,369
correct labels of conll2003
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/users/philschmid/followers", "following_url": "https://api.github.com/users/philschmid/following{/other_user}", "gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}", "starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/philschmid/subscriptions", "organizations_url": "https://api.github.com/users/philschmid/orgs", "repos_url": "https://api.github.com/users/philschmid/repos", "events_url": "https://api.github.com/users/philschmid/events{/privacy}", "received_events_url": "https://api.github.com/users/philschmid/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-17T17:37:54"
"2021-05-18T08:27:42"
"2021-05-18T08:27:42"
MEMBER
null
# What does this PR do? It fixes/extends the `ner_tags` for conll2003 to include all of them. Paper reference: https://arxiv.org/pdf/cs/0306050v1.pdf Model reference: https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json
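After the fix, the full IOB2 label set should be visible on the `ner_tags` feature, roughly like this (a sketch; the printed label order reflects the conll2003 dataset card):

```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
# ClassLabel names for the NER column
print(ds.features["ner_tags"].feature.names)
# ['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC', 'B-MISC', 'I-MISC']
```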
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2369/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2369", "html_url": "https://github.com/huggingface/datasets/pull/2369", "diff_url": "https://github.com/huggingface/datasets/pull/2369.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2369.patch", "merged_at": "2021-05-18T08:27:42" }
true
https://api.github.com/repos/huggingface/datasets/issues/2368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2368/comments
https://api.github.com/repos/huggingface/datasets/issues/2368/events
https://github.com/huggingface/datasets/pull/2368
893,411,076
MDExOlB1bGxSZXF1ZXN0NjQ1OTI5NzM0
2,368
Allow "other-X" in licenses
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-17T14:47:54"
"2021-05-17T16:36:27"
"2021-05-17T16:36:27"
CONTRIBUTOR
null
This PR allows "other-X" licenses during metadata validation. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2368/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2368", "html_url": "https://github.com/huggingface/datasets/pull/2368", "diff_url": "https://github.com/huggingface/datasets/pull/2368.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2368.patch", "merged_at": "2021-05-17T16:36:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/2367
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2367/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2367/comments
https://api.github.com/repos/huggingface/datasets/issues/2367/events
https://github.com/huggingface/datasets/pull/2367
893,317,427
MDExOlB1bGxSZXF1ZXN0NjQ1ODUxNTE0
2,367
Remove getchildren from hyperpartisan news detection
{ "login": "ghomasHudson", "id": 13795113, "node_id": "MDQ6VXNlcjEzNzk1MTEz", "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghomasHudson", "html_url": "https://github.com/ghomasHudson", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-17T13:10:37"
"2021-05-17T14:07:13"
"2021-05-17T14:07:13"
CONTRIBUTOR
null
`Element.getchildren()` is now deprecated in the ElementTree library (and removed in Python 3.9, I think, so it still passes the automated tests, which use 3.6, but for those of us on bleeding-edge distros it now fails). https://bugs.python.org/issue29209
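The drop-in replacement is to iterate the element directly (or wrap it in `list(...)`), which works on every Python 3 version:

```python
import xml.etree.ElementTree as ET

root = ET.fromstring("<articles><article id='1'/><article id='2'/></articles>")

# children = root.getchildren()  # deprecated, removed in Python 3.9
children = list(root)            # portable replacement

for child in root:
    print(child.tag, child.attrib)
```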
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2367/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2367/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2367", "html_url": "https://github.com/huggingface/datasets/pull/2367", "diff_url": "https://github.com/huggingface/datasets/pull/2367.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2367.patch", "merged_at": "2021-05-17T14:07:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/2366
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2366/comments
https://api.github.com/repos/huggingface/datasets/issues/2366/events
https://github.com/huggingface/datasets/issues/2366
893,185,266
MDU6SXNzdWU4OTMxODUyNjY=
2,366
Json loader fails if user-specified features don't match the json data fields order
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-05-17T10:26:08"
"2021-06-16T10:47:49"
"2021-06-16T10:47:49"
MEMBER
null
If you do ```python dataset = load_dataset("json", data_files=data_files, features=features) ``` Then, depending on the order of the fields in the JSON data, it fails: ```python [...] ~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 94 if self.config.schema: 95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT ---> 96 pa_table = pa_table.cast(self.config.schema) 97 yield i, pa_table [...] ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens'] ``` This is because one must first re-order the columns of the table to match the `self.config.schema` before calling cast. One way to fix the `cast` would be to replace it with: ```python # reorder the arrays if necessary + cast to schema # we can't simply use .cast here because we may need to change the order of the columns pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema) ```
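A self-contained reproduction of the reorder-then-cast idea, with a toy table and schema just to show the mechanics:

```python
import pyarrow as pa

table = pa.table({"ner_tags": [[0, 1]], "tokens": [["hello", "world"]]})
schema = pa.schema([("tokens", pa.list_(pa.string())), ("ner_tags", pa.list_(pa.int64()))])

# table.cast(schema) fails here because the column order differs;
# rebuilding the table in schema order avoids that
fixed = pa.Table.from_arrays([table[name] for name in schema.names], schema=schema)
print(fixed.schema.names)  # ['tokens', 'ner_tags']
```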
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2366/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2365/comments
https://api.github.com/repos/huggingface/datasets/issues/2365/events
https://github.com/huggingface/datasets/issues/2365
893,179,697
MDU6SXNzdWU4OTMxNzk2OTc=
2,365
Missing ClassLabel encoding in Json loader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/5", "html_url": "https://github.com/huggingface/datasets/milestone/5", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/5/labels", "id": 6808903, "node_id": "MDk6TWlsZXN0b25lNjgwODkwMw==", "number": 5, "title": "1.9", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 0, "closed_issues": 12, "state": "closed", "created_at": "2021-05-31T16:13:06", "updated_at": "2021-07-12T14:12:00", "due_on": "2021-07-08T07:00:00", "closed_at": "2021-07-09T05:50:07" }
[]
"2021-05-17T10:19:10"
"2021-06-28T15:05:34"
"2021-06-28T15:05:34"
MEMBER
null
Currently, if you want to load a JSON dataset this way ```python dataset = load_dataset("json", data_files=data_files, features=features) ``` Then, if your features have ClassLabel types and your JSON data needs class label encoding (i.e. if the labels in the JSON files are strings and not integers), it fails: ```python [...] ~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 94 if self.config.schema: 95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT ---> 96 pa_table = pa_table.cast(self.config.schema) 97 yield i, pa_table [...] ArrowInvalid: Failed to parse string: 'O' as a scalar of type int64 ``` This is because it just tries to cast the string data to integers, without applying the str->int mapping first. The current workaround is to do this instead: ```python dataset = load_dataset("json", data_files=data_files) dataset = dataset.map(features.encode_example, features=features) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2365/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2365/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2364
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2364/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2364/comments
https://api.github.com/repos/huggingface/datasets/issues/2364/events
https://github.com/huggingface/datasets/pull/2364
892,420,500
MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx
2,364
README updated for SNLI, MNLI
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-15T11:37:59"
"2021-05-17T14:14:27"
"2021-05-17T13:34:19"
CONTRIBUTOR
null
Closes #2275. Mentioned the -1 labels in MNLI and SNLI and how they should be removed before training. @lhoestq the `check_code_quality` test might fail for MNLI, as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses'.
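The preprocessing recommended in the updated cards boils down to a single filter (a minimal sketch):

```python
from datasets import load_dataset

snli = load_dataset("snli", split="train")
# examples without a gold label are marked with -1 and should be dropped
snli = snli.filter(lambda example: example["label"] != -1)
```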
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2364/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2364/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2364", "html_url": "https://github.com/huggingface/datasets/pull/2364", "diff_url": "https://github.com/huggingface/datasets/pull/2364.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2364.patch", "merged_at": "2021-05-17T13:34:18" }
true
https://api.github.com/repos/huggingface/datasets/issues/2362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2362/comments
https://api.github.com/repos/huggingface/datasets/issues/2362/events
https://github.com/huggingface/datasets/pull/2362
892,100,749
MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw
2,362
Fix web_nlg metadata
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-14T17:15:07"
"2021-05-17T13:44:17"
"2021-05-17T13:42:28"
MEMBER
null
Our metadata storage system does not support `.` inside keys. cc @Pierrci
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2362/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2362/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2362", "html_url": "https://github.com/huggingface/datasets/pull/2362", "diff_url": "https://github.com/huggingface/datasets/pull/2362.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2362.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2361/comments
https://api.github.com/repos/huggingface/datasets/issues/2361/events
https://github.com/huggingface/datasets/pull/2361
891,982,808
MDExOlB1bGxSZXF1ZXN0NjQ0NzYzNTU4
2,361
Preserve dtype for numpy/torch/tf/jax arrays
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-14T14:45:23"
"2021-08-17T08:30:04"
"2021-08-17T08:30:04"
CONTRIBUTOR
null
Fixes #625. This lets the user preserve the dtype of a NumPy array when it is converted to a PyArrow array; previously the dtype was lost in the numpy array -> list -> pyarrow array conversion.
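A quick demonstration of the dtype loss this PR addresses, using plain NumPy and PyArrow (independent of `datasets` itself):

```python
import numpy as np
import pyarrow as pa

arr = np.arange(3, dtype=np.int32)

print(pa.array(arr.tolist()).type)  # int64 -- the list round-trip loses int32
print(pa.array(arr).type)           # int32 -- direct conversion preserves it
```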
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2361/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2361/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2361", "html_url": "https://github.com/huggingface/datasets/pull/2361", "diff_url": "https://github.com/huggingface/datasets/pull/2361.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2361.patch", "merged_at": "2021-08-17T08:30:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/2359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2359/comments
https://api.github.com/repos/huggingface/datasets/issues/2359/events
https://github.com/huggingface/datasets/issues/2359
891,946,017
MDU6SXNzdWU4OTE5NDYwMTc=
2,359
Allow model labels to be passed during task preparation
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "We now have the `align_labels_with_mapping` method in the API for this purpose." ]
"2021-05-14T13:58:28"
"2022-10-05T17:37:22"
"2022-10-05T17:37:22"
MEMBER
null
Models have a config with label2id, and we have the same for datasets with the ClassLabel feature type. At some point either the model or the dataset must sync with the other. It would be great to do that on the dataset side. For example, for sentiment classification on Amazon reviews you could have these labels: - "1 star", "2 stars", "3 stars", "4 stars", "5 stars" - "1", "2", "3", "4", "5" Some models may use the first set, while other models use the second set. Here in the `TextClassification` class, the user can only specify one set of labels, while many models could actually be compatible but have different sets of labels. Should we allow users to pass a list of compatible label sets? Then in terms of API, users could use `dataset.prepare_for_task("text-classification", labels=model.labels)` or something like that. The label set could also be the same but not in the same order. For NLI, for example, some models use `["neutral", "entailment", "contradiction"]` and some others use `["neutral", "contradiction", "entailment"]`, so we should take care of updating the order of the labels in the dataset to match the label order of the model. Let me know what you think! This can be done in a future PR. _Originally posted by @lhoestq in https://github.com/huggingface/datasets/pull/2255#discussion_r632412792_
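A minimal sketch of the label-alignment idea, using the `align_labels_with_mapping` method mentioned in the comments above. The model's label2id mapping here is illustrative, not taken from a specific checkpoint.

```python
from datasets import load_dataset

ds = load_dataset("glue", "mnli", split="train")

# Suppose the model was trained with this (hypothetical) label order:
model_label2id = {"contradiction": 0, "neutral": 1, "entailment": 2}

# Reorder the dataset's ClassLabel ids to match the model's mapping.
ds = ds.align_labels_with_mapping(model_label2id, "label")
```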
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2359/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2358
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2358/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2358/comments
https://api.github.com/repos/huggingface/datasets/issues/2358/events
https://github.com/huggingface/datasets/pull/2358
891,269,577
MDExOlB1bGxSZXF1ZXN0NjQ0MTYyOTY2
2,358
Roman Urdu Stopwords List
{ "login": "devzohaib", "id": 58664161, "node_id": "MDQ6VXNlcjU4NjY0MTYx", "avatar_url": "https://avatars.githubusercontent.com/u/58664161?v=4", "gravatar_id": "", "url": "https://api.github.com/users/devzohaib", "html_url": "https://github.com/devzohaib", "followers_url": "https://api.github.com/users/devzohaib/followers", "following_url": "https://api.github.com/users/devzohaib/following{/other_user}", "gists_url": "https://api.github.com/users/devzohaib/gists{/gist_id}", "starred_url": "https://api.github.com/users/devzohaib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/devzohaib/subscriptions", "organizations_url": "https://api.github.com/users/devzohaib/orgs", "repos_url": "https://api.github.com/users/devzohaib/repos", "events_url": "https://api.github.com/users/devzohaib/events{/privacy}", "received_events_url": "https://api.github.com/users/devzohaib/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-13T18:29:27"
"2021-05-19T08:50:43"
"2021-05-17T14:05:10"
NONE
null
A list of the most frequently used Roman Urdu words with different spellings and usages. This is a very basic effort to collect some basic stopwords for Roman Urdu to help the analysis of text data in Roman Urdu, which makes up a huge part of the daily internet interaction of Roman Urdu users.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2358/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2358/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2358", "html_url": "https://github.com/huggingface/datasets/pull/2358", "diff_url": "https://github.com/huggingface/datasets/pull/2358.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2358.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2357
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2357/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2357/comments
https://api.github.com/repos/huggingface/datasets/issues/2357/events
https://github.com/huggingface/datasets/pull/2357
890,595,693
MDExOlB1bGxSZXF1ZXN0NjQzNTk0NDcz
2,357
Adding Microsoft CodeXGlue Datasets
{ "login": "ncoop57", "id": 7613470, "node_id": "MDQ6VXNlcjc2MTM0NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/7613470?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ncoop57", "html_url": "https://github.com/ncoop57", "followers_url": "https://api.github.com/users/ncoop57/followers", "following_url": "https://api.github.com/users/ncoop57/following{/other_user}", "gists_url": "https://api.github.com/users/ncoop57/gists{/gist_id}", "starred_url": "https://api.github.com/users/ncoop57/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ncoop57/subscriptions", "organizations_url": "https://api.github.com/users/ncoop57/orgs", "repos_url": "https://api.github.com/users/ncoop57/repos", "events_url": "https://api.github.com/users/ncoop57/events{/privacy}", "received_events_url": "https://api.github.com/users/ncoop57/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-13T00:43:01"
"2021-06-08T09:29:57"
"2021-06-08T09:29:57"
NONE
null
Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in PR #997 by the awesome @madlag. However, that PR has been stale for a while now, so I spoke with @lhoestq about finishing up the final mile and he told me to open a new PR with the final changes :smile:. I believe I've addressed all of the changes still left to do in the old PR, except for the change to the languages. I believe the READMEs should list the different programming languages used rather than just the tag "code": when searching for datasets, SE researchers may be looking for a specific programming language, so being able to quickly filter will be very valuable. Let me know what you think of that, or if you still believe it should be the "code" tag @lhoestq.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2357/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2357/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2357", "html_url": "https://github.com/huggingface/datasets/pull/2357", "diff_url": "https://github.com/huggingface/datasets/pull/2357.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2357.patch", "merged_at": "2021-06-08T09:29:57" }
true
https://api.github.com/repos/huggingface/datasets/issues/2355
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2355/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2355/comments
https://api.github.com/repos/huggingface/datasets/issues/2355/events
https://github.com/huggingface/datasets/pull/2355
890,484,408
MDExOlB1bGxSZXF1ZXN0NjQzNDk5NTIz
2,355
normalized TOCs and titles in data cards
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-12T20:59:59"
"2021-05-14T13:23:12"
"2021-05-14T13:23:12"
MEMBER
null
I started fixing some of the READMEs that were failing the tests introduced by @gchhablani, but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also had different versions of the Table of Contents. This PR normalizes all of them to the newer version.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2355/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2355/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2355", "html_url": "https://github.com/huggingface/datasets/pull/2355", "diff_url": "https://github.com/huggingface/datasets/pull/2355.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2355.patch", "merged_at": "2021-05-14T13:23:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/2354
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2354/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2354/comments
https://api.github.com/repos/huggingface/datasets/issues/2354/events
https://github.com/huggingface/datasets/issues/2354
890,439,523
MDU6SXNzdWU4OTA0Mzk1MjM=
2,354
Document DatasetInfo attributes
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false } ]
null
[]
"2021-05-12T20:01:29"
"2021-05-22T09:26:14"
"2021-05-22T09:26:14"
MEMBER
null
**Is your feature request related to a problem? Please describe.** As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2354/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2354/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2353
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2353/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2353/comments
https://api.github.com/repos/huggingface/datasets/issues/2353/events
https://github.com/huggingface/datasets/pull/2353
890,296,262
MDExOlB1bGxSZXF1ZXN0NjQzMzM4MDcz
2,353
Update README validation rules
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-12T16:57:26"
"2021-05-14T08:56:06"
"2021-05-14T08:56:06"
CONTRIBUTOR
null
This PR allows unexpected subsections under third-level headings. All except `Contributions`. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2353/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2353/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2353", "html_url": "https://github.com/huggingface/datasets/pull/2353", "diff_url": "https://github.com/huggingface/datasets/pull/2353.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2353.patch", "merged_at": "2021-05-14T08:56:06" }
true
https://api.github.com/repos/huggingface/datasets/issues/2352
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2352/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2352/comments
https://api.github.com/repos/huggingface/datasets/issues/2352/events
https://github.com/huggingface/datasets/pull/2352
889,810,100
MDExOlB1bGxSZXF1ZXN0NjQyOTI4NTgz
2,352
Set to_json default to JSON lines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-12T08:19:25"
"2021-05-21T09:01:14"
"2021-05-21T09:01:13"
MEMBER
null
With this PR, the method `Dataset.to_json`: - is added to the docs - defaults to JSON lines
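A usage sketch of the behaviour described above. `lines=False` is assumed to be forwarded to the underlying JSON writer to produce a single JSON document instead of JSON Lines; the file paths are placeholders.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": ["x", "y"]})

ds.to_json("data.jsonl")              # one JSON object per line (the new default)
ds.to_json("data.json", lines=False)  # a single JSON array instead
```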
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2352/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2352/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2352", "html_url": "https://github.com/huggingface/datasets/pull/2352", "diff_url": "https://github.com/huggingface/datasets/pull/2352.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2352.patch", "merged_at": "2021-05-21T09:01:13" }
true
https://api.github.com/repos/huggingface/datasets/issues/2351
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2351/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2351/comments
https://api.github.com/repos/huggingface/datasets/issues/2351/events
https://github.com/huggingface/datasets/pull/2351
889,584,953
MDExOlB1bGxSZXF1ZXN0NjQyNzI5NDIz
2,351
Simplify faiss index save
{ "login": "Guitaricet", "id": 2821124, "node_id": "MDQ6VXNlcjI4MjExMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Guitaricet", "html_url": "https://github.com/Guitaricet", "followers_url": "https://api.github.com/users/Guitaricet/followers", "following_url": "https://api.github.com/users/Guitaricet/following{/other_user}", "gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}", "starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions", "organizations_url": "https://api.github.com/users/Guitaricet/orgs", "repos_url": "https://api.github.com/users/Guitaricet/repos", "events_url": "https://api.github.com/users/Guitaricet/events{/privacy}", "received_events_url": "https://api.github.com/users/Guitaricet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-12T03:54:10"
"2021-05-17T13:41:41"
"2021-05-17T13:41:41"
CONTRIBUTOR
null
Fixes #2350. In some cases, Faiss GPU index objects have neither "device" nor "getDevice". Possibly this happens when some part of the index is computed on the CPU. In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that the `OPQ` or `PQ` transforms cause it. I propose, instead of using the index object to get the device, to infer it from the `FaissIndex.device` field, as is done in `.add_vectors`. Here we assume that `.device` always corresponds to the index placement, which seems reasonable.
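A rough sketch of the proposed change, not the exact patch; the helper name and `device` parameter are hypothetical, while `faiss.index_gpu_to_cpu` is the real Faiss API for moving an index back to the CPU.

```python
import faiss

def _index_for_saving(faiss_index, device):
    # If the index lives on a GPU (device > -1), copy it back to the CPU before
    # serialization, since faiss.write_index cannot serialize GPU indexes.
    if device is not None and device > -1:
        return faiss.index_gpu_to_cpu(faiss_index)
    return faiss_index
```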
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2351/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2351/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2351", "html_url": "https://github.com/huggingface/datasets/pull/2351", "diff_url": "https://github.com/huggingface/datasets/pull/2351.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2351.patch", "merged_at": "2021-05-17T13:41:41" }
true
https://api.github.com/repos/huggingface/datasets/issues/2350
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2350/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2350/comments
https://api.github.com/repos/huggingface/datasets/issues/2350/events
https://github.com/huggingface/datasets/issues/2350
889,580,247
MDU6SXNzdWU4ODk1ODAyNDc=
2,350
`FaissIndex.save` throws error on GPU
{ "login": "Guitaricet", "id": 2821124, "node_id": "MDQ6VXNlcjI4MjExMjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2821124?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Guitaricet", "html_url": "https://github.com/Guitaricet", "followers_url": "https://api.github.com/users/Guitaricet/followers", "following_url": "https://api.github.com/users/Guitaricet/following{/other_user}", "gists_url": "https://api.github.com/users/Guitaricet/gists{/gist_id}", "starred_url": "https://api.github.com/users/Guitaricet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Guitaricet/subscriptions", "organizations_url": "https://api.github.com/users/Guitaricet/orgs", "repos_url": "https://api.github.com/users/Guitaricet/repos", "events_url": "https://api.github.com/users/Guitaricet/events{/privacy}", "received_events_url": "https://api.github.com/users/Guitaricet/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Just in case, this is a workaround that I use in my code and it seems to do the job.\r\n\r\n```python\r\nif use_gpu_index:\r\n data[\"train\"]._indexes[\"text_emb\"].faiss_index = faiss.index_gpu_to_cpu(data[\"train\"]._indexes[\"text_emb\"].faiss_index)\r\n```" ]
"2021-05-12T03:41:56"
"2021-05-17T13:41:41"
"2021-05-17T13:41:41"
CONTRIBUTOR
null
## Describe the bug After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error. ``` File "index_wikipedia.py", line 119, in <module> data["train"].save_faiss_index("text_emb", index_save_path) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 470, in save_faiss_index index.save(file) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/datasets/search.py", line 334, in save faiss.write_index(index, str(file)) File "/home/vlialin/miniconda3/envs/cat/lib/python3.8/site-packages/faiss/swigfaiss_avx2.py", line 5654, in write_index return _swigfaiss.write_index(*args) RuntimeError: Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /root/miniconda3/conda-bld/faiss-pkg_1613235005464/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index ``` ## Steps to reproduce the bug Any dataset will do, I just selected a familiar one. ```python import numpy as np import datasets INDEX_STR = "OPQ16_128,IVF512,PQ32" INDEX_SAVE_PATH = "will_not_save.faiss" data = datasets.load_dataset("Fraser/news-category-dataset", split=f"train[:10000]") def encode(item): return {"text_emb": np.random.randn(768).astype(np.float32)} data = data.map(encode) data.add_faiss_index(column="text_emb", string_factory=INDEX_STR, train_size=10_000, device=0) data.save_faiss_index("text_emb", INDEX_SAVE_PATH) ``` ## Expected results Saving the index ## Actual results Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) ... don't know how to serialize this type of index ## Environment info - `datasets` version: 1.6.2 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No I will be proposing a fix in a couple of minutes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2350/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2350/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2349
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2349/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2349/comments
https://api.github.com/repos/huggingface/datasets/issues/2349/events
https://github.com/huggingface/datasets/pull/2349
888,586,018
MDExOlB1bGxSZXF1ZXN0NjQxNzYzNzg3
2,349
Update task_ids for Ascent KB
{ "login": "phongnt570", "id": 6749421, "node_id": "MDQ6VXNlcjY3NDk0MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phongnt570", "html_url": "https://github.com/phongnt570", "followers_url": "https://api.github.com/users/phongnt570/followers", "following_url": "https://api.github.com/users/phongnt570/following{/other_user}", "gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}", "starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions", "organizations_url": "https://api.github.com/users/phongnt570/orgs", "repos_url": "https://api.github.com/users/phongnt570/repos", "events_url": "https://api.github.com/users/phongnt570/events{/privacy}", "received_events_url": "https://api.github.com/users/phongnt570/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-11T20:44:33"
"2021-05-17T10:53:14"
"2021-05-17T10:48:34"
CONTRIBUTOR
null
This "other-other-knowledge-base" task is better suited for the dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2349/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2349/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2349", "html_url": "https://github.com/huggingface/datasets/pull/2349", "diff_url": "https://github.com/huggingface/datasets/pull/2349.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2349.patch", "merged_at": "2021-05-17T10:48:34" }
true
https://api.github.com/repos/huggingface/datasets/issues/2348
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2348/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2348/comments
https://api.github.com/repos/huggingface/datasets/issues/2348/events
https://github.com/huggingface/datasets/pull/2348
887,927,737
MDExOlB1bGxSZXF1ZXN0NjQxMTMwOTM4
2,348
Add tests for dataset cards
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-11T17:14:27"
"2021-05-21T12:10:47"
"2021-05-21T12:10:47"
CONTRIBUTOR
null
Adding tests for dataset cards. This PR will potentially remove the scripts currently used for dataset tags and README validation. Additionally, this will allow testing dataset READMEs by providing the name as follows: ```bash pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist] ``` and ```bash pytest tests/test_dataset_cards.py::test_readme_content[fashion_mnist] ``` or a combined test as: ```bash pytest tests/test_dataset_cards.py::test_dataset_card[fashion_mnist] ``` @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2348/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2348/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2348", "html_url": "https://github.com/huggingface/datasets/pull/2348", "diff_url": "https://github.com/huggingface/datasets/pull/2348.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2348.patch", "merged_at": "2021-05-21T12:10:47" }
true
https://api.github.com/repos/huggingface/datasets/issues/2347
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2347/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2347/comments
https://api.github.com/repos/huggingface/datasets/issues/2347/events
https://github.com/huggingface/datasets/issues/2347
887,404,868
MDU6SXNzdWU4ODc0MDQ4Njg=
2,347
Add an API to access the language and pretty name of a dataset
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! With @bhavitvyamalik we discussed about having something like\r\n```python\r\nfrom datasets import load_dataset_card\r\n\r\ndataset_card = load_dataset_card(\"squad\")\r\nprint(dataset_card.metadata.pretty_name)\r\n# Stanford Question Answering Dataset (SQuAD)\r\nprint(dataset_card.metadata.languages)\r\n# [\"en\"]\r\n\r\n```\r\nWhat do you think ?\r\n\r\nI don't know if you already have a way to load the model tags in `transformers` but we can agree on the API to have something consistent.\r\n\r\nAlso note that the pretty name would only be used to show users something prettier than a dataset id, but in the end the source of truth will stay the dataset id (here `squad`).", "That works for me!", "maybe use the hub-backed dataset_info method? (so there's only one parser of README.md metadata)?", "What dataset_info method are you talking about @julien-c ? In `huggingface_hub` I can only see `model_info`.", "hmm the equivalent method in `datasets` (which could go into `huggingface_hub` at some point)", "Indeed, this info can now be fetched with `huggingface_hub.dataset_info`, so I think we can close this issue." ]
"2021-05-11T14:10:08"
"2022-10-05T17:16:54"
"2022-10-05T17:16:53"
MEMBER
null
It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2347/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2347/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2346
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2346/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2346/comments
https://api.github.com/repos/huggingface/datasets/issues/2346/events
https://github.com/huggingface/datasets/pull/2346
886,632,114
MDExOlB1bGxSZXF1ZXN0NjM5OTAzMjk3
2,346
Add Qasper Dataset
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-11T09:25:44"
"2021-05-18T12:28:28"
"2021-05-18T12:28:28"
CONTRIBUTOR
null
[Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home) Doing NLP on NLP papers to do NLP ♻️ I had to add it~ - [x] Add README (just gotta fill out some more) - [x] Dataloader code - [x] Make dummy dataset - [x] Generate dataset infos - [x] Tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2346/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2346/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2346", "html_url": "https://github.com/huggingface/datasets/pull/2346", "diff_url": "https://github.com/huggingface/datasets/pull/2346.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2346.patch", "merged_at": "2021-05-18T12:28:27" }
true
https://api.github.com/repos/huggingface/datasets/issues/2345
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2345/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2345/comments
https://api.github.com/repos/huggingface/datasets/issues/2345/events
https://github.com/huggingface/datasets/issues/2345
886,586,872
MDU6SXNzdWU4ODY1ODY4NzI=
2,345
[Question] How to move and reuse preprocessed dataset?
{ "login": "AtmaHou", "id": 15045402, "node_id": "MDQ6VXNlcjE1MDQ1NDAy", "avatar_url": "https://avatars.githubusercontent.com/u/15045402?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AtmaHou", "html_url": "https://github.com/AtmaHou", "followers_url": "https://api.github.com/users/AtmaHou/followers", "following_url": "https://api.github.com/users/AtmaHou/following{/other_user}", "gists_url": "https://api.github.com/users/AtmaHou/gists{/gist_id}", "starred_url": "https://api.github.com/users/AtmaHou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AtmaHou/subscriptions", "organizations_url": "https://api.github.com/users/AtmaHou/orgs", "repos_url": "https://api.github.com/users/AtmaHou/repos", "events_url": "https://api.github.com/users/AtmaHou/events{/privacy}", "received_events_url": "https://api.github.com/users/AtmaHou/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "@lhoestq @LysandreJik", "<s>Hi :) Can you share with us the code you used ?</s>\r\n\r\nEDIT: from https://github.com/huggingface/transformers/issues/11665#issuecomment-838348291 I understand you're using the run_clm.py script. Can you share your logs ?\r\n", "Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same", "> Also note that for the caching to work, you must reuse the exact same parameters as in the first run. Did you change any parameter ? The `preprocessing_num_workers` should also stay the same\r\n\r\nI only changed the `preprocessing_num_workers` maybe it is the problem~ I will try again~" ]
"2021-05-11T09:09:17"
"2021-06-11T04:39:11"
"2021-06-11T04:39:11"
NONE
null
Hi, I am training a GPT-2 from scratch using run_clm.py. I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess). I tried to copy path_to_cache_dir/datasets to new_cache_dir/datasets and set export HF_DATASETS_CACHE="new_cache_dir/", but the program still re-preprocesses the whole dataset without loading the cache. I also tried torch.save(lm_datasets, fw), but the saved file is only 14M. What is the proper way to do this?
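One common way to persist and move a preprocessed dataset explicitly, instead of relying on the cache directory, is `save_to_disk`/`load_from_disk` (real `datasets` APIs). The paths below are placeholders, and whether run_clm.py can consume the reloaded object directly is an assumption.

```python
from datasets import load_from_disk

# On the first machine, after preprocessing:
lm_datasets.save_to_disk("path/to/preprocessed")

# ...copy "path/to/preprocessed" anywhere, then:
lm_datasets = load_from_disk("path/to/preprocessed")
```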
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2345/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2345/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2342
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2342/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2342/comments
https://api.github.com/repos/huggingface/datasets/issues/2342/events
https://github.com/huggingface/datasets/pull/2342
882,981,420
MDExOlB1bGxSZXF1ZXN0NjM2NDg0MzM3
2,342
Docs - CER above 1
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-09T23:41:00"
"2021-05-10T13:34:00"
"2021-05-10T13:34:00"
CONTRIBUTOR
null
CER can actually be greater than 1.
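A short illustration of why CER can exceed 1: with enough insertions, the edit distance can be larger than the reference length. A sketch using the `cer` metric; the exact API and dependency (jiwer) may vary across versions.

```python
from datasets import load_metric

cer = load_metric("cer")
# Reference has 2 characters, so the many insertions push CER above 1.0.
score = cer.compute(predictions=["hello world"], references=["hi"])
print(score)  # > 1.0
```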
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2342/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2342/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2342", "html_url": "https://github.com/huggingface/datasets/pull/2342", "diff_url": "https://github.com/huggingface/datasets/pull/2342.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2342.patch", "merged_at": "2021-05-10T13:34:00" }
true
https://api.github.com/repos/huggingface/datasets/issues/2341
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2341/comments
https://api.github.com/repos/huggingface/datasets/issues/2341/events
https://github.com/huggingface/datasets/pull/2341
882,370,933
MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2
2,341
Added the Ascent KB
{ "login": "phongnt570", "id": 6749421, "node_id": "MDQ6VXNlcjY3NDk0MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/6749421?v=4", "gravatar_id": "", "url": "https://api.github.com/users/phongnt570", "html_url": "https://github.com/phongnt570", "followers_url": "https://api.github.com/users/phongnt570/followers", "following_url": "https://api.github.com/users/phongnt570/following{/other_user}", "gists_url": "https://api.github.com/users/phongnt570/gists{/gist_id}", "starred_url": "https://api.github.com/users/phongnt570/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/phongnt570/subscriptions", "organizations_url": "https://api.github.com/users/phongnt570/orgs", "repos_url": "https://api.github.com/users/phongnt570/repos", "events_url": "https://api.github.com/users/phongnt570/events{/privacy}", "received_events_url": "https://api.github.com/users/phongnt570/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-09T14:17:39"
"2021-05-11T09:16:59"
"2021-05-11T09:16:59"
CONTRIBUTOR
null
Added the Ascent Commonsense KB of 8.9M assertions. - Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905) - Website: https://ascent.mpi-inf.mpg.de/ (I am the author of the dataset)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2341/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2341", "html_url": "https://github.com/huggingface/datasets/pull/2341", "diff_url": "https://github.com/huggingface/datasets/pull/2341.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2341.patch", "merged_at": "2021-05-11T09:16:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/2340
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2340/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2340/comments
https://api.github.com/repos/huggingface/datasets/issues/2340/events
https://github.com/huggingface/datasets/pull/2340
882,370,824
MDExOlB1bGxSZXF1ZXN0NjM1OTExNzIx
2,340
More consistent copy logic
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-09T14:17:33"
"2021-05-11T08:58:33"
"2021-05-11T08:58:33"
CONTRIBUTOR
null
Use `info.copy()` instead of `copy.deepcopy(info)`. `Features.copy` now creates a deep copy.
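A sketch of the behaviour this PR relies on: since `Features.copy` now creates a deep copy, mutating nested features of the copy leaves the original untouched. `Features` and `Value` are real `datasets` classes; the feature names are illustrative.

```python
from datasets import Features, Value

features = Features({"answers": {"text": Value("string")}})
copied = features.copy()

# Mutate a nested entry of the copy only.
copied["answers"]["text"] = Value("large_string")

print(features["answers"]["text"])  # still Value(dtype='string') with a deep copy
```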
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2340/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2340/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2340", "html_url": "https://github.com/huggingface/datasets/pull/2340", "diff_url": "https://github.com/huggingface/datasets/pull/2340.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2340.patch", "merged_at": "2021-05-11T08:58:33" }
true
https://api.github.com/repos/huggingface/datasets/issues/2338
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2338/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2338/comments
https://api.github.com/repos/huggingface/datasets/issues/2338/events
https://github.com/huggingface/datasets/pull/2338
882,046,077
MDExOlB1bGxSZXF1ZXN0NjM1NjA3NzQx
2,338
fixed download link for web_science
{ "login": "bhavitvyamalik", "id": 19718818, "node_id": "MDQ6VXNlcjE5NzE4ODE4", "avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bhavitvyamalik", "html_url": "https://github.com/bhavitvyamalik", "followers_url": "https://api.github.com/users/bhavitvyamalik/followers", "following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}", "gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}", "starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions", "organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs", "repos_url": "https://api.github.com/users/bhavitvyamalik/repos", "events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}", "received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-09T09:12:20"
"2021-05-10T13:35:53"
"2021-05-10T13:35:53"
CONTRIBUTOR
null
Fixes #2337. Should work with: `dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2338/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2338/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2338", "html_url": "https://github.com/huggingface/datasets/pull/2338", "diff_url": "https://github.com/huggingface/datasets/pull/2338.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2338.patch", "merged_at": "2021-05-10T13:35:53" }
true
https://api.github.com/repos/huggingface/datasets/issues/2337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2337/comments
https://api.github.com/repos/huggingface/datasets/issues/2337/events
https://github.com/huggingface/datasets/issues/2337
881,610,567
MDU6SXNzdWU4ODE2MTA1Njc=
2,337
NonMatchingChecksumError for web_of_science dataset
{ "login": "nbroad1881", "id": 24982805, "node_id": "MDQ6VXNlcjI0OTgyODA1", "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nbroad1881", "html_url": "https://github.com/nbroad1881", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "repos_url": "https://api.github.com/users/nbroad1881/repos", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I've raised a PR for this. Should work with `dataset = load_dataset(\"web_of_science\", \"WOS11967\", ignore_verifications=True)`once it gets merged into the main branch. Thanks for reporting this! " ]
"2021-05-09T02:02:02"
"2021-05-10T13:35:53"
"2021-05-10T13:35:53"
NONE
null
NonMatchingChecksumError when trying to download the web_of_science dataset. >NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1'] Setting `ignore_verifications=True` results in an OSError. >OSError: Cannot find data file. Original error: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/37ab2c42f50d553c1d0ea432baca3e9e11fedea4aeec63a81e6b7e25dd10d4e7/WOS5736/X.txt' ```python dataset = load_dataset('web_of_science', 'WOS5736') ``` There are 3 data configurations ('WOS5736', 'WOS11967', 'WOS46985') and none of them works. datasets 1.6.2 python 3.7.10 Ubuntu 18.04.5 LTS
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2337/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2337/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2336
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2336/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2336/comments
https://api.github.com/repos/huggingface/datasets/issues/2336/events
https://github.com/huggingface/datasets/pull/2336
881,298,783
MDExOlB1bGxSZXF1ZXN0NjM0ODk1OTU5
2,336
Fix overflow issue in interpolation search
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-08T20:51:36"
"2021-05-10T13:29:07"
"2021-05-10T13:26:12"
CONTRIBUTOR
null
Fixes #2335. More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100).
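A minimal sketch of the overflow pitfall fixed here (assumptions: `arr` is a sorted array of offsets and we look for k such that arr[k] <= x < arr[k+1]). With np.int32 offsets, the product (j - i) * (x - arr[i]) can overflow; casting the operands to Python ints (arbitrary precision) avoids it. This is a sketch of the technique, not the exact patch.

```python
import numpy as np

def interpolation_search(arr, x):
    i, j = 0, len(arr) - 1
    while i < j and arr[i] <= x < arr[j]:
        # Cast to Python ints so the intermediate product cannot overflow.
        k = i + (j - i) * (int(x) - int(arr[i])) // (int(arr[j]) - int(arr[i]))
        if arr[k] <= x < arr[k + 1]:
            return k
        elif arr[k] < x:
            i = k + 1
        else:
            j = k
    raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.")

offsets = np.array([0, 2_000_000_000, 4_000_000_000], dtype=np.int64)
print(interpolation_search(offsets, 3_000_000_000))  # 1
```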
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2336/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2336/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2336", "html_url": "https://github.com/huggingface/datasets/pull/2336", "diff_url": "https://github.com/huggingface/datasets/pull/2336.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2336.patch", "merged_at": "2021-05-10T13:26:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/2335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2335/comments
https://api.github.com/repos/huggingface/datasets/issues/2335/events
https://github.com/huggingface/datasets/issues/2335
881,291,887
MDU6SXNzdWU4ODEyOTE4ODc=
2,335
Index error in Dataset.map
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
"2021-05-08T20:44:57"
"2021-05-10T13:26:12"
"2021-05-10T13:26:12"
CONTRIBUTOR
null
The following code, if executed on master, raises an IndexError (due to overflow): ```python >>> from datasets import * >>> d = load_dataset("bookcorpus", split="train") Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c14e43759cec498700) 2021-05-08 21:23:46.859818: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library cudart64_101.dll >>> d.map(lambda ex: ex) 0%|▎ | 289430/74004228 [00:13<58:41, 20935.33ex/s]c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py:84: RuntimeWarning: overflow encountered in int_scalars k = i + ((j - i) * (x - arr[i]) // (arr[j] - arr[i])) 0%|▎ | 290162/74004228 [00:13<59:11, 20757.23ex/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1498, in map new_fingerprint=new_fingerprint, File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 174, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\fingerprint.py", line 340, in wrapper out = func(self, *args, **kwargs) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1799, in _map_single for i, example in enumerate(pbar): File "C:\Users\Mario\Anaconda3\envs\hf-datasets\lib\site-packages\tqdm\std.py", line 1133, in __iter__ for obj in iterable: File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1145, in __iter__ format_kwargs=format_kwargs, File "c:\users\mario\desktop\projects\datasets-1\src\datasets\arrow_dataset.py", line 1337, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 368, in query_table pa_subtable = _query_table(table, key) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\formatting\formatting.py", line 79, in _query_table return table.fast_slice(key % table.num_rows, 1) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 128, in fast_slice i = _interpolation_search(self._offsets, offset) File "c:\users\mario\desktop\projects\datasets-1\src\datasets\table.py", line 91, in _interpolation_search raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.") IndexError: Invalid query '290162' for size 74004228. ``` Tested on Windows, can run on Linux if needed. EDIT: It seems like for this to happen, the default NumPy dtype has to be np.int32.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2335/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2335/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2334/comments
https://api.github.com/repos/huggingface/datasets/issues/2334/events
https://github.com/huggingface/datasets/pull/2334
879,810,107
MDExOlB1bGxSZXF1ZXN0NjMzNTAzNTEw
2,334
Updating the DART file checksums in GEM
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "organizations_url": "https://api.github.com/users/yjernite/orgs", "repos_url": "https://api.github.com/users/yjernite/repos", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "received_events_url": "https://api.github.com/users/yjernite/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-07T21:53:44"
"2021-05-07T22:18:10"
"2021-05-07T22:18:10"
MEMBER
null
The DART files were just updated on the source GitHub https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2334/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2334/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2334", "html_url": "https://github.com/huggingface/datasets/pull/2334", "diff_url": "https://github.com/huggingface/datasets/pull/2334.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2334.patch", "merged_at": "2021-05-07T22:18:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/2333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2333/comments
https://api.github.com/repos/huggingface/datasets/issues/2333/events
https://github.com/huggingface/datasets/pull/2333
879,214,067
MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy
2,333
Fix duplicate keys
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-07T15:28:08"
"2021-05-08T21:47:31"
"2021-05-07T15:57:08"
MEMBER
null
As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys. Most of the time this happened because the counter used for IDs was reset at each new data file.
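A hedged sketch of the keying pattern that avoids the problem; the file parsing is an illustrative placeholder, and the point is that the id counter lives outside the per-file loop instead of being reset for each file:

```python
def _generate_examples(self, filepaths):
    id_ = 0  # one running counter across all data files, never reset
    for filepath in filepaths:
        with open(filepath, encoding="utf-8") as f:
            for line in f:
                yield id_, {"text": line.rstrip("\n")}
                id_ += 1
```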
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2333/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2333/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2333", "html_url": "https://github.com/huggingface/datasets/pull/2333", "diff_url": "https://github.com/huggingface/datasets/pull/2333.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2333.patch", "merged_at": "2021-05-07T15:57:08" }
true
https://api.github.com/repos/huggingface/datasets/issues/2332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2332/comments
https://api.github.com/repos/huggingface/datasets/issues/2332/events
https://github.com/huggingface/datasets/pull/2332
879,041,608
MDExOlB1bGxSZXF1ZXN0NjMyNzk1NDE4
2,332
Add note about indices mapping in save_to_disk docstring
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-07T13:49:42"
"2021-05-07T17:20:48"
"2021-05-07T17:20:48"
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2332/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2332/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2332", "html_url": "https://github.com/huggingface/datasets/pull/2332", "diff_url": "https://github.com/huggingface/datasets/pull/2332.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2332.patch", "merged_at": "2021-05-07T17:20:48" }
true
https://api.github.com/repos/huggingface/datasets/issues/2330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2330/comments
https://api.github.com/repos/huggingface/datasets/issues/2330/events
https://github.com/huggingface/datasets/issues/2330
878,490,927
MDU6SXNzdWU4Nzg0OTA5Mjc=
2,330
Allow passing `desc` to `tqdm` in `Dataset.map()`
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\nShould we change `desc` in [pbar](https://github.com/huggingface/datasets/blob/81fcf88172ed5e3026ef68aed4c0ec6980372333/src/datasets/arrow_dataset.py#L1860) to something meaningful?", "I think the user could pass the `desc` parameter to `map` so that it can be displayed in the tqdm progress bar, as suggested by @cccntu.\r\n\r\nWhen there's no multiprocessing, the `desc` of the progress bar could be the `desc` passed by the user.\r\nIn multiprocessing, we were already using a `desc` equal to `\"#\" + str(rank)`.\r\nWe can change it to be `(desc or \"\") + \"#\" + str(rank)` instead.\r\n\r\nIn the end, since both `desc` and `rank` could be None, we can have:\r\n```python\r\npbar_desc = (desc or \"\") + \"#\" + str(rank) if rank is not None else desc\r\n```\r\n\r\nFinally let's remember that if we add `desc` as a new parameter to `map`, we should add it to the `ignore_kwargs` list of the `@fingerprint_transform` decorator of `Dataset._map_single` since we don't want this parameter to affect the fingerprint of the resulting dataset." ]
"2021-05-07T05:52:54"
"2021-05-26T14:59:21"
"2021-05-26T14:59:21"
CONTRIBUTOR
null
It's normal to have many `map()` calls, and some of them can take a few minutes, so it would be nice to have a description on the progress bar. Alternative solution: print the description before/after the `map()` call.
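Illustrative usage once such a parameter exists; the name `desc` follows the request and the maintainer comment above and should be treated as assumed API at the time of this issue, not guaranteed:

```python
from datasets import load_dataset

ds = load_dataset("sst", split="train")
# the description would label this bar instead of the bare default
ds = ds.map(lambda ex: ex, batched=True, desc="Normalizing text")
```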
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2330/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2330/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2329
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2329/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2329/comments
https://api.github.com/repos/huggingface/datasets/issues/2329/events
https://github.com/huggingface/datasets/pull/2329
877,924,198
MDExOlB1bGxSZXF1ZXN0NjMxODA3MTk0
2,329
Add cache dir for in-memory datasets
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-06T19:35:32"
"2021-06-08T19:46:48"
"2021-06-08T19:06:46"
CONTRIBUTOR
null
Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq. Should fix #2322
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2329/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2329/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2329", "html_url": "https://github.com/huggingface/datasets/pull/2329", "diff_url": "https://github.com/huggingface/datasets/pull/2329.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2329.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2328
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2328/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2328/comments
https://api.github.com/repos/huggingface/datasets/issues/2328/events
https://github.com/huggingface/datasets/pull/2328
877,673,896
MDExOlB1bGxSZXF1ZXN0NjMxNTg2MzU2
2,328
Add Matthews/Pearson/Spearman correlation metrics
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-06T16:09:27"
"2021-05-06T16:58:10"
"2021-05-06T16:58:10"
MEMBER
null
Added three metrics: - The Matthews correlation coefficient (from sklearn) - The Pearson correlation coefficient (from scipy) - The Spearman correlation coefficient (from scipy) cc @sgugger
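For reference, a small sketch of the underlying functions these metrics wrap, using the sklearn/scipy names given above; the data is made up for illustration:

```python
from sklearn.metrics import matthews_corrcoef
from scipy.stats import pearsonr, spearmanr

# Matthews correlation coefficient on classification labels
y_true, y_pred = [0, 1, 1, 0], [0, 1, 0, 0]
print(matthews_corrcoef(y_true, y_pred))

# Pearson / Spearman on continuous predictions vs. references
preds, refs = [0.1, 0.4, 0.35, 0.8], [0.0, 0.5, 0.3, 0.9]
print(pearsonr(preds, refs)[0])   # Pearson correlation coefficient
print(spearmanr(preds, refs)[0])  # Spearman rank correlation
```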
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2328/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2328/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2328", "html_url": "https://github.com/huggingface/datasets/pull/2328", "diff_url": "https://github.com/huggingface/datasets/pull/2328.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2328.patch", "merged_at": "2021-05-06T16:58:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/2327
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2327/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2327/comments
https://api.github.com/repos/huggingface/datasets/issues/2327/events
https://github.com/huggingface/datasets/issues/2327
877,565,831
MDU6SXNzdWU4Nzc1NjU4MzE=
2,327
A syntax error in example
{ "login": "mymusise", "id": 6883957, "node_id": "MDQ6VXNlcjY4ODM5NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/6883957?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mymusise", "html_url": "https://github.com/mymusise", "followers_url": "https://api.github.com/users/mymusise/followers", "following_url": "https://api.github.com/users/mymusise/following{/other_user}", "gists_url": "https://api.github.com/users/mymusise/gists{/gist_id}", "starred_url": "https://api.github.com/users/mymusise/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mymusise/subscriptions", "organizations_url": "https://api.github.com/users/mymusise/orgs", "repos_url": "https://api.github.com/users/mymusise/repos", "events_url": "https://api.github.com/users/mymusise/events{/privacy}", "received_events_url": "https://api.github.com/users/mymusise/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "cc @beurkinger but I think this has been fixed internally and will soon be updated right ?", "This issue has been fixed." ]
"2021-05-06T14:34:44"
"2021-05-20T03:04:19"
"2021-05-20T03:04:19"
NONE
null
![image](https://user-images.githubusercontent.com/6883957/117315905-b47a5c00-aeba-11eb-91eb-b2a4a0212a56.png) Sorry to report with an image; I can't find the source code of the template for this snippet.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2327/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2327/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2326
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2326/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2326/comments
https://api.github.com/repos/huggingface/datasets/issues/2326/events
https://github.com/huggingface/datasets/pull/2326
876,829,254
MDExOlB1bGxSZXF1ZXN0NjMwODk3MjI4
2,326
Enable auto-download for PAN-X / Wikiann domain in XTREME
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-05T20:58:38"
"2021-05-07T08:41:10"
"2021-05-07T08:41:10"
MEMBER
null
This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains. While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain, so I have included a fix for this as well.
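Illustrative effect of the change: a PAN-X config that previously required a manual download should now load directly. The language code `de` is an assumed example following the `PAN-X.lang` naming scheme above:

```python
from datasets import load_dataset

# no manually downloaded data dir needed anymore
wikiann_de = load_dataset("xtreme", "PAN-X.de")
```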
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2326/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2326/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2326", "html_url": "https://github.com/huggingface/datasets/pull/2326", "diff_url": "https://github.com/huggingface/datasets/pull/2326.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2326.patch", "merged_at": "2021-05-07T08:41:10" }
true
https://api.github.com/repos/huggingface/datasets/issues/2325
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2325/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2325/comments
https://api.github.com/repos/huggingface/datasets/issues/2325/events
https://github.com/huggingface/datasets/pull/2325
876,653,121
MDExOlB1bGxSZXF1ZXN0NjMwNzU1MzIx
2,325
Added the HLGD dataset
{ "login": "tingofurro", "id": 2609265, "node_id": "MDQ6VXNlcjI2MDkyNjU=", "avatar_url": "https://avatars.githubusercontent.com/u/2609265?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tingofurro", "html_url": "https://github.com/tingofurro", "followers_url": "https://api.github.com/users/tingofurro/followers", "following_url": "https://api.github.com/users/tingofurro/following{/other_user}", "gists_url": "https://api.github.com/users/tingofurro/gists{/gist_id}", "starred_url": "https://api.github.com/users/tingofurro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tingofurro/subscriptions", "organizations_url": "https://api.github.com/users/tingofurro/orgs", "repos_url": "https://api.github.com/users/tingofurro/repos", "events_url": "https://api.github.com/users/tingofurro/events{/privacy}", "received_events_url": "https://api.github.com/users/tingofurro/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-05T16:53:29"
"2021-05-12T14:55:13"
"2021-05-12T14:16:38"
CONTRIBUTOR
null
Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task Dataset Link: https://github.com/tingofurro/headline_grouping Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2325/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2325/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2325", "html_url": "https://github.com/huggingface/datasets/pull/2325", "diff_url": "https://github.com/huggingface/datasets/pull/2325.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2325.patch", "merged_at": "2021-05-12T14:16:38" }
true
https://api.github.com/repos/huggingface/datasets/issues/2324
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2324/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2324/comments
https://api.github.com/repos/huggingface/datasets/issues/2324/events
https://github.com/huggingface/datasets/pull/2324
876,602,064
MDExOlB1bGxSZXF1ZXN0NjMwNzE1NTQz
2,324
Create Audio feature
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/milestones/8", "html_url": "https://github.com/huggingface/datasets/milestone/8", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels", "id": 6968069, "node_id": "MI_kwDODunzps4AalMF", "number": 8, "title": "1.12", "description": "Next minor release", "creator": { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, "open_issues": 4, "closed_issues": 2, "state": "open", "created_at": "2021-07-21T15:34:56", "updated_at": "2021-10-13T10:26:33", "due_on": "2021-08-30T07:00:00", "closed_at": null }
[]
"2021-05-05T15:55:22"
"2021-10-13T10:26:33"
"2021-10-13T10:26:33"
MEMBER
null
Create `Audio` feature to handle raw audio files. Some decisions to be further discussed: - I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch libraries. - I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical `datasets` user works only with text datasets, and there is no need to impose additional audio/image package requirements on users who do not need them. - For tests, I require audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, which require an additional library to be installed with the distribution package manager - I also require `pytest-datadir`, which allows having (audio) data files for tests - The audio data contains: array and sample_rate. - The array is reshaped as a 1D array (the expected input for `Wav2Vec2`). Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`. ## Requirements Specification - Access example with audio loading and resampling: ```python ds[0]["audio"] ``` - Map with audio loading & resampling: ```python def preprocess(batch): batch["input_values"] = processor(batch["audio"]).input_values return batch ds = ds.map(preprocess) ``` - Map without audio loading and resampling: ```python def preprocess(batch): batch["labels"] = processor(batch["target_text"]).input_values return batch ds = ds.map(preprocess) ``` - Additional requirement specification (see https://github.com/huggingface/datasets/pull/2324#pullrequestreview-768864998): Cast the audio column to change the sampling rate: ```python ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) ```
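A minimal sketch of the decoding contract described above, assuming `soundfile` as the backend; `decode_audio` is an illustrative helper, not the merged implementation:

```python
import soundfile as sf

def decode_audio(path):
    # sf.read returns (data, samplerate); data may be (frames,) or
    # (frames, channels) depending on the file
    array, sampling_rate = sf.read(path)
    # flatten to a 1D array, the expected input shape for Wav2Vec2
    return {"array": array.reshape(-1), "sampling_rate": sampling_rate}
```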
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2324/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2324/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2324", "html_url": "https://github.com/huggingface/datasets/pull/2324", "diff_url": "https://github.com/huggingface/datasets/pull/2324.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2324.patch", "merged_at": "2021-10-13T10:26:33" }
true
https://api.github.com/repos/huggingface/datasets/issues/2323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2323/comments
https://api.github.com/repos/huggingface/datasets/issues/2323/events
https://github.com/huggingface/datasets/issues/2323
876,438,507
MDU6SXNzdWU4NzY0Mzg1MDc=
2,323
load_dataset("timit_asr") gives back duplicates of just one sample text
{ "login": "ekeleshian", "id": 33647474, "node_id": "MDQ6VXNlcjMzNjQ3NDc0", "avatar_url": "https://avatars.githubusercontent.com/u/33647474?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ekeleshian", "html_url": "https://github.com/ekeleshian", "followers_url": "https://api.github.com/users/ekeleshian/followers", "following_url": "https://api.github.com/users/ekeleshian/following{/other_user}", "gists_url": "https://api.github.com/users/ekeleshian/gists{/gist_id}", "starred_url": "https://api.github.com/users/ekeleshian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekeleshian/subscriptions", "organizations_url": "https://api.github.com/users/ekeleshian/orgs", "repos_url": "https://api.github.com/users/ekeleshian/repos", "events_url": "https://api.github.com/users/ekeleshian/events{/privacy}", "received_events_url": "https://api.github.com/users/ekeleshian/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Upgrading datasets to version 1.6 fixes the issue", "This bug was fixed in #1995. Upgrading the `datasets` should work! ", "Thanks @ekeleshian for having reported.\r\n\r\nI am closing this issue once that you updated `datasets`. Feel free to reopen it if the problem persists." ]
"2021-05-05T13:14:48"
"2021-05-07T10:32:30"
"2021-05-07T10:32:30"
NONE
null
## Describe the bug When you index with the key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times, namely "Would such an act of refusal be useful?". Similarly, when you index ['test'] and then ['text'], the list is the sentence "The bungalow was pleasantly situated near the shore." repeated 1680 times. I tried to work around the issue by downgrading to datasets version 1.3.0, inspired by [this post](https://www.gitmemory.com/issue/huggingface/datasets/2052/798904836), and removing the entire huggingface directory from ~/.cache, but I still get the same issue. ## Steps to reproduce the bug ```python from datasets import load_dataset timit = load_dataset("timit_asr") print(timit['train']['text']) print(timit['test']['text']) ``` ## Expected Result Rows of diverse text, as shown in the [wav2vec2.0 tutorial](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb) <img width="485" alt="Screen Shot 2021-05-05 at 9 09 57 AM" src="https://user-images.githubusercontent.com/33647474/117146094-d9b77f00-ad81-11eb-8306-f281850c127a.png"> ## Actual results Rows of repeated text. <img width="319" alt="Screen Shot 2021-05-05 at 9 11 53 AM" src="https://user-images.githubusercontent.com/33647474/117146231-f8b61100-ad81-11eb-834a-fc10410b0c9c.png"> ## Versions - Datasets: 1.3.0 - Python: 3.9.1 - Platform: macOS-11.2.1-x86_64-i386-64bit
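A quick sanity check for the duplication, assuming a version where the bug is fixed (per the comments, the fix landed in #1995 and upgrading works):

```python
from datasets import load_dataset

timit = load_dataset("timit_asr")
# On an affected version this prints 1; on a fixed version it prints
# the number of distinct training transcriptions.
print(len(set(timit["train"]["text"])))
```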
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2323/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2323/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2322
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2322/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2322/comments
https://api.github.com/repos/huggingface/datasets/issues/2322/events
https://github.com/huggingface/datasets/issues/2322
876,383,853
MDU6SXNzdWU4NzYzODM4NTM=
2,322
Calls to map are not cached.
{ "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/followers", "following_url": "https://api.github.com/users/villmow/following{/other_user}", "gists_url": "https://api.github.com/users/villmow/gists{/gist_id}", "starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/villmow/subscriptions", "organizations_url": "https://api.github.com/users/villmow/orgs", "repos_url": "https://api.github.com/users/villmow/repos", "events_url": "https://api.github.com/users/villmow/events{/privacy}", "received_events_url": "https://api.github.com/users/villmow/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I tried upgrading to `datasets==1.6.2` and downgrading to `1.6.0`. Both versions produce the same output.\r\n\r\nDowngrading to `1.5.0` works and produces the following output for me:\r\n\r\n```bash\r\nDownloading: 9.20kB [00:00, 3.94MB/s] \r\nDownloading: 5.99kB [00:00, 3.29MB/s] \r\nNo config specified, defaulting to: sst/default\r\nDownloading and preparing dataset sst/default (download: 6.83 MiB, generated: 3.73 MiB, post-processed: Unknown size, total: 10.56 MiB) to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b...\r\n Dataset sst downloaded and prepared to /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b. Subsequent calls will reuse this data.\r\nexecuted [0, 1]\r\n#0: 0%| | 0/5 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/5 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\nexecuted [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281]\r\nexecuted [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009]\r\nexecuted [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281]\r\nexecuted [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009]\r\nexecuted [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281]\r\nexecuted [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009]\r\n#0: 100%|██████████| 5/5 [00:00<00:00, 94.83ba/s]\r\nexecuted [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281]\r\n#1: 100%|██████████| 5/5 [00:00<00:00, 92.75ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/1 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/1 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [551, 552, 553, 554, 555, 556, 557, 558, 559, 560]\r\n#0: 100%|██████████| 1/1 [00:00<00:00, 118.81ba/s]\r\n#1: 100%|██████████| 1/1 [00:00<00:00, 123.06ba/s]\r\nexecuted [0, 1]\r\n#0: 0%| | 0/2 [00:00<?, ?ba/s]\r\n#1: 0%| | 0/2 [00:00<?, ?ba/s]\r\nexecuted [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]\r\nexecuted [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114]\r\nexecuted [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009]\r\n#0: 100%|██████████| 2/2 [00:00<00:00, 119.42ba/s]\r\nexecuted [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114]\r\n#1: 100%|██████████| 2/2 [00:00<00:00, 123.33ba/s]\r\n\r\n\r\n\r\n ############################## \r\n\r\n\r\n\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-6079777aa097c8f8.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-2dc05c46f68eda6e.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-1ca347e7430b98f1.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-c0f1a73ce3ba40cd.arrow\r\nexecuted [0, 1]\r\nLoading cached processed dataset at 
/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-832a1407bf1ac5b7.arrow\r\nLoading cached processed dataset at /home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/a16a45566b63b2c3179e6c1d0f8edadde56e45570ee8cf99394fbb738491d34b/cache-036316a259b773c4.arrow\r\n- Datasets: 1.5.0\r\n- Python: 3.8.3 (default, May 19 2020, 18:47:26) \r\n[GCC 7.3.0]\r\n- Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10\r\n```", "Hi,\r\n\r\nset `keep_in_memory` to False when loading a dataset (`sst = load_dataset(\"sst\", keep_in_memory=False)`) to prevent it from loading in-memory. Currently, in-memory datasets fail to find cached files due to this check (always False for them):\r\n\r\nhttps://github.com/huggingface/datasets/blob/241a0b4a3a868778ee91e767ad406f9da7610df2/src/datasets/arrow_dataset.py#L1718\r\n\r\n@albertvillanova It seems like this behavior was overlooked in #2182.\r\n\r\n", "Hi @villmow, thanks for reporting. \r\n\r\nAs @mariosasko has pointed out, we did not consider this case when introducing the feature of automatic in-memory for small datasets. This needs to be fixed.", "Hi ! Currently a dataset that is in memory doesn't know doesn't know in which directory it has to read/write cache files.\r\nOn the other hand, a dataset that loaded from the disk (via memory mapping) uses the directory from which the dataset is located to read/write cache files.\r\n\r\nBecause of that, currently in-memory datasets simply don't use caching.\r\n\r\nMaybe a Dataset object could have a `cache_dir` that is set to the directory where the arrow files are created during `load_dataset` ?", "Fixed once reverted the default in-memory feature:\r\nClosed by #2460 (to close issue #2458).", "Please @villmow, feel free to update to `Datasets` latest version (1.8)." ]
"2021-05-05T12:11:27"
"2021-06-08T19:10:02"
"2021-06-08T19:08:21"
NONE
null
## Describe the bug Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed? ## Steps to reproduce the bug ```python import datasets datasets.set_caching_enabled(True) sst = datasets.load_dataset("sst") def foo(samples, i): print("executed", i[:10]) return samples # first call x = sst.map(foo, batched=True, with_indices=True, num_proc=2) print('\n'*3, "#" * 30, '\n'*3) # second call y = sst.map(foo, batched=True, with_indices=True, num_proc=2) # print version import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` ## Actual results This code prints the following output for me: ```bash No config specified, defaulting to: sst/default Reusing dataset sst (/home/johannes/.cache/huggingface/datasets/sst/default/1.0.0/b8a7889ef01c5d3ae8c379b84cc4080f8aad3ac2bc538701cbe0ac6416fb76ff) #0: 0%| | 0/5 [00:00<?, ?ba/s] #1: 0%| | 0/5 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281] executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009] executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281] executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009] executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281] executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009] #0: 100%|██████████| 5/5 [00:00<00:00, 59.85ba/s] executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281] #1: 100%|██████████| 5/5 [00:00<00:00, 60.85ba/s] #0: 0%| | 0/1 [00:00<?, ?ba/s] #1: 0%| | 0/1 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] #0: 100%|██████████| 1/1 [00:00<00:00, 69.32ba/s] executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560] #1: 100%|██████████| 1/1 [00:00<00:00, 70.93ba/s] #0: 0%| | 0/2 [00:00<?, ?ba/s] #1: 0%| | 0/2 [00:00<?, ?ba/s]executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] #0: 100%|██████████| 2/2 [00:00<00:00, 63.25ba/s] executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114] executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114] #1: 100%|██████████| 2/2 [00:00<00:00, 57.69ba/s] ############################## #0: 0%| | 0/5 [00:00<?, ?ba/s] #1: 0%| | 0/5 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [4272, 4273, 4274, 4275, 4276, 4277, 4278, 4279, 4280, 4281] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [5272, 5273, 5274, 5275, 5276, 5277, 5278, 5279, 5280, 5281] executed [2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009] executed [6272, 6273, 6274, 6275, 6276, 6277, 6278, 6279, 6280, 6281] executed [3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008, 3009] executed [4000, 4001, 4002, 4003, 4004, 4005, 4006, 4007, 4008, 4009] #0: 100%|██████████| 5/5 [00:00<00:00, 58.10ba/s] executed [7272, 7273, 7274, 7275, 7276, 7277, 7278, 7279, 7280, 7281] executed [8272, 8273, 8274, 8275, 8276, 8277, 8278, 8279, 8280, 8281] #1: 100%|██████████| 5/5 [00:00<00:00, 57.19ba/s] #0: 0%| | 0/1 [00:00<?, ?ba/s] #1: 0%| | 0/1 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] #0: 100%|██████████| 1/1 [00:00<00:00, 60.10ba/s] executed [551, 552, 553, 554, 555, 556, 557, 558, 559, 560] #1: 100%|██████████| 1/1 [00:00<00:00, 53.82ba/s] 
#0: 0%| | 0/2 [00:00<?, ?ba/s] #1: 0%| | 0/2 [00:00<?, ?ba/s] executed [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] executed [1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009] executed [1105, 1106, 1107, 1108, 1109, 1110, 1111, 1112, 1113, 1114] #0: 100%|██████████| 2/2 [00:00<00:00, 72.76ba/s] executed [2105, 2106, 2107, 2108, 2109, 2110, 2111, 2112, 2113, 2114] #1: 100%|██████████| 2/2 [00:00<00:00, 71.55ba/s] - Datasets: 1.6.1 - Python: 3.8.3 (default, May 19 2020, 18:47:26) [GCC 7.3.0] - Platform: Linux-5.4.0-72-generic-x86_64-with-glibc2.10 ``` ## Expected results Caching should work.
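The workaround pointed out in the discussion, until the in-memory default was reverted: force the dataset to be memory-mapped so the cache lookup actually runs.

```python
import datasets

# keep_in_memory=False makes the dataset disk-backed, which restores
# the cache-file lookup for subsequent identical .map() calls
sst = datasets.load_dataset("sst", keep_in_memory=False)
```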
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2322/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2322/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2321
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2321/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2321/comments
https://api.github.com/repos/huggingface/datasets/issues/2321/events
https://github.com/huggingface/datasets/pull/2321
876,304,364
MDExOlB1bGxSZXF1ZXN0NjMwNDc3NDUy
2,321
Set encoding in OSCAR dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-05T10:27:03"
"2021-05-05T10:50:55"
"2021-05-05T10:50:55"
MEMBER
null
Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms. Fix #2319.
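The shape of the fix, as a sketch; the real change touches the loader's file-reading code, and the variable names here are illustrative:

```python
# before: encoding defaults to the platform codec (cp1252 on Windows)
# after: the codec is pinned explicitly
with open(filepath, encoding="utf-8") as f:
    for line in f:
        ...
```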
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2321/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2321/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2321", "html_url": "https://github.com/huggingface/datasets/pull/2321", "diff_url": "https://github.com/huggingface/datasets/pull/2321.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2321.patch", "merged_at": "2021-05-05T10:50:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/2320
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2320/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2320/comments
https://api.github.com/repos/huggingface/datasets/issues/2320/events
https://github.com/huggingface/datasets/pull/2320
876,257,026
MDExOlB1bGxSZXF1ZXN0NjMwNDM5NjI5
2,320
Set default name in init_dynamic_modules
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-05T09:30:03"
"2021-05-06T07:57:54"
"2021-05-06T07:57:54"
MEMBER
null
Set default value for the name of dynamic modules. Close #2318.
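Per the discussion in the linked issue, after this change the module name argument has a default, so the dynamic modules path can be obtained without passing it:

```python
from datasets.load import init_dynamic_modules

dynamic_modules_path = init_dynamic_modules()
```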
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2320/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2320/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2320", "html_url": "https://github.com/huggingface/datasets/pull/2320", "diff_url": "https://github.com/huggingface/datasets/pull/2320.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2320.patch", "merged_at": "2021-05-06T07:57:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/2319
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2319/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2319/comments
https://api.github.com/repos/huggingface/datasets/issues/2319/events
https://github.com/huggingface/datasets/issues/2319
876,251,376
MDU6SXNzdWU4NzYyNTEzNzY=
2,319
UnicodeDecodeError for OSCAR (Afrikaans)
{ "login": "sgraaf", "id": 8904453, "node_id": "MDQ6VXNlcjg5MDQ0NTM=", "avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgraaf", "html_url": "https://github.com/sgraaf", "followers_url": "https://api.github.com/users/sgraaf/followers", "following_url": "https://api.github.com/users/sgraaf/following{/other_user}", "gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions", "organizations_url": "https://api.github.com/users/sgraaf/orgs", "repos_url": "https://api.github.com/users/sgraaf/repos", "events_url": "https://api.github.com/users/sgraaf/events{/privacy}", "received_events_url": "https://api.github.com/users/sgraaf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @sgraaf.\r\n\r\nI am going to have a look at it. \r\n\r\nI guess the expected codec is \"UTF-8\". Normally, when no explicitly codec is passed, Python uses one which is platform-dependent. For Linux machines, the default codec is `utf_8`, which is OK. However for Windows machine, the default codec is `cp1252`, which causes the problem.", "Awesome, thank you. 😃 ", "@sgraaf, I have just merged the fix in the master branch.\r\n\r\nYou can either:\r\n- install `datasets` from source code\r\n- wait until we make the next release of `datasets`\r\n- set the `utf-8` codec as your default instead of `cp1252`. This can be done by activating the Python [UTF-8 mode](https://www.python.org/dev/peps/pep-0540) either by passing the command-line option `-X utf8` or by setting the environment variable `PYTHONUTF8=1`." ]
"2021-05-05T09:22:52"
"2021-05-05T10:57:31"
"2021-05-05T10:50:55"
NONE
null
## Describe the bug When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("oscar", "unshuffled_deduplicated_af") ``` ## Expected results Anything but an error, really. ## Actual results ```python >>> from datasets import load_dataset >>> dataset = load_dataset("oscar", "unshuffled_deduplicated_af") Downloading: 14.7kB [00:00, 4.91MB/s] Downloading: 3.07MB [00:00, 32.6MB/s] Downloading and preparing dataset oscar/unshuffled_deduplicated_af (download: 62.93 MiB, generated: 163.38 MiB, post-processed: Unknown size, total: 226.32 MiB) to C:\Users\sgraaf\.cache\huggingface\datasets\oscar\unshuffled_deduplicated_af\1.0.0\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464... Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 81.0/81.0 [00:00<00:00, 40.5kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 66.0M/66.0M [00:18<00:00, 3.50MB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\load.py", line 745, in load_dataset builder_instance.download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 574, in download_and_prepare self._download_and_prepare( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\datasets\builder.py", line 979, in _prepare_split for key, record in utils.tqdm( File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\site-packages\tqdm\std.py", line 1133, in __iter__ for obj in iterable: File "C:\Users\sgraaf\.cache\huggingface\modules\datasets_modules\datasets\oscar\bd4f96df5b4512007ef9fd17bbc1ecde459fa53d2fc0049cf99392ba2efcc464\oscar.py", line 359, in _generate_examples for line in f: File "C:\Users\sgraaf\AppData\Local\Programs\Python\Python39\lib\encodings\cp1252.py", line 23, in decode return codecs.charmap_decode(input,self.errors,decoding_table)[0] UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 7454: character maps to <undefined> ``` ## Versions Paste the output of the following code: ```python import datasets import sys import platform print(f""" - Datasets: {datasets.__version__} - Python: {sys.version} - Platform: {platform.platform()} """) ``` - Datasets: 1.6.2 - Python: 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)] - Platform: Windows-10-10.0.19041-SP0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2319/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2319/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2318
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2318/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2318/comments
https://api.github.com/repos/huggingface/datasets/issues/2318/events
https://github.com/huggingface/datasets/issues/2318
876,212,460
MDU6SXNzdWU4NzYyMTI0NjA=
2,318
[api request] API to obtain "dataset_module" dynamic path?
{ "login": "richardliaw", "id": 4529381, "node_id": "MDQ6VXNlcjQ1MjkzODE=", "avatar_url": "https://avatars.githubusercontent.com/u/4529381?v=4", "gravatar_id": "", "url": "https://api.github.com/users/richardliaw", "html_url": "https://github.com/richardliaw", "followers_url": "https://api.github.com/users/richardliaw/followers", "following_url": "https://api.github.com/users/richardliaw/following{/other_user}", "gists_url": "https://api.github.com/users/richardliaw/gists{/gist_id}", "starred_url": "https://api.github.com/users/richardliaw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richardliaw/subscriptions", "organizations_url": "https://api.github.com/users/richardliaw/orgs", "repos_url": "https://api.github.com/users/richardliaw/repos", "events_url": "https://api.github.com/users/richardliaw/events{/privacy}", "received_events_url": "https://api.github.com/users/richardliaw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @richardliaw, \r\n\r\nFirst, thanks for the compliments.\r\n\r\nIn relation with your request, currently, the dynamic modules path is obtained this way:\r\n```python\r\nfrom datasets.load import init_dynamic_modules, MODULE_NAME_FOR_DYNAMIC_MODULES\r\n\r\ndynamic_modules_path = init_dynamic_modules(MODULE_NAME_FOR_DYNAMIC_MODULES)\r\n```\r\n\r\nLet me know if it is OK for you this way. \r\n\r\nI could set `MODULE_NAME_FOR_DYNAMIC_MODULES` as default value, so that you could instead obtain the path with:\r\n```\r\ndynamic_modules_path = datasets.load.init_dynamic_modules()\r\n```", "Hi @albertvillanova, the default value proposal seems great :) Looking forward to this!", "I like the idea as well ! thanks @albertvillanova ", "Hi @richardliaw, the feature is on the master branch and will be included in the next release in a couple of weeks.", "awesome work @albertvillanova !" ]
"2021-05-05T08:40:48"
"2021-05-06T08:45:45"
"2021-05-06T07:57:54"
NONE
null
**Is your feature request related to a problem? Please describe.** This is an awesome library. It seems like the dynamic module path in this library has broken some of the hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34 This is because Ray will spawn new processes, and each process will load modules by path. However, we need to explicitly inform Ray to load the right modules, or else it will error upon import. I'd like an API to obtain the dynamic paths. This will allow us to support this functionality in this awesome library while being future-proof. **Describe the solution you'd like** `datasets.get_dynamic_paths -> List[str]` will be sufficient for my use case. By offering this API, we will be able to address the following issues (by patching the Ray integration sufficiently): https://github.com/huggingface/blog/issues/106 https://github.com/huggingface/transformers/issues/11565 https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34 https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35
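To make the request concrete, here is a minimal sketch of how a spawned worker could consume such an API, built on the `init_dynamic_modules` helper the maintainers point to in the comments; the default-argument form and the `sys.path` registration are illustrative assumptions, not a confirmed Ray integration:

```python
import sys

from datasets.load import init_dynamic_modules

# Resolve the directory where `datasets` writes its dynamically generated
# dataset/metric modules, then make it importable in this worker process.
dynamic_modules_path = init_dynamic_modules()
if dynamic_modules_path not in sys.path:
    sys.path.insert(0, dynamic_modules_path)
```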
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2318/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2318/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2317
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2317/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2317/comments
https://api.github.com/repos/huggingface/datasets/issues/2317/events
https://github.com/huggingface/datasets/pull/2317
875,767,318
MDExOlB1bGxSZXF1ZXN0NjMwMDQxNzc4
2,317
Fix incorrect version specification for the pyarrow package
{ "login": "cemilcengiz", "id": 32267027, "node_id": "MDQ6VXNlcjMyMjY3MDI3", "avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cemilcengiz", "html_url": "https://github.com/cemilcengiz", "followers_url": "https://api.github.com/users/cemilcengiz/followers", "following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}", "gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions", "organizations_url": "https://api.github.com/users/cemilcengiz/orgs", "repos_url": "https://api.github.com/users/cemilcengiz/repos", "events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}", "received_events_url": "https://api.github.com/users/cemilcengiz/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-04T19:30:20"
"2021-05-05T10:09:16"
"2021-05-05T09:21:58"
CONTRIBUTOR
null
This PR addresses the bug in the pyarrow version specification, which is detailed in #2316. The fix simply adds the missing comma between the version bounds. Fix #2316.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2317/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2317/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2317", "html_url": "https://github.com/huggingface/datasets/pull/2317", "diff_url": "https://github.com/huggingface/datasets/pull/2317.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2317.patch", "merged_at": "2021-05-05T09:21:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/2316
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2316/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2316/comments
https://api.github.com/repos/huggingface/datasets/issues/2316/events
https://github.com/huggingface/datasets/issues/2316
875,756,353
MDU6SXNzdWU4NzU3NTYzNTM=
2,316
Incorrect version specification for pyarrow
{ "login": "cemilcengiz", "id": 32267027, "node_id": "MDQ6VXNlcjMyMjY3MDI3", "avatar_url": "https://avatars.githubusercontent.com/u/32267027?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cemilcengiz", "html_url": "https://github.com/cemilcengiz", "followers_url": "https://api.github.com/users/cemilcengiz/followers", "following_url": "https://api.github.com/users/cemilcengiz/following{/other_user}", "gists_url": "https://api.github.com/users/cemilcengiz/gists{/gist_id}", "starred_url": "https://api.github.com/users/cemilcengiz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cemilcengiz/subscriptions", "organizations_url": "https://api.github.com/users/cemilcengiz/orgs", "repos_url": "https://api.github.com/users/cemilcengiz/repos", "events_url": "https://api.github.com/users/cemilcengiz/events{/privacy}", "received_events_url": "https://api.github.com/users/cemilcengiz/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Fixed by #2317." ]
"2021-05-04T19:15:11"
"2021-05-05T10:10:03"
"2021-05-05T10:10:03"
CONTRIBUTOR
null
## Describe the bug The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77). Also as a snippet: ```python "pyarrow>=1.0.0<4.0.0", ``` ## Steps to reproduce the bug ```bash pip install "pyarrow>=1.0.0<4.0.0" ``` ## Expected results It is expected to get a pyarrow version between 1.0.0 (inclusive) and 4.0.0 (exclusive). ## Actual results pip ignores the specified versions since there is a missing comma between the lower and upper limits. Therefore, pip installs the latest pyarrow version from PYPI, which is 4.0.0. This is especially problematic since "conda env export" fails due to incorrect version specification. Here is the conda error as well: ```bash conda env export InvalidVersionSpec: Invalid version '1.0.0<4.0.0': invalid character(s) ``` ## Fix suggestion Put a comma between the version limits which means replacing the line in setup.py file with the following: ```python "pyarrow>=1.0.0,<4.0.0", ``` ## Versions Paste the output of the following code: ```python - Datasets: 1.6.2 - Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0] - Platform: Linux-5.4.0-42-generic-x86_64-with-debian-buster-sid ```
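A short sketch of why the comma matters, using the `packaging` library that pip builds on; recent `packaging` versions reject the malformed form outright, and the older legacy-specifier fallback is presumably what let pip silently install 4.0.0 here:

```python
from packaging.specifiers import InvalidSpecifier, SpecifierSet

# With the comma, both bounds are enforced as intended.
spec = SpecifierSet(">=1.0.0,<4.0.0")
print("3.0.0" in spec)  # True
print("4.0.0" in spec)  # False

# Without the comma, the whole string parses as a single specifier with
# the bogus version "1.0.0<4.0.0", which modern `packaging` rejects.
try:
    SpecifierSet(">=1.0.0<4.0.0")
except InvalidSpecifier as err:
    print(err)
```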
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2316/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2316/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2315
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2315/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2315/comments
https://api.github.com/repos/huggingface/datasets/issues/2315/events
https://github.com/huggingface/datasets/pull/2315
875,742,200
MDExOlB1bGxSZXF1ZXN0NjMwMDIyMDYy
2,315
Datasets cli improvements
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-04T18:55:11"
"2021-05-10T16:36:51"
"2021-05-10T16:36:50"
CONTRIBUTOR
null
This PR: * replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO) * removes the `download` command (copied from the transformers repo?) * adds missing help messages to the cli commands
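For reference, the system-info snippet being replaced boils down to the following; the name and exact output format of the new CLI entry point are not shown in this PR description, so this is just a sketch mirroring what the bug template asked reporters to run:

```python
import platform
import sys

import datasets


def print_env_info():
    # Gather the same details the bug_report.md snippet used to collect.
    print(f"- Datasets: {datasets.__version__}")
    print(f"- Python: {sys.version}")
    print(f"- Platform: {platform.platform()}")


if __name__ == "__main__":
    print_env_info()
```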
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2315/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2315/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2315", "html_url": "https://github.com/huggingface/datasets/pull/2315", "diff_url": "https://github.com/huggingface/datasets/pull/2315.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2315.patch", "merged_at": "2021-05-10T16:36:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/2314
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2314/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2314/comments
https://api.github.com/repos/huggingface/datasets/issues/2314/events
https://github.com/huggingface/datasets/pull/2314
875,729,271
MDExOlB1bGxSZXF1ZXN0NjMwMDExODc4
2,314
Minor refactor prepare_module
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-04T18:37:26"
"2021-10-13T09:07:34"
"2021-10-13T09:07:34"
MEMBER
null
Start refactoring `prepare_module` to decouple functionality. This PR: - extracts the function `_initialize_dynamic_modules_namespace_package` - extracts the function `_find_module_in_github_or_s3` - renames some variables - uses f-strings
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2314/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2314/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2314", "html_url": "https://github.com/huggingface/datasets/pull/2314", "diff_url": "https://github.com/huggingface/datasets/pull/2314.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2314.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2313
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2313/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2313/comments
https://api.github.com/repos/huggingface/datasets/issues/2313/events
https://github.com/huggingface/datasets/pull/2313
875,475,367
MDExOlB1bGxSZXF1ZXN0NjI5ODEwNTc4
2,313
Remove unused head_hf_s3 function
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-04T13:42:06"
"2021-05-07T09:31:42"
"2021-05-07T09:31:42"
MEMBER
null
Currently, the function `head_hf_s3` is not used: - its returned result is never used - it never raises any exception, as exceptions are caught and returned (not raised) This PR removes it.
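To illustrate why the function is dead code, the pattern being removed looks roughly like this (a schematic reconstruction from the description above, not the actual implementation):

```python
import requests


def head_probe(url, timeout=10):
    # Returning the exception instead of raising it means callers can only
    # detect failure by inspecting the return value -- and since nothing
    # ever inspected (or used) the result, the call had no effect at all.
    try:
        return requests.head(url, timeout=timeout)
    except requests.RequestException as err:
        return err
```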
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2313/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2313/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2313", "html_url": "https://github.com/huggingface/datasets/pull/2313", "diff_url": "https://github.com/huggingface/datasets/pull/2313.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2313.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2312
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2312/comments
https://api.github.com/repos/huggingface/datasets/issues/2312/events
https://github.com/huggingface/datasets/pull/2312
875,435,726
MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz
2,312
Add rename_columnS method
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-04T12:57:53"
"2021-05-04T13:43:13"
"2021-05-04T13:43:12"
CONTRIBUTOR
null
Cherry-picked from #2255
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2312/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2312", "html_url": "https://github.com/huggingface/datasets/pull/2312", "diff_url": "https://github.com/huggingface/datasets/pull/2312.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2312.patch", "merged_at": "2021-05-04T13:43:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/2311
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2311/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2311/comments
https://api.github.com/repos/huggingface/datasets/issues/2311/events
https://github.com/huggingface/datasets/pull/2311
875,262,208
MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx
2,311
Add SLR52, SLR53 and SLR54 to OpenSLR
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github.com/users/cahya-wirawan/followers", "following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}", "gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}", "starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions", "organizations_url": "https://api.github.com/users/cahya-wirawan/orgs", "repos_url": "https://api.github.com/users/cahya-wirawan/repos", "events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}", "received_events_url": "https://api.github.com/users/cahya-wirawan/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-04T09:08:03"
"2021-05-07T09:50:55"
"2021-05-07T09:50:55"
CONTRIBUTOR
null
Add large speech datasets for Sinhala, Bengali and Nepali.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2311/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2311/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2311", "html_url": "https://github.com/huggingface/datasets/pull/2311", "diff_url": "https://github.com/huggingface/datasets/pull/2311.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2311.patch", "merged_at": "2021-05-07T09:50:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/2310
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2310/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2310/comments
https://api.github.com/repos/huggingface/datasets/issues/2310/events
https://github.com/huggingface/datasets/pull/2310
875,096,051
MDExOlB1bGxSZXF1ZXN0NjI5NTEwNTg5
2,310
Update README.md
{ "login": "cryoff", "id": 15029054, "node_id": "MDQ6VXNlcjE1MDI5MDU0", "avatar_url": "https://avatars.githubusercontent.com/u/15029054?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cryoff", "html_url": "https://github.com/cryoff", "followers_url": "https://api.github.com/users/cryoff/followers", "following_url": "https://api.github.com/users/cryoff/following{/other_user}", "gists_url": "https://api.github.com/users/cryoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/cryoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cryoff/subscriptions", "organizations_url": "https://api.github.com/users/cryoff/orgs", "repos_url": "https://api.github.com/users/cryoff/repos", "events_url": "https://api.github.com/users/cryoff/events{/privacy}", "received_events_url": "https://api.github.com/users/cryoff/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-04T04:38:01"
"2022-07-06T15:19:58"
"2022-07-06T15:19:58"
CONTRIBUTOR
null
Provides a description of the data instances and dataset features.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2310/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2310/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2310", "html_url": "https://github.com/huggingface/datasets/pull/2310", "diff_url": "https://github.com/huggingface/datasets/pull/2310.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2310.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/2309
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2309/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2309/comments
https://api.github.com/repos/huggingface/datasets/issues/2309/events
https://github.com/huggingface/datasets/pull/2309
874,644,990
MDExOlB1bGxSZXF1ZXN0NjI5MTU4NjQx
2,309
Fix conda release
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-03T14:52:59"
"2021-05-03T16:01:17"
"2021-05-03T16:01:17"
MEMBER
null
There were a few issues with conda releases (they've been failing for a while now). To fix this I had to: - add the --single-version-externally-managed flag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075)) - set the Python version of the conda build stage to 3.8 since 3.9 isn't supported - sync the version requirement of `huggingface_hub` With these changes I'm working on uploading all missing versions up to 1.6.2 to conda EDIT: I managed to build and upload all missing versions up to 1.6.2 to conda :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2309/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2309/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2309", "html_url": "https://github.com/huggingface/datasets/pull/2309", "diff_url": "https://github.com/huggingface/datasets/pull/2309.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2309.patch", "merged_at": "2021-05-03T16:01:17" }
true
https://api.github.com/repos/huggingface/datasets/issues/2302
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2302/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2302/comments
https://api.github.com/repos/huggingface/datasets/issues/2302/events
https://github.com/huggingface/datasets/pull/2302
873,961,435
MDExOlB1bGxSZXF1ZXN0NjI4NjIzMDQ3
2,302
Add SubjQA dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-02T14:51:20"
"2021-05-10T09:21:19"
"2021-05-10T09:21:19"
MEMBER
null
Hello datasetters 🙂! Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance). I found a bug in the start/end indices that I've proposed a fix for here: https://github.com/megagonlabs/SubjQA/pull/2 Unfortunately, the dataset creators are unresponsive, so for now I am using my fork as the source. Will update the URL if/when the creators respond.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2302/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2302/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2302", "html_url": "https://github.com/huggingface/datasets/pull/2302", "diff_url": "https://github.com/huggingface/datasets/pull/2302.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2302.patch", "merged_at": "2021-05-10T09:21:19" }
true
https://api.github.com/repos/huggingface/datasets/issues/2301
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2301/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2301/comments
https://api.github.com/repos/huggingface/datasets/issues/2301/events
https://github.com/huggingface/datasets/issues/2301
873,941,266
MDU6SXNzdWU4NzM5NDEyNjY=
2,301
Unable to setup dev env on Windows
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "repos_url": "https://api.github.com/users/gchhablani/repos", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @gchhablani, \r\n\r\nThere are some 3rd-party dependencies that require to build code in C. In this case, it is the library `python-Levenshtein`.\r\n\r\nOn Windows, in order to be able to build C code, you need to install at least `Microsoft C++ Build Tools` version 14. You can find more info here: https://visualstudio.microsoft.com/visual-cpp-build-tools/", "Hi @albertvillanova \r\n\r\nSorry for such a trivial issue ;-; \r\n\r\nThanks a lot." ]
"2021-05-02T13:20:42"
"2021-05-03T15:18:01"
"2021-05-03T15:17:34"
CONTRIBUTOR
null
Hi I tried installing the `".[dev]"` version on Windows 10 after cloning. Here is the error I'm facing: ```bat (env) C:\testing\datasets>pip install -e ".[dev]" Obtaining file:///C:/testing/datasets Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.5) Collecting pyarrow>=0.17.1 Using cached pyarrow-4.0.0-cp37-cp37m-win_amd64.whl (13.3 MB) Requirement already satisfied: dill in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.3.1.1) Collecting pandas Using cached pandas-1.2.4-cp37-cp37m-win_amd64.whl (9.1 MB) Requirement already satisfied: requests>=2.19.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.25.1) Requirement already satisfied: tqdm<4.50.0,>=4.27 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.49.0) Requirement already satisfied: xxhash in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2.0.2) Collecting multiprocess Using cached multiprocess-0.70.11.1-py37-none-any.whl (108 kB) Requirement already satisfied: fsspec in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (2021.4.0) Collecting huggingface_hub<0.1.0 Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB) Requirement already satisfied: importlib_metadata in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.0.1) Requirement already satisfied: absl-py in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.12.0) Requirement already satisfied: pytest in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (6.2.3) Collecting pytest-xdist Using cached pytest_xdist-2.2.1-py3-none-any.whl (37 kB) Collecting apache-beam>=2.24.0 Using cached apache_beam-2.29.0-cp37-cp37m-win_amd64.whl (3.7 MB) Collecting elasticsearch Using cached elasticsearch-7.12.1-py2.py3-none-any.whl (339 kB) Requirement already satisfied: boto3==1.16.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.16.43) Requirement already satisfied: botocore==1.19.43 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.19.43) Collecting moto[s3]==1.3.16 Using cached moto-1.3.16-py2.py3-none-any.whl (879 kB) Collecting rarfile>=4.0 Using cached rarfile-4.0-py3-none-any.whl (28 kB) Collecting tensorflow>=2.3 Using cached tensorflow-2.4.1-cp37-cp37m-win_amd64.whl (370.7 MB) Requirement already satisfied: torch in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.8.1) Requirement already satisfied: transformers in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (4.5.1) Collecting bs4 Using cached bs4-0.0.1-py3-none-any.whl Collecting conllu Using cached conllu-4.4-py2.py3-none-any.whl (15 kB) Collecting langdetect Using cached langdetect-1.0.8-py3-none-any.whl Collecting lxml Using cached lxml-4.6.3-cp37-cp37m-win_amd64.whl (3.5 MB) Collecting mwparserfromhell Using cached mwparserfromhell-0.6-cp37-cp37m-win_amd64.whl (101 kB) Collecting nltk Using cached nltk-3.6.2-py3-none-any.whl (1.5 MB) Collecting openpyxl Using cached openpyxl-3.0.7-py2.py3-none-any.whl (243 kB) Collecting py7zr Using cached py7zr-0.15.2-py3-none-any.whl (66 kB) Collecting tldextract Using cached tldextract-3.1.0-py2.py3-none-any.whl (87 kB) Collecting zstandard Using cached zstandard-0.15.2-cp37-cp37m-win_amd64.whl (582 
kB) Collecting bert_score>=0.3.6 Using cached bert_score-0.3.9-py3-none-any.whl (59 kB) Collecting rouge_score Using cached rouge_score-0.0.4-py2.py3-none-any.whl (22 kB) Collecting sacrebleu Using cached sacrebleu-1.5.1-py3-none-any.whl (54 kB) Requirement already satisfied: scipy in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3) Collecting seqeval Using cached seqeval-1.2.2-py3-none-any.whl Collecting sklearn Using cached sklearn-0.0-py2.py3-none-any.whl Collecting jiwer Using cached jiwer-2.2.0-py3-none-any.whl (13 kB) Requirement already satisfied: toml>=0.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.10.2) Requirement already satisfied: requests_file>=1.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.5.1) Requirement already satisfied: texttable>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.6.3) Requirement already satisfied: s3fs>=0.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (0.4.2) Requirement already satisfied: Werkzeug>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datasets==1.5.0.dev0) (1.0.1) Collecting black Using cached black-21.4b2-py3-none-any.whl (130 kB) Collecting isort Using cached isort-5.8.0-py3-none-any.whl (103 kB) Collecting flake8==3.7.9 Using cached flake8-3.7.9-py2.py3-none-any.whl (69 kB) Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.10.0) Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from boto3==1.16.43->datasets==1.5.0.dev0) (0.3.7) Requirement already satisfied: urllib3<1.27,>=1.25.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (1.26.4) Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from botocore==1.19.43->datasets==1.5.0.dev0) (2.8.1) Collecting entrypoints<0.4.0,>=0.3.0 Using cached entrypoints-0.3-py2.py3-none-any.whl (11 kB) Collecting pyflakes<2.2.0,>=2.1.0 Using cached pyflakes-2.1.1-py2.py3-none-any.whl (59 kB) Collecting pycodestyle<2.6.0,>=2.5.0 Using cached pycodestyle-2.5.0-py2.py3-none-any.whl (51 kB) Collecting mccabe<0.7.0,>=0.6.0 Using cached mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB) Requirement already satisfied: jsondiff>=1.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.3.0) Requirement already satisfied: pytz in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2021.1) Requirement already satisfied: mock in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.0.3) Requirement already satisfied: MarkupSafe<2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.1.1) Requirement already satisfied: python-jose[cryptography]<4.0.0,>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0) Requirement already satisfied: aws-xray-sdk!=0.96,>=0.93 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.8.0) Requirement already satisfied: cryptography>=2.3.0 in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.7) Requirement already satisfied: more-itertools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (8.7.0) Requirement already satisfied: PyYAML>=5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.4.1) Requirement already satisfied: boto>=2.36.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.49.0) Requirement already satisfied: idna<3,>=2.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.10) Requirement already satisfied: sshpubkeys>=3.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.3.1) Requirement already satisfied: responses>=0.9.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.13.3) Requirement already satisfied: xmltodict in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.12.0) Requirement already satisfied: setuptools in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (52.0.0.post20210125) Requirement already satisfied: Jinja2>=2.10.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.11.3) Requirement already satisfied: zipp in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.4.1) Requirement already satisfied: six>1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.15.0) Requirement already satisfied: ecdsa<0.15 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.14.1) Requirement already satisfied: docker>=2.5.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (5.0.0) Requirement already satisfied: cfn-lint>=0.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.49.0) Requirement already satisfied: grpcio<2,>=1.29.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (1.32.0) Collecting hdfs<3.0.0,>=2.1.0 Using cached hdfs-2.6.0-py3-none-any.whl (33 kB) Collecting pyarrow>=0.17.1 Using cached pyarrow-3.0.0-cp37-cp37m-win_amd64.whl (12.6 MB) Collecting fastavro<2,>=0.21.4 Using cached fastavro-1.4.0-cp37-cp37m-win_amd64.whl (394 kB) Requirement already satisfied: httplib2<0.18.0,>=0.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.17.4) Collecting pymongo<4.0.0,>=3.8.0 Using cached pymongo-3.11.3-cp37-cp37m-win_amd64.whl (382 kB) Collecting crcmod<2.0,>=1.7 Using cached crcmod-1.7-py3-none-any.whl Collecting avro-python3!=1.9.2,<1.10.0,>=1.8.1 Using cached avro_python3-1.9.2.1-py3-none-any.whl Requirement already satisfied: typing-extensions<3.8.0,>=3.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.7.4.3) Requirement already satisfied: future<1.0.0,>=0.18.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.18.2) Collecting oauth2client<5,>=2.0.1 Using cached oauth2client-4.1.3-py2.py3-none-any.whl (98 kB) 
Collecting pydot<2,>=1.2.0 Using cached pydot-1.4.2-py2.py3-none-any.whl (21 kB) Requirement already satisfied: protobuf<4,>=3.12.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from apache-beam>=2.24.0->datasets==1.5.0.dev0) (3.15.8) Requirement already satisfied: wrapt in c:\programdata\anaconda3\envs\env\lib\site-packages (from aws-xray-sdk!=0.96,>=0.93->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.12.1) Collecting matplotlib Using cached matplotlib-3.4.1-cp37-cp37m-win_amd64.whl (7.1 MB) Requirement already satisfied: junit-xml~=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.9) Requirement already satisfied: jsonpatch in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.32) Requirement already satisfied: jsonschema~=3.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (3.2.0) Requirement already satisfied: networkx~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.5.1) Requirement already satisfied: aws-sam-translator>=1.35.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.35.0) Requirement already satisfied: cffi>=1.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (1.14.5) Requirement already satisfied: pycparser in c:\programdata\anaconda3\envs\env\lib\site-packages (from cffi>=1.12->cryptography>=2.3.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.20) Requirement already satisfied: pywin32==227 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (227) Requirement already satisfied: websocket-client>=0.32.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from docker>=2.5.1->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.58.0) Requirement already satisfied: docopt in c:\programdata\anaconda3\envs\env\lib\site-packages (from hdfs<3.0.0,>=2.1.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.6.2) Requirement already satisfied: filelock in c:\programdata\anaconda3\envs\env\lib\site-packages (from huggingface_hub<0.1.0->datasets==1.5.0.dev0) (3.0.12) Requirement already satisfied: pyrsistent>=0.14.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (0.17.3) Requirement already satisfied: attrs>=17.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonschema~=3.0->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (20.3.0) Requirement already satisfied: decorator<5,>=4.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from networkx~=2.4->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (4.4.2) Requirement already satisfied: rsa>=3.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (4.7.2) Requirement already satisfied: pyasn1-modules>=0.0.5 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.2.8) Requirement already satisfied: pyasn1>=0.1.7 in c:\programdata\anaconda3\envs\env\lib\site-packages (from oauth2client<5,>=2.0.1->apache-beam>=2.24.0->datasets==1.5.0.dev0) (0.4.8) Requirement already satisfied: 
pyparsing>=2.1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pydot<2,>=1.2.0->apache-beam>=2.24.0->datasets==1.5.0.dev0) (2.4.7) Requirement already satisfied: certifi>=2017.4.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (2020.12.5) Requirement already satisfied: chardet<5,>=3.0.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests>=2.19.0->datasets==1.5.0.dev0) (4.0.0) Collecting keras-preprocessing~=1.1.2 Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB) Requirement already satisfied: termcolor~=1.1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (1.1.0) Requirement already satisfied: tensorboard~=2.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.5.0) Requirement already satisfied: wheel~=0.35 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (0.36.2) Collecting opt-einsum~=3.3.0 Using cached opt_einsum-3.3.0-py3-none-any.whl (65 kB) Collecting gast==0.3.3 Using cached gast-0.3.3-py2.py3-none-any.whl (9.7 kB) Collecting google-pasta~=0.2 Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB) Requirement already satisfied: tensorflow-estimator<2.5.0,>=2.4.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorflow>=2.3->datasets==1.5.0.dev0) (2.4.0) Collecting astunparse~=1.6.3 Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB) Collecting flatbuffers~=1.12.0 Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB) Collecting h5py~=2.10.0 Using cached h5py-2.10.0-cp37-cp37m-win_amd64.whl (2.5 MB) Requirement already satisfied: markdown>=2.6.8 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.3.4) Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.8.0) Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.4.4) Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (0.6.0) Requirement already satisfied: google-auth<2,>=1.6.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.30.0) Requirement already satisfied: cachetools<5.0,>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth<2,>=1.6.3->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (4.2.2) Requirement already satisfied: requests-oauthlib>=0.7.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (1.3.0) Requirement already satisfied: oauthlib>=3.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.4->tensorflow>=2.3->datasets==1.5.0.dev0) (3.1.0) Requirement already satisfied: regex!=2019.12.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (2021.4.4) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.10.2) Requirement already satisfied: sacremoses in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (0.0.45) Requirement already satisfied: packaging in c:\programdata\anaconda3\envs\env\lib\site-packages (from transformers->datasets==1.5.0.dev0) (20.9) Collecting pathspec<1,>=0.8.1 Using cached pathspec-0.8.1-py2.py3-none-any.whl (28 kB) Requirement already satisfied: click>=7.1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (7.1.2) Collecting appdirs Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB) Collecting mypy-extensions>=0.4.3 Using cached mypy_extensions-0.4.3-py2.py3-none-any.whl (4.5 kB) Requirement already satisfied: typed-ast>=1.4.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from black->datasets==1.5.0.dev0) (1.4.3) Collecting beautifulsoup4 Using cached beautifulsoup4-4.9.3-py3-none-any.whl (115 kB) Requirement already satisfied: soupsieve>1.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from beautifulsoup4->bs4->datasets==1.5.0.dev0) (2.2.1) Collecting python-Levenshtein Using cached python-Levenshtein-0.12.2.tar.gz (50 kB) Requirement already satisfied: jsonpointer>=1.9 in c:\programdata\anaconda3\envs\env\lib\site-packages (from jsonpatch->cfn-lint>=0.4.0->moto[s3]==1.3.16->datasets==1.5.0.dev0) (2.1) Requirement already satisfied: pillow>=6.2.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (8.2.0) Requirement already satisfied: cycler>=0.10 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (0.10.0) Requirement already satisfied: kiwisolver>=1.0.1 in c:\programdata\anaconda3\envs\env\lib\site-packages (from matplotlib->bert_score>=0.3.6->datasets==1.5.0.dev0) (1.3.1) Collecting multiprocess Using cached multiprocess-0.70.11-py3-none-any.whl (98 kB) Using cached multiprocess-0.70.10.zip (2.4 MB) Using cached multiprocess-0.70.9-py3-none-any.whl Requirement already satisfied: joblib in c:\programdata\anaconda3\envs\env\lib\site-packages (from nltk->datasets==1.5.0.dev0) (1.0.1) Collecting et-xmlfile Using cached et_xmlfile-1.1.0-py3-none-any.whl (4.7 kB) Requirement already satisfied: pyzstd<0.15.0,>=0.14.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from py7zr->datasets==1.5.0.dev0) (0.14.4) Collecting pyppmd<0.13.0,>=0.12.1 Using cached pyppmd-0.12.1-cp37-cp37m-win_amd64.whl (32 kB) Collecting pycryptodome>=3.6.6 Using cached pycryptodome-3.10.1-cp35-abi3-win_amd64.whl (1.6 MB) Collecting bcj-cffi<0.6.0,>=0.5.1 Using cached bcj_cffi-0.5.1-cp37-cp37m-win_amd64.whl (21 kB) Collecting multivolumefile<0.3.0,>=0.2.0 Using cached multivolumefile-0.2.3-py3-none-any.whl (17 kB) Requirement already satisfied: iniconfig in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.1.1) Requirement already satisfied: py>=1.8.2 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.10.0) Requirement already satisfied: pluggy<1.0.0a1,>=0.12 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.13.1) Requirement already satisfied: atomicwrites>=1.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (1.4.0) Requirement already satisfied: colorama in 
c:\programdata\anaconda3\envs\env\lib\site-packages (from pytest->datasets==1.5.0.dev0) (0.4.4) Collecting pytest-forked Using cached pytest_forked-1.3.0-py2.py3-none-any.whl (4.7 kB) Collecting execnet>=1.1 Using cached execnet-1.8.0-py2.py3-none-any.whl (39 kB) Requirement already satisfied: apipkg>=1.4 in c:\programdata\anaconda3\envs\env\lib\site-packages (from execnet>=1.1->pytest-xdist->datasets==1.5.0.dev0) (1.5) Collecting portalocker==2.0.0 Using cached portalocker-2.0.0-py2.py3-none-any.whl (11 kB) Requirement already satisfied: scikit-learn>=0.21.3 in c:\programdata\anaconda3\envs\env\lib\site-packages (from seqeval->datasets==1.5.0.dev0) (0.24.2) Requirement already satisfied: threadpoolctl>=2.0.0 in c:\programdata\anaconda3\envs\env\lib\site-packages (from scikit-learn>=0.21.3->seqeval->datasets==1.5.0.dev0) (2.1.0) Building wheels for collected packages: python-Levenshtein Building wheel for python-Levenshtein (setup.py) ... error ERROR: Command errored out with exit status 1: command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\VKC~1\AppData\Local\Temp\pip-wheel-8jh7fm18' cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\ Complete output (27 lines): running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.7 creating build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein running egg_info writing python_Levenshtein.egg-info\PKG-INFO writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt writing entry points to python_Levenshtein.egg-info\entry_points.txt writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt writing requirements to python_Levenshtein.egg-info\requires.txt writing top-level names to python_Levenshtein.egg-info\top_level.txt reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*pyc' found anywhere in distribution warning: no previously-included files matching '*so' found anywhere in distribution warning: no previously-included files matching '.project' found anywhere in distribution warning: no previously-included files matching '.pydevproject' found anywhere in distribution writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt' copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein running build_ext building 'Levenshtein._levenshtein' extension error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Failed building wheel for python-Levenshtein Running setup.py clean for python-Levenshtein Failed to build python-Levenshtein Installing collected packages: python-Levenshtein, pytest-forked, pyppmd, pymongo, pyflakes, pydot, pycryptodome, pycodestyle, pyarrow, portalocker, pathspec, pandas, opt-einsum, oauth2client, nltk, mypy-extensions, multivolumefile, multiprocess, moto, mccabe, matplotlib, keras-preprocessing, huggingface-hub, hdfs, h5py, google-pasta, gast, flatbuffers, fastavro, execnet, et-xmlfile, entrypoints, crcmod, beautifulsoup4, bcj-cffi, avro-python3, astunparse, appdirs, zstandard, tldextract, tensorflow, sklearn, seqeval, sacrebleu, rouge-score, rarfile, pytest-xdist, py7zr, openpyxl, mwparserfromhell, lxml, langdetect, jiwer, isort, flake8, elasticsearch, datasets, conllu, bs4, black, bert-score, apache-beam Running setup.py install for python-Levenshtein ... error ERROR: Command errored out with exit status 1: command: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' cwd: C:\Users\VKC~1\AppData\Local\Temp\pip-install-ynt_dbm4\python-levenshtein_c02e7e6f9def4629a475349654670ae9\ Complete output (27 lines): running install running build running build_py creating build creating build\lib.win-amd64-3.7 creating build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\StringMatcher.py -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\__init__.py -> build\lib.win-amd64-3.7\Levenshtein running egg_info writing python_Levenshtein.egg-info\PKG-INFO writing dependency_links to python_Levenshtein.egg-info\dependency_links.txt writing entry points to python_Levenshtein.egg-info\entry_points.txt writing namespace_packages to python_Levenshtein.egg-info\namespace_packages.txt writing requirements to python_Levenshtein.egg-info\requires.txt writing top-level names to python_Levenshtein.egg-info\top_level.txt reading manifest file 'python_Levenshtein.egg-info\SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no previously-included files matching '*pyc' found anywhere in distribution warning: no previously-included files matching '*so' found anywhere in distribution warning: no previously-included files matching '.project' found anywhere in distribution warning: no previously-included files matching '.pydevproject' found anywhere in distribution writing manifest file 'python_Levenshtein.egg-info\SOURCES.txt' copying Levenshtein\_levenshtein.c -> build\lib.win-amd64-3.7\Levenshtein copying Levenshtein\_levenshtein.h -> build\lib.win-amd64-3.7\Levenshtein running build_ext building 'Levenshtein._levenshtein' extension error: Microsoft Visual C++ 14.0 or greater is required. 
Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/ ---------------------------------------- ERROR: Command errored out with exit status 1: 'C:\ProgramData\Anaconda3\envs\env\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"'; __file__='"'"'C:\\Users\\VKC~1\\AppData\\Local\\Temp\\pip-install-ynt_dbm4\\python-levenshtein_c02e7e6f9def4629a475349654670ae9\\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\VKC~1\AppData\Local\Temp\pip-record-v7l7zitb\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\ProgramData\Anaconda3\envs\env\Include\python-Levenshtein' Check the logs for full command output. ``` Here are conda and python versions: ```bat (env) C:\testing\datasets>conda --version conda 4.9.2 (env) C:\testing\datasets>python --version Python 3.7.10 ``` Please help me out. Thanks.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2301/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2301/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2300
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2300/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2300/comments
https://api.github.com/repos/huggingface/datasets/issues/2300/events
https://github.com/huggingface/datasets/issues/2300
873,928,169
MDU6SXNzdWU4NzM5MjgxNjk=
2,300
Add VoxPopuli
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, "node_id": "MDU6TGFiZWwyNzI1MjQxMDUy", "url": "https://api.github.com/repos/huggingface/datasets/labels/speech", "name": "speech", "color": "d93f0b", "default": false, "description": "" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "I'm happy to take this on:) One question: The original unlabelled data is stored unsegmented (see e.g. https://github.com/facebookresearch/voxpopuli/blob/main/voxpopuli/get_unlabelled_data.py#L30), but segmenting the audio in the dataset would require a dependency on something like soundfile or torchaudio. An alternative could be to provide the segments start and end times as a Sequence and then it's up to the user to perform the segmentation on-the-fly if they wish?", "Hey @jfainberg,\r\n\r\nThis sounds great! I think adding a dependency would not be a big problem, however automatically segmenting the data probably means that it would take a very long time to do:\r\n\r\n```python\r\ndataset = load_dataset(\"voxpopuli\", \"french\")\r\n```\r\n\r\n=> so as a start I think your option 2 is the way to go!", "@polinaeterna VoxPopuli is available [here](https://huggingface.co/datasets/facebook/voxpopuli), so we can close this issue, right?\r\n", "@mariosasko yes, sure, closing it" ]
"2021-05-02T12:17:40"
"2023-02-28T17:43:52"
"2023-02-28T17:43:51"
MEMBER
null
## Adding a Dataset - **Name:** VoxPopuli - **Description:** VoxPopuli's raw data is collected from 2009-2020 European Parliament event recordings - **Paper:** https://arxiv.org/abs/2101.00390 - **Data:** https://github.com/facebookresearch/voxpopuli - **Motivation:** the biggest unlabeled speech dataset **Note**: Since the dataset is so huge, we should only add the config `10k` in the beginning. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2300/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2300/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2299
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2299/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2299/comments
https://api.github.com/repos/huggingface/datasets/issues/2299/events
https://github.com/huggingface/datasets/issues/2299
873,914,717
MDU6SXNzdWU4NzM5MTQ3MTc=
2,299
My iPhone
{ "login": "Jasonbuchanan1983", "id": 82856229, "node_id": "MDQ6VXNlcjgyODU2MjI5", "avatar_url": "https://avatars.githubusercontent.com/u/82856229?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jasonbuchanan1983", "html_url": "https://github.com/Jasonbuchanan1983", "followers_url": "https://api.github.com/users/Jasonbuchanan1983/followers", "following_url": "https://api.github.com/users/Jasonbuchanan1983/following{/other_user}", "gists_url": "https://api.github.com/users/Jasonbuchanan1983/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jasonbuchanan1983/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jasonbuchanan1983/subscriptions", "organizations_url": "https://api.github.com/users/Jasonbuchanan1983/orgs", "repos_url": "https://api.github.com/users/Jasonbuchanan1983/repos", "events_url": "https://api.github.com/users/Jasonbuchanan1983/events{/privacy}", "received_events_url": "https://api.github.com/users/Jasonbuchanan1983/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-02T11:11:11"
"2021-07-23T09:24:16"
"2021-05-03T08:17:38"
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2299/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2299/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2298
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2298/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2298/comments
https://api.github.com/repos/huggingface/datasets/issues/2298/events
https://github.com/huggingface/datasets/pull/2298
873,771,942
MDExOlB1bGxSZXF1ZXN0NjI4NDk2NjM2
2,298
Mapping in the distributed setting
{ "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/users/TevenLeScao/followers", "following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}", "gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}", "starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions", "organizations_url": "https://api.github.com/users/TevenLeScao/orgs", "repos_url": "https://api.github.com/users/TevenLeScao/repos", "events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}", "received_events_url": "https://api.github.com/users/TevenLeScao/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
"2021-05-01T21:23:05"
"2021-05-03T13:54:53"
"2021-05-03T13:54:53"
MEMBER
null
This PR implements the barrier trick for distributed mapping, as discussed on Thursday with @lhoestq
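For context, a minimal sketch of the barrier trick, assuming a `torch.distributed` process group is already initialized and `preprocess` is a placeholder mapping function: rank 0 computes (and caches) the map while the other ranks wait at a barrier, then reuse the cache. ```python import torch.distributed as dist from datasets import load_dataset def preprocess(example): # placeholder mapping function return example if dist.get_rank() > 0: dist.barrier() # non-main ranks wait here dataset = load_dataset("squad", split="train") dataset = dataset.map(preprocess) # rank 0 computes and writes the cache if dist.get_rank() == 0: dist.barrier() # release the other ranks; their map() call hits the cache ```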
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2298/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2298/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2298", "html_url": "https://github.com/huggingface/datasets/pull/2298", "diff_url": "https://github.com/huggingface/datasets/pull/2298.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2298.patch", "merged_at": "2021-05-03T13:54:53" }
true