| Column | Type | Values |
|---|---|---|
| url | string | lengths 61–61 |
| repository_url | string | 1 value |
| labels_url | string | lengths 75–75 |
| comments_url | string | lengths 70–70 |
| events_url | string | lengths 68–68 |
| html_url | string | lengths 49–51 |
| id | int64 | 1.16B–1.34B |
| node_id | string | lengths 18–19 |
| number | int64 | 3.81k–4.82k |
| title | string | lengths 1–162 |
| user | dict | |
| labels | list | |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | sequence | |
| created_at | int64 | 1,646B–1,660B |
| updated_at | int64 | 1,646B–1,660B |
| closed_at | int64 | 1,646B–1,660B |
| author_association | string | 3 values |
| active_lock_reason | null | |
| body | string | lengths 9–19.5k |
| reactions | dict | |
| timeline_url | string | lengths 70–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/4624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4624/comments
https://api.github.com/repos/huggingface/datasets/issues/4624/events
https://github.com/huggingface/datasets/pull/4624
1,293,085,058
PR_kwDODunzps46yzOK
4,624
Remove all paperswithcode_id: null
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there.\r\n\r\n@lhoestq maybe it's better to accept it on the Hub side then? Let me know if you want us to do it Hub-side", "Yup it's maybe better to support it on the Hub side then indeed, thanks ! Closing this one" ]
1,656,936,692,000
1,656,940,920,000
1,656,940,238,000
MEMBER
null
On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`: <img width="686" alt="image" src="https://user-images.githubusercontent.com/42851186/177151825-93d341c5-25bd-41ab-96c2-c0b516d51c68.png"> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there. To have the validation working again we can simply remove all the `paperswithcode_id: null`. cc @julien-c
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4624/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4624/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4624", "html_url": "https://github.com/huggingface/datasets/pull/4624", "diff_url": "https://github.com/huggingface/datasets/pull/4624.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4624.patch", "merged_at": null }
true
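The change this PR proposed (later handled on the Hub side instead, per the comments) amounts to deleting `paperswithcode_id: null` lines from the YAML headers of dataset cards. A minimal sketch of that cleanup, assuming each card is a `README.md` with the tag on its own line; the helper name and path handling are illustrative:

```python
# Minimal sketch, assuming a literal "paperswithcode_id: null" line may
# appear in the YAML header of a dataset card's README.md.
from pathlib import Path

def drop_null_pwc_id(readme_path: str) -> None:
    path = Path(readme_path)
    lines = path.read_text(encoding="utf-8").splitlines(keepends=True)
    cleaned = [line for line in lines if line.strip() != "paperswithcode_id: null"]
    if cleaned != lines:
        path.write_text("".join(cleaned), encoding="utf-8")
```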
https://api.github.com/repos/huggingface/datasets/issues/4623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4623/comments
https://api.github.com/repos/huggingface/datasets/issues/4623/events
https://github.com/huggingface/datasets/issues/4623
1,293,042,894
I_kwDODunzps5NEkTO
4,623
Loading MNIST as Pytorch Dataset
{ "login": "jameschapman19", "id": 56592797, "node_id": "MDQ6VXNlcjU2NTkyNzk3", "avatar_url": "https://avatars.githubusercontent.com/u/56592797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jameschapman19", "html_url": "https://github.com/jameschapman19", "followers_url": "https://api.github.com/users/jameschapman19/followers", "following_url": "https://api.github.com/users/jameschapman19/following{/other_user}", "gists_url": "https://api.github.com/users/jameschapman19/gists{/gist_id}", "starred_url": "https://api.github.com/users/jameschapman19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jameschapman19/subscriptions", "organizations_url": "https://api.github.com/users/jameschapman19/orgs", "repos_url": "https://api.github.com/users/jameschapman19/repos", "events_url": "https://api.github.com/users/jameschapman19/events{/privacy}", "received_events_url": "https://api.github.com/users/jameschapman19/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! We haven't implemented the conversion from images data to PyTorch tensors yet I think\r\n\r\ncc @mariosasko ", "So I understand:\r\n\r\nset_format() does not properly do the conversion to pytorch tensors from PIL images.\r\n\r\nSo that someone who stumbles on this can use the package:\r\n\r\n```python\r\ndataset = load_dataset(\"mnist\", split=\"train\")\r\ndef transform_func(examples):\r\n examples[\"image\"] = [np.array(img) for img in examples[\"image\"]]\r\n return examples\r\ndataset = dataset.with_transform(transform_func)\r\ndataset[0]\r\n``` ", "This then appears to work with pytorch dataloaders as:\r\n```\r\ndataloader=torch.utils.data.DataLoader(dataset,batch_size=1)\r\n```\r\n\r\nand tensorflow as:\r\n```\r\ndataset=dataset.to_tf_dataset(batch_size=1)\r\n```", "Hi! `set_transform`/`with_transform` is indeed the correct solution for the conversion. Improving this part of the API is one of the things I'm working on currently, so stay tuned!" ]
1,656,934,390,000
1,656,945,650,000
null
NONE
null
## Describe the bug Conversion of the MNIST dataset to PyTorch tensors fails with an AttributeError ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("mnist", split="train") dataset.set_format('torch') dataset[0] print() ``` ## Expected results Expect to see torch tensors for the image and label ## Actual results Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Users/chapm/PycharmProjects/multiviewdata/multiviewdata/huggingface/mnist.py", line 13, in <module> dataset[0] File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2154, in __getitem__ return self._getitem( File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2139, in _getitem formatted_output = format_table( File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 532, in format_table return formatter(pa_table, query_type=query_type) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 281, in __call__ return self.format_row(pa_table) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 58, in format_row return self.recursive_tensorize(row) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 54, in recursive_tensorize return map_nested(self._recursive_tensorize, data_struct, map_list=False) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 356, in map_nested mapped = [ File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 357, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in _single_map_nested return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in <dictcomp> return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 293, in _single_map_nested return function(data_struct) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 51, in _recursive_tensorize return self._tensorize(data_struct) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 38, in _tensorize if np.issubdtype(value.dtype, np.integer): AttributeError: 'bytes' object has no attribute 'dtype' python-BaseException ## Environment info - `datasets` version: 2.3.2 - Platform: Windows-10-10.0.22579-SP0 - Python version: 3.9.2 - PyArrow version: 8.0.0 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4623/timeline
null
null
null
null
false
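Putting the workaround from the comments above into one runnable piece (assuming `datasets`, `numpy`, and `torch` are installed; the default DataLoader collation turns the NumPy arrays into tensors):

```python
import numpy as np
import torch
from datasets import load_dataset

dataset = load_dataset("mnist", split="train")

def transform_func(examples):
    # Convert the PIL images to NumPy arrays on the fly.
    examples["image"] = [np.array(img) for img in examples["image"]]
    return examples

dataset = dataset.with_transform(transform_func)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=1)
batch = next(iter(dataloader))  # batch["image"] and batch["label"] are torch tensors
```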
https://api.github.com/repos/huggingface/datasets/issues/4622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4622/comments
https://api.github.com/repos/huggingface/datasets/issues/4622/events
https://github.com/huggingface/datasets/pull/4622
1,293,031,939
PR_kwDODunzps46ynmT
4,622
Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present)
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq @mariosasko pls take a look at https://github.com/huggingface/datasets/pull/4622/commits/769e4c046a5bd5e3a4dbd09cfad1f4cf60677869. I modified `_generate_examples()` according to the same logic too: removed checking if `metadata_files` are not empty for the case when `self.config.drop_metadata=True` because I think we should be aligned with the config and preserve labels if `self.config.drop_labels=False` (the default value) and `self.config.drop_metadata=True` but `metadata_files` are passed. This is an extremely unlikely use case (when `self.config.drop_metadata=True`, but `metadata_files` are passed to `_generate_examples()`) since users usually do not use `_generate_examples()` alone but I believe it would be consistent to have the same behavior as in `_splits_generators()`. This change requires change in tests too if we suppose that we want to preserve labels (default value of `self.config.drop_labels` is False) when `self.config.drop_metadata=True`, even if `metadata_files` are for some reason provided (as it is done in tests). \r\n\r\nwdyt about this change?\r\n", "@lhoestq it wouldn't raise an error if we check `example.keys() == {\"image\", \"label\"}` as test checks only `_generate_examples`, not `encode_example`. and in the implementation of this PR `_generate_examples` would return both `image` and `label` key in the case when `drop_metadata=True` and `drop_labels=False` (default) as it seems that we agreed on that :)", "and on the other hand it would raise an error if `label` column is missing in _generate_examples when `drop_metadata=True` and `drop_labels=False`\r\n\r\nby \"it\" i mean tests :D (`test_generate_examples_with_metadata_that_misses_one_image`, `test_generate_examples_with_metadata_in_wrong_location` and `test_generate_examples_drop_metadata`)", "Perhaps we could make `self.config.drop_metadata = None` and `self.config.drop_labels = None` the defaults to see explicitly what the user wants. This would then turn into `self.config.drop_metadata = False` and `self.config.drop_labels = True` if metadata files are present and `self.config.drop_metadata = True` and `self.config.drop_labels = False` if not. And if the user wants to have the `label` column alongside metadata columns, it can do so by passing `drop_labels = False` explicitely (in that scenario we have to check that the `label` column is not already present in metadata files). And maybe we can also improve the logging messages.\r\n\r\nI find it problematic that the current implementation drops labels in some scenarios even if `self.config.drop_labels = False`, and the user doesn't have control over this behavior.\r\n\r\nLet me know what you think." ]
1,656,933,800,000
1,657,895,843,000
1,657,895,064,000
CONTRIBUTOR
null
Will fix #4621. ImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file in a data directory). This happens because metadata files are collected inside the `analyze()` function regardless of the `drop_metadata` value, and then the following condition doesn't pass: https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/imagefolder/imagefolder.py#L167 So I suggest double-checking inside `analyze()` so that metadata files are not collected if they are not needed (and labels too, to be consistent). --- Also, I added a test to check that labels are inferred correctly from directory names in general (because we didn't have one) :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4622/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4622", "html_url": "https://github.com/huggingface/datasets/pull/4622", "diff_url": "https://github.com/huggingface/datasets/pull/4622.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4622.patch", "merged_at": 1657895064000 }
true
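A hedged sketch of the guard this PR describes: collect metadata files only when metadata is kept, and infer directory-name labels only when labels are kept. The function shape and variable names are illustrative, not the actual `imagefolder.py` source:

```python
def analyze(files, config, metadata_files, labels):
    for file in files:
        name = file.rsplit("/", 1)[-1]
        if not config.drop_metadata and name == "metadata.jsonl":
            metadata_files.add(file)
        elif not config.drop_labels:
            labels.add(file.rsplit("/", 2)[-2])  # parent directory name as the label
```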
https://api.github.com/repos/huggingface/datasets/issues/4621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4621/comments
https://api.github.com/repos/huggingface/datasets/issues/4621/events
https://github.com/huggingface/datasets/issues/4621
1,293,030,128
I_kwDODunzps5NEhLw
4,621
ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[]
1,656,933,704,000
1,657,895,064,000
1,657,895,064,000
CONTRIBUTOR
null
## Describe the bug If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir` or pass features manually (when there is a tool that can infer them automatically) doesn't look like a good idea to me either. ## Steps to reproduce the bug ### Clone an example dataset from the Hub ```bash git clone https://huggingface.co/datasets/nateraw/test-imagefolder-metadata ``` ### Try to load it ```python from datasets import load_dataset ds = load_dataset("test-imagefolder-metadata", drop_metadata=True, drop_labels=False) ``` or even just ```python ds = load_dataset("test-imagefolder-metadata", drop_metadata=True) ``` as `drop_labels=False` is a default value. ## Expected results A DatasetDict object with two features: `"image"` and `"label"`. ## Actual results ``` Traceback (most recent call last): File "/home/polina/workspace/datasets/debug.py", line 18, in <module> ds = load_dataset( File "/home/polina/workspace/datasets/src/datasets/load.py", line 1732, in load_dataset builder_instance.download_and_prepare( File "/home/polina/workspace/datasets/src/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/home/polina/workspace/datasets/src/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1218, in _prepare_split example = self.info.features.encode_example(record) File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1596, in encode_example return encode_nested_example(self, example) File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in encode_nested_example { File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in <dictcomp> { File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in zip_dict yield key, tuple(d[key] for d in dicts) File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in <genexpr> yield key, tuple(d[key] for d in dicts) KeyError: 'label' ``` ## Environment info `datasets` master branch - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.1 - Pandas version: 1.4.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4621/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/4621/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4620/comments
https://api.github.com/repos/huggingface/datasets/issues/4620/events
https://github.com/huggingface/datasets/issues/4620
1,292,797,878
I_kwDODunzps5NDoe2
4,620
Data type is not recognized when using datetime.time
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @mariosasko ", "Hi, thanks for reporting! I'm investigating the issue." ]
1,656,922,418,000
1,657,202,231,000
1,657,202,231,000
CONTRIBUTOR
null
## Describe the bug Creating a dataset from a pandas dataframe with `datetime.time` format generates an error. ## Steps to reproduce the bug ```python import pandas as pd from datetime import time from datasets import Dataset df = pd.DataFrame({"feature_name": [time(1, 1, 1)]}) dataset = Dataset.from_pandas(df) ``` ## Expected results The dataset should be created. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 823, in from_pandas return cls(table, info=info, split=split) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 679, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1315, in generate_from_arrow_type return Value(dtype=_arrow_to_datasets_dtype(pa_type)) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 83, in _arrow_to_datasets_dtype return f"time64[{arrow_type.unit}]" AttributeError: 'pyarrow.lib.DataType' object has no attribute 'unit' ``` ## Environment info - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4620/timeline
null
completed
null
null
false
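For context, the traceback above shows `_arrow_to_datasets_dtype` reaching for `.unit` on a type object that does not carry one. A defensive version of that mapping might look like the sketch below; this illustrates the failure mode and is not the actual `features.py` fix:

```python
import pyarrow as pa

def time_dtype_name(arrow_type: pa.DataType) -> str:
    # Only the concrete time classes expose `.unit`, so check before access.
    if isinstance(arrow_type, pa.Time64Type):
        return f"time64[{arrow_type.unit}]"
    if isinstance(arrow_type, pa.Time32Type):
        return f"time32[{arrow_type.unit}]"
    raise TypeError(f"not an Arrow time type: {arrow_type}")

print(time_dtype_name(pa.time64("us")))  # time64[us]
```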
https://api.github.com/repos/huggingface/datasets/issues/4619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4619/comments
https://api.github.com/repos/huggingface/datasets/issues/4619/events
https://github.com/huggingface/datasets/issues/4619
1,292,107,275
I_kwDODunzps5NA_4L
4,619
np arrays get turned into native lists
{ "login": "ZhaofengWu", "id": 11954789, "node_id": "MDQ6VXNlcjExOTU0Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/11954789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZhaofengWu", "html_url": "https://github.com/ZhaofengWu", "followers_url": "https://api.github.com/users/ZhaofengWu/followers", "following_url": "https://api.github.com/users/ZhaofengWu/following{/other_user}", "gists_url": "https://api.github.com/users/ZhaofengWu/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZhaofengWu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZhaofengWu/subscriptions", "organizations_url": "https://api.github.com/users/ZhaofengWu/orgs", "repos_url": "https://api.github.com/users/ZhaofengWu/repos", "events_url": "https://api.github.com/users/ZhaofengWu/events{/privacy}", "received_events_url": "https://api.github.com/users/ZhaofengWu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "If you add the line `dataset2.set_format('np')` before calling `dataset2[0]['tmp']` it should return `np.ndarray`.\r\nI believe internally it will not store it as a list, it is only returning a list when you index it.\r\n\r\n```\r\nIn [1]: import datasets, numpy as np\r\nIn [2]: dataset = datasets.load_dataset(\"glue\", \"mrpc\")[\"validation\"]\r\nIn [3]: dataset2 = dataset.map(lambda x: {\"tmp\": np.array([0.5])}, batched=False)\r\nIn [4]: dataset2[0][\"tmp\"]\r\nOut[4]: [0.5]\r\n\r\nIn [5]: dataset2.set_format('np')\r\n\r\nIn [6]: dataset2[0][\"tmp\"]\r\nOut[6]: array([0.5])\r\n```", "I see, thanks! Any idea if the default numpy → list conversion might cause precision loss?", "I'm not super familiar with our datasets works internally, but I think your `np` array will be stored in a `pyarrow` format, and then you take a view of this as a python array. In which case, I think the precision should be preserved." ]
1,656,784,497,000
1,656,880,027,000
null
NONE
null
## Describe the bug When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen? ## Steps to reproduce the bug ```python >>> import datasets, numpy as np >>> dataset = datasets.load_dataset("glue", "mrpc")["validation"] Reusing dataset glue (...) 100%|███████████████████████████████████████████████| 3/3 [00:00<00:00, 1360.61it/s] >>> dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False) 100%|██████████████████████████████████████████| 408/408 [00:00<00:00, 10819.97ex/s] >>> dataset2[0]["tmp"] [0.5] >>> type(dataset2[0]["tmp"]) <class 'list'> ``` ## Expected results `dataset2[0]["tmp"]` should be an `np.ndarray`. ## Actual results It's a list. ## Environment info - `datasets` version: 2.3.2 - Platform: mac, though I'm pretty sure it happens on a linux machine too - Python version: 3.9.7 - PyArrow version: 6.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4619/timeline
null
null
null
null
false
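Condensing the answer from the comments, and restricting the NumPy formatting to the new column so the text columns keep their plain Python types:

```python
import numpy as np
import datasets

dataset = datasets.load_dataset("glue", "mrpc")["validation"]
dataset = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False)

# Request NumPy output for "tmp" only; the other columns stay python-formatted.
dataset.set_format("np", columns=["tmp"], output_all_columns=True)
print(type(dataset[0]["tmp"]))  # <class 'numpy.ndarray'>
```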
https://api.github.com/repos/huggingface/datasets/issues/4618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4618/comments
https://api.github.com/repos/huggingface/datasets/issues/4618/events
https://github.com/huggingface/datasets/issues/4618
1,292,078,225
I_kwDODunzps5NA4yR
4,618
contribute data loading for object detection datasets with yolo data format
{ "login": "faizankshaikh", "id": 8406903, "node_id": "MDQ6VXNlcjg0MDY5MDM=", "avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/faizankshaikh", "html_url": "https://github.com/faizankshaikh", "followers_url": "https://api.github.com/users/faizankshaikh/followers", "following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}", "gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}", "starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions", "organizations_url": "https://api.github.com/users/faizankshaikh/orgs", "repos_url": "https://api.github.com/users/faizankshaikh/repos", "events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}", "received_events_url": "https://api.github.com/users/faizankshaikh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! The `imagefolder` script is already quite complex, so a standalone script sounds better. Also, I suggest we create an org on the Hub (e.g. `hf-loaders`) and store such scripts there for easier maintenance rather than having them as packaged modules (IMO only very generic loaders should be packaged). WDYT @lhoestq @albertvillanova @polinaeterna?", "@mariosasko sounds good to me!\r\n", "Thank you for the suggestion @mariosasko . I agree with the point, but I have a few doubts\r\n\r\n1. How would the user access the script if it's not a part of the core codebase?\r\n2. Could you direct me as to what will be the tasks I have to do to contribute to the code? As per my understanding, it would be like\r\n 1. Create a new org \"hf-loaders\" and add you (and more HF people) to the org\r\n 2. Add data loader script as a (model?)\r\n 3. Test it with a dataset on HF hub\r\n3. We should maybe brainstorm as to which public datasets have this format (YOLO type) and are the most important ones to test the script with. We can even add the datasets on HF Hub alongside the script", "1. Like this: `load_dataset(\"hf-loaders/yolo\", data_files=...)`\r\n2. The steps would be:\r\n 1. Create a new org `hf-community-loaders` (IMO a better name than \"hf-loaders\") and add me (as an admin)\r\n 2. Create a new dataset repo `yolo` and add the loading script to it (`yolo.py`)\r\n 3. Open a discussion to request our review\r\n4. I like this idea. Another option is to add snippets that describe how to load such datasets using the `yolo` loader." ]
1,656,775,319,000
1,658,412,644,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/2)). **Describe the solution you'd like** I wrote a [custom script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) to load a dataset that uses the YOLO data format. **Describe alternatives you've considered** The script can either be a standalone dataset builder, or a modified version of `ImageFolder`. **Additional context** I would be happy to contribute to this, but I would do it at a very slow pace (maybe a month or two) as I have my exams approaching 😄
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4618/timeline
null
null
null
null
false
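For readers unfamiliar with the format under discussion: a YOLO label file has one line per object, `class x_center y_center width height`, with the coordinates normalized to [0, 1]. A minimal parser (the output field names are illustrative, not part of any agreed-on loader):

```python
def parse_yolo_labels(label_path: str) -> list:
    objects = []
    with open(label_path, encoding="utf-8") as f:
        for line in f:
            class_id, x_c, y_c, w, h = line.split()
            objects.append({
                "class_id": int(class_id),
                "bbox": [float(x_c), float(y_c), float(w), float(h)],  # normalized
            })
    return objects
```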
https://api.github.com/repos/huggingface/datasets/issues/4615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4615/comments
https://api.github.com/repos/huggingface/datasets/issues/4615/events
https://github.com/huggingface/datasets/pull/4615
1,291,307,428
PR_kwDODunzps46tADt
4,615
Fix `embed_storage` on features inside lists/sequences
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,676,328,000
1,657,282,390,000
1,657,281,696,000
CONTRIBUTOR
null
Add a dedicated function for `embed_storage` to always preserve the embedded/casted arrays (and to have more control over `embed_storage` in general). Fix #4591 ~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ Done!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4615/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4615/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4615", "html_url": "https://github.com/huggingface/datasets/pull/4615", "diff_url": "https://github.com/huggingface/datasets/pull/4615.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4615.patch", "merged_at": 1657281695000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4614/comments
https://api.github.com/repos/huggingface/datasets/issues/4614/events
https://github.com/huggingface/datasets/pull/4614
1,291,218,020
PR_kwDODunzps46ssfw
4,614
Ensure ConcatenationTable.cast uses target_schema metadata
{ "login": "dtuit", "id": 8114067, "node_id": "MDQ6VXNlcjgxMTQwNjc=", "avatar_url": "https://avatars.githubusercontent.com/u/8114067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dtuit", "html_url": "https://github.com/dtuit", "followers_url": "https://api.github.com/users/dtuit/followers", "following_url": "https://api.github.com/users/dtuit/following{/other_user}", "gists_url": "https://api.github.com/users/dtuit/gists{/gist_id}", "starred_url": "https://api.github.com/users/dtuit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dtuit/subscriptions", "organizations_url": "https://api.github.com/users/dtuit/orgs", "repos_url": "https://api.github.com/users/dtuit/repos", "events_url": "https://api.github.com/users/dtuit/events{/privacy}", "received_events_url": "https://api.github.com/users/dtuit/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @lhoestq, Thanks for the detailed comment. I've tested the suggested approach and can confirm it works for the testcase outlined above! The PR is updated with the changes.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,670,928,000
1,658,238,525,000
1,658,237,784,000
CONTRIBUTOR
null
Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. This causes an issue when using `cast_column` and the underlying table is a `ConcatenationTable`. Code example of where the issue arises: ```python from datasets import Dataset, Image column1 = [0, 1] image_paths = ['/images/image1.jpg', '/images/image2.jpg'] ds = Dataset.from_dict({"column1": column1}) ds = ds.add_column("image", image_paths) ds.cast_column("image", Image()) # Fails here ``` Output ``` ... TypeError: Couldn't cast array of type string to {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4614/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4614/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4614", "html_url": "https://github.com/huggingface/datasets/pull/4614", "diff_url": "https://github.com/huggingface/datasets/pull/4614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4614.patch", "merged_at": 1658237784000 }
true
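One detail worth noting when reusing the repro above: `cast_column` returns a new dataset rather than mutating in place, so once the underlying bug is fixed the result still needs to be reassigned:

```python
ds = ds.cast_column("image", Image())  # keep the returned dataset; the call is not in-place
```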
https://api.github.com/repos/huggingface/datasets/issues/4613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4613/comments
https://api.github.com/repos/huggingface/datasets/issues/4613/events
https://github.com/huggingface/datasets/pull/4613
1,291,181,193
PR_kwDODunzps46skd6
4,613
Align/fix license metadata info
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thank you thank you! Let's merge and pray? 😱 ", "I just need to add `license_details` to the validator and yup we can merge" ]
1,656,669,050,000
1,656,680,037,000
1,656,679,367,000
MEMBER
null
fix bad "other-*" licenses and add the corresponding "license_details" when relevant
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4613/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4613/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4613", "html_url": "https://github.com/huggingface/datasets/pull/4613", "diff_url": "https://github.com/huggingface/datasets/pull/4613.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4613.patch", "merged_at": 1656679366000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4612/comments
https://api.github.com/repos/huggingface/datasets/issues/4612/events
https://github.com/huggingface/datasets/issues/4612
1,290,984,660
I_kwDODunzps5M8tzU
4,612
Release 2.3.0 broke custom iterable datasets
{ "login": "aapot", "id": 19529125, "node_id": "MDQ6VXNlcjE5NTI5MTI1", "avatar_url": "https://avatars.githubusercontent.com/u/19529125?v=4", "gravatar_id": "", "url": "https://api.github.com/users/aapot", "html_url": "https://github.com/aapot", "followers_url": "https://api.github.com/users/aapot/followers", "following_url": "https://api.github.com/users/aapot/following{/other_user}", "gists_url": "https://api.github.com/users/aapot/gists{/gist_id}", "starred_url": "https://api.github.com/users/aapot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aapot/subscriptions", "organizations_url": "https://api.github.com/users/aapot/orgs", "repos_url": "https://api.github.com/users/aapot/repos", "events_url": "https://api.github.com/users/aapot/events{/privacy}", "received_events_url": "https://api.github.com/users/aapot/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Apparently, `fsspec` does not allow access to attribute-based modules anymore, such as `fsspec.async`.\r\n\r\nHowever, this is a fairly simple fix:\r\n- Change the import to: `from fsspec import asyn`;\r\n- Change line 18 to: `asyn.iothread[0] = None`;\r\n- Change line 19 to `asyn.loop[0] = None`.", "Hi! I think it's easier to replace `import fsspec` with `import fsspec.asyn` and leave the rest unchanged. @gugarosa Are you interested in submitting a PR?", "Perfect, it is even better!\r\n\r\nJust submitted the PR: #4630.\r\n\r\nThank you!" ]
1,656,657,967,000
1,657,033,701,000
1,657,033,701,000
NONE
null
## Describe the bug Trying to iterate examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` in the 2.3.0 release. ## Steps to reproduce the bug ```python next(iter(custom_iterable_dataset)) ``` ## Expected results `next(iter(custom_iterable_dataset))` should return examples from the dataset ## Actual results ``` /usr/local/lib/python3.7/dist-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py in _set_fsspec_for_multiprocess() 16 See https://github.com/fsspec/gcsfs/issues/379 17 """ ---> 18 fsspec.asyn.iothread[0] = None 19 fsspec.asyn.loop[0] = None 20 AttributeError: module 'fsspec' has no attribute 'asyn' ``` ## Environment info - `datasets` version: 2.3.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4612/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4612/timeline
null
completed
null
null
false
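A sketch of the one-line import change proposed in the comments: import the submodule explicitly, since newer `fsspec` versions no longer expose it as a plain attribute (the docstring below is abbreviated):

```python
import fsspec.asyn  # explicit submodule import; `import fsspec` alone no longer suffices

def _set_fsspec_for_multiprocess() -> None:
    """Reset the cached fsspec event loop and IO thread so each worker creates its own."""
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```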
https://api.github.com/repos/huggingface/datasets/issues/4611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4611/comments
https://api.github.com/repos/huggingface/datasets/issues/4611/events
https://github.com/huggingface/datasets/pull/4611
1,290,940,874
PR_kwDODunzps46rxIX
4,611
Preserve member order by MockDownloadManager.iter_archive
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,654,500,000
1,656,694,751,000
1,656,694,108,000
MEMBER
null
Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which might not be the same order as in the original archive. See the issue in: - https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027 This PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4611/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4611", "html_url": "https://github.com/huggingface/datasets/pull/4611", "diff_url": "https://github.com/huggingface/datasets/pull/4611.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4611.patch", "merged_at": 1656694108000 }
true
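A rough sketch of the idea on a simplified `iter_archive`; sorting is just one way to pin the traversal order, and the actual PR may reproduce the archive's own member order differently:

```python
from pathlib import Path

def iter_archive(extracted_dir: Path):
    # Yield (relative_path, file_object) pairs in a deterministic order
    # rather than whatever rglob happens to return.
    for file_path in sorted(extracted_dir.rglob("*")):
        if file_path.is_file() and not file_path.name.startswith((".", "__")):
            yield file_path.relative_to(extracted_dir).as_posix(), file_path.open("rb")
```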
https://api.github.com/repos/huggingface/datasets/issues/4610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4610/comments
https://api.github.com/repos/huggingface/datasets/issues/4610/events
https://github.com/huggingface/datasets/issues/4610
1,290,603,827
I_kwDODunzps5M7Q0z
4,610
codeparrot/github-code failing to load
{ "login": "PyDataBlog", "id": 29863388, "node_id": "MDQ6VXNlcjI5ODYzMzg4", "avatar_url": "https://avatars.githubusercontent.com/u/29863388?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PyDataBlog", "html_url": "https://github.com/PyDataBlog", "followers_url": "https://api.github.com/users/PyDataBlog/followers", "following_url": "https://api.github.com/users/PyDataBlog/following{/other_user}", "gists_url": "https://api.github.com/users/PyDataBlog/gists{/gist_id}", "starred_url": "https://api.github.com/users/PyDataBlog/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PyDataBlog/subscriptions", "organizations_url": "https://api.github.com/users/PyDataBlog/orgs", "repos_url": "https://api.github.com/users/PyDataBlog/repos", "events_url": "https://api.github.com/users/PyDataBlog/events{/privacy}", "received_events_url": "https://api.github.com/users/PyDataBlog/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I believe the issue is in `codeparrot/github-code`. `base_path` param is missing - https://huggingface.co/datasets/codeparrot/github-code/blob/main/github-code.py#L169\r\n\r\nFunction definition has changed.\r\nhttps://github.com/huggingface/datasets/blob/0e1c629cfb9f9ba124537ba294a0ec451584da5f/src/datasets/data_files.py#L547\r\n\r\n@mariosasko could you please confirm my finding? And are there any changes that need to be done from my side?", "Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it", "> Good catch ! We recently did a breaking change in `get_patterns_in_dataset_repository`, I think we can revert it\n\nI can't wait for that releasee. Broke my application", "This simple workaround should fix: https://huggingface.co/datasets/codeparrot/github-code/discussions/2\r\n\r\n`get_patterns_in_dataset_repository` can treat whether `base_path=None`, so we just need to make sure that codeparrot/github-code `_split_generators` calls with such an argument.", "I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ? \r\n@lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?", "Actually I think it's just simpler to fix it in the dataset itself, let me open a PR\r\n\r\nEDIT: PR opened here: https://huggingface.co/datasets/codeparrot/github-code/discussions/3", "PR is merged, it's working now ! Closing this one :)", "> I am afraid your suggested change @gugarosa will break compatibility with older datasets versions that don't have `base_path` argument in `get_patterns_in_dataset_repository`, as a workaround while the issue gets resolved in `datasets` can you downgrade your datasets version to `<=2.1.0` ?\r\n> @lvwerra do you think we should adapt the script to check the datasets version before calling `get_patterns_in_dataset_repository`?\r\n\r\nYou are definitely right, sorry about it. I always keep forgetting that we need to keep in mind users from past versions, my bad." ]
1,656,620,688,000
1,657,031,053,000
1,657,012,796,000
NONE
null
## Describe the bug codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("codeparrot/github-code") ``` ## Expected results loaded dataset object ## Actual results ```python [3]: dataset = load_dataset("codeparrot/github-code") No config specified, defaulting to: github-code/all-all Downloading and preparing dataset github-code/all-all to /home/bebr/.cache/huggingface/datasets/codeparrot___github-code/all-all/0.0.0/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [3], in <cell line: 1>() ----> 1 dataset = load_dataset("codeparrot/github-code") File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1678 # Download and prepare data -> 1679 builder_instance.download_and_prepare( 1680 download_config=download_config, 1681 download_mode=download_mode, 1682 ignore_verifications=ignore_verifications, 1683 try_from_hf_gcs=try_from_hf_gcs, 1684 use_auth_token=use_auth_token, 1685 ) 1687 # Build dataset for splits 1688 keep_in_memory = ( 1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1690 ) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 702 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 703 if not downloaded_from_gcs: --> 704 self._download_and_prepare( 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:1221, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos) 1220 def _download_and_prepare(self, dl_manager, verify_infos): -> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 769 split_dict = SplitDict(dataset_name=self.name) 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 773 # Checksums verification 774 if verify_infos and dl_manager.record_checksums: File ~/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--github-code/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817/github-code.py:169, in GithubCode._split_generators(self, dl_manager) 162 def _split_generators(self, dl_manager): 164 hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info( 165 _REPO_NAME, 166 timeout=100.0, 167 ) --> 169 patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info) 170 data_files = datasets.data_files.DataFilesDict.from_hf_repo( 171 patterns, 172 dataset_info=hfh_dataset_info, 173 ) 175 files = dl_manager.download_and_extract(data_files["train"]) TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35 - Python version: 3.10.5 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4610/timeline
null
completed
null
null
false
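The comment thread in the record above converges on two workarounds for the `get_patterns_in_dataset_repository` signature change: downgrade `datasets`, or have the script adapt to the installed version. A minimal sketch of such a version check, assuming (as stated in the thread) that newer versions accept `base_path=None` to mean the repository root:

```python
import datasets
from huggingface_hub import HfApi
from packaging import version

# Fetch the repo info the same way the github-code script does in the traceback above.
hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info(
    "codeparrot/github-code", timeout=100.0
)

if version.parse(datasets.__version__) <= version.parse("2.1.0"):
    # Older signature: no `base_path` parameter.
    patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info)
else:
    # Newer signature requires `base_path`; per the thread, None is handled
    # and means "look at the repository root".
    patterns = datasets.data_files.get_patterns_in_dataset_repository(
        hfh_dataset_info, base_path=None
    )
```

The maintainers ultimately fixed this inside the dataset script itself (see the linked discussion), so a check like this is only needed by scripts that must support both library versions.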
https://api.github.com/repos/huggingface/datasets/issues/4609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4609/comments
https://api.github.com/repos/huggingface/datasets/issues/4609/events
https://github.com/huggingface/datasets/issues/4609
1,290,392,083
I_kwDODunzps5M6dIT
4,609
librispeech dataset has to download the whole subset when specifying the split to use
{ "login": "sunhaozhepy", "id": 73462159, "node_id": "MDQ6VXNlcjczNDYyMTU5", "avatar_url": "https://avatars.githubusercontent.com/u/73462159?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sunhaozhepy", "html_url": "https://github.com/sunhaozhepy", "followers_url": "https://api.github.com/users/sunhaozhepy/followers", "following_url": "https://api.github.com/users/sunhaozhepy/following{/other_user}", "gists_url": "https://api.github.com/users/sunhaozhepy/gists{/gist_id}", "starred_url": "https://api.github.com/users/sunhaozhepy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sunhaozhepy/subscriptions", "organizations_url": "https://api.github.com/users/sunhaozhepy/orgs", "repos_url": "https://api.github.com/users/sunhaozhepy/repos", "events_url": "https://api.github.com/users/sunhaozhepy/events{/privacy}", "received_events_url": "https://api.github.com/users/sunhaozhepy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi! You can use streaming to fetch only a subset of the data:\r\n```python\r\nraw_dataset = load_dataset(\"librispeech_asr\", \"clean\", split=\"train.100\", streaming=True)\r\n```\r\nAlso, we plan to make it possible to download a particular split in the non-streaming mode, but this task is not easy due to how our dataset scripts are structured.", "Hi,\r\n\r\nThat's a great help. Thank you very much." ]
1,656,607,104,000
1,657,662,272,000
1,657,662,272,000
NONE
null
## Describe the bug The librispeech dataset has to download the whole subset even when a single split is specified. ## Steps to reproduce the bug see below # Sample code to reproduce the bug ``` !pip install datasets from datasets import load_dataset raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100") ``` ## Expected results Only the split "train.clean.100" is downloaded. ## Actual results All four splits in the "clean" subset are downloaded. ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4609/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4609/timeline
null
completed
null
null
false
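The streaming workaround from the comments above can also be combined with `IterableDataset.take` to materialize only a handful of examples; a minimal sketch (the `file` and `text` column names are taken from the dataset card and are an assumption here):

```python
from datasets import load_dataset

# Streaming avoids downloading the other splits of the "clean" subset up front.
raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100", streaming=True)

# Lazily pull only the first few examples instead of the full split.
for example in raw_dataset.take(3):
    print(example["file"], example["text"])
```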
https://api.github.com/repos/huggingface/datasets/issues/4608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4608/comments
https://api.github.com/repos/huggingface/datasets/issues/4608/events
https://github.com/huggingface/datasets/pull/4608
1,290,298,002
PR_kwDODunzps46pm9A
4,608
Fix xisfile, xgetsize, xisdir, xlistdir in private repo
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added tests for xisfile, xgetsize, xlistdir and xglob for private repos, and also tests for xwalk that was untested" ]
1,656,602,601,000
1,657,111,559,000
1,657,110,859,000
MEMBER
null
`xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip://a.txt::https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. However, it's not working when passing a simple file URL `https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`, because the authentication headers are not passed correctly in this case. This is causing dataset streaming to fail in private parquet repositories, as noted in https://github.com/huggingface/datasets/issues/4605 I fixed `xisfile` and the other functions that behave the same way: `xgetsize`, `xisdir` and `xlistdir` TODO: - [x] tests fix https://github.com/huggingface/datasets/issues/4605
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4608/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4608", "html_url": "https://github.com/huggingface/datasets/pull/4608", "diff_url": "https://github.com/huggingface/datasets/pull/4608.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4608.patch", "merged_at": 1657110859000 }
true
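For context, a minimal sketch of the kind of call this PR unblocks: streaming a parquet dataset from a private repository, where the plain `resolve/` URLs must carry authentication headers. The repository name and token below are placeholders, not real values:

```python
from datasets import load_dataset

ds = load_dataset(
    "username/private_parquet_repo",  # hypothetical private repo
    split="train",
    streaming=True,
    use_auth_token="hf_xxx",  # placeholder token, needed so resolve/ URLs are authenticated
)
print(next(iter(ds)))
```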
https://api.github.com/repos/huggingface/datasets/issues/4607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4607/comments
https://api.github.com/repos/huggingface/datasets/issues/4607/events
https://github.com/huggingface/datasets/pull/4607
1,290,171,941
PR_kwDODunzps46pLnd
4,607
Align more metadata with other repo types (models,spaces)
{ "login": "julien-c", "id": 326577, "node_id": "MDQ6VXNlcjMyNjU3Nw==", "avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/julien-c", "html_url": "https://github.com/julien-c", "followers_url": "https://api.github.com/users/julien-c/followers", "following_url": "https://api.github.com/users/julien-c/following{/other_user}", "gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}", "starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/julien-c/subscriptions", "organizations_url": "https://api.github.com/users/julien-c/orgs", "repos_url": "https://api.github.com/users/julien-c/repos", "events_url": "https://api.github.com/users/julien-c/events{/privacy}", "received_events_url": "https://api.github.com/users/julien-c/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I just set a default value (None) for the deprecated licenses and languages fields, which should fix most of the CI failures.\r\n\r\nNote that the CI should still be red because you edited many dataset cards and they're still missing some content - but this is unrelated to this PR so we can ignore these failures", "thanks so much @lhoestq !!", "There's also a follow-up PR to this one, in #4613 – I would suggest to merge all of them at the same time and hope not too many things are broken 🙀 🙀 ", "Alright merging this one now, let's see how broken things get" ]
1,656,597,132,000
1,656,676,837,000
1,656,676,154,000
MEMBER
null
see also associated PR on the `datasets-tagging` Space: https://huggingface.co/spaces/huggingface/datasets-tagging/discussions/2 (to merge after this one is merged)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4607/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4607/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4607", "html_url": "https://github.com/huggingface/datasets/pull/4607", "diff_url": "https://github.com/huggingface/datasets/pull/4607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4607.patch", "merged_at": 1656676154000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4606/comments
https://api.github.com/repos/huggingface/datasets/issues/4606/events
https://github.com/huggingface/datasets/issues/4606
1,290,083,534
I_kwDODunzps5M5RzO
4,606
evaluation result changes after `datasets` version change
{ "login": "thnkinbtfly", "id": 70014488, "node_id": "MDQ6VXNlcjcwMDE0NDg4", "avatar_url": "https://avatars.githubusercontent.com/u/70014488?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thnkinbtfly", "html_url": "https://github.com/thnkinbtfly", "followers_url": "https://api.github.com/users/thnkinbtfly/followers", "following_url": "https://api.github.com/users/thnkinbtfly/following{/other_user}", "gists_url": "https://api.github.com/users/thnkinbtfly/gists{/gist_id}", "starred_url": "https://api.github.com/users/thnkinbtfly/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thnkinbtfly/subscriptions", "organizations_url": "https://api.github.com/users/thnkinbtfly/orgs", "repos_url": "https://api.github.com/users/thnkinbtfly/repos", "events_url": "https://api.github.com/users/thnkinbtfly/events{/privacy}", "received_events_url": "https://api.github.com/users/thnkinbtfly/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi! The GH/no-namespace datasets versioning is synced with the version of the `datasets` lib, which means that the `wikiann` script was modified between the two compared versions. In this scenario, you can ensure reproducibility by pinning the script version, which is done by passing `revision=\"x.y.z\"` (e.g. `revision=\"2.2.0\"`) to `load_dataset.`\r\n" ]
1,656,593,006,000
1,656,956,852,000
null
NONE
null
## Describe the bug The evaluation result changes after a `datasets` version change. ## Steps to reproduce the bug 1. Train a model on WikiAnn 2. Reload the checkpoint -> the test accuracy becomes the same as the eval accuracy 3. This behavior disappears after downgrading `datasets` https://colab.research.google.com/drive/1kYz7-aZRGdayaq-gDTt30tyEgsKlpYOw?usp=sharing ## Expected results The evaluation result shouldn't change when the `datasets` version changes. ## Actual results The evaluation result changes when the `datasets` version changes. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: colab - Python version: 3.7.13 - PyArrow version: 6.0.1 Q. How could the evaluation result change with the `datasets` version?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4606/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4606/timeline
null
null
null
null
false
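A minimal sketch of the reproducibility fix suggested in the comment above, pinning the `wikiann` script to the `datasets` 2.2.0 revision (the `"en"` config is an arbitrary choice for illustration):

```python
from datasets import load_dataset

# Pinning `revision` freezes the dataset script, so evaluation results
# stay stable across `datasets` library upgrades.
ds = load_dataset("wikiann", "en", revision="2.2.0")
```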
https://api.github.com/repos/huggingface/datasets/issues/4605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4605/comments
https://api.github.com/repos/huggingface/datasets/issues/4605/events
https://github.com/huggingface/datasets/issues/4605
1,290,058,970
I_kwDODunzps5M5Lza
4,605
Dataset Viewer issue for boris/gis_filtered
{ "login": "WaterKnight1998", "id": 41203448, "node_id": "MDQ6VXNlcjQxMjAzNDQ4", "avatar_url": "https://avatars.githubusercontent.com/u/41203448?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WaterKnight1998", "html_url": "https://github.com/WaterKnight1998", "followers_url": "https://api.github.com/users/WaterKnight1998/followers", "following_url": "https://api.github.com/users/WaterKnight1998/following{/other_user}", "gists_url": "https://api.github.com/users/WaterKnight1998/gists{/gist_id}", "starred_url": "https://api.github.com/users/WaterKnight1998/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WaterKnight1998/subscriptions", "organizations_url": "https://api.github.com/users/WaterKnight1998/orgs", "repos_url": "https://api.github.com/users/WaterKnight1998/repos", "events_url": "https://api.github.com/users/WaterKnight1998/events{/privacy}", "received_events_url": "https://api.github.com/users/WaterKnight1998/received_events", "type": "User", "site_admin": false }
[ { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Yes, this dataset is \"gated\": you first have to go to https://huggingface.co/datasets/boris/gis_filtered and click \"Access repository\" (if you accept to share your contact information with the repository authors).", "I already did that, it returns error when using streaming", "Oh, sorry, I misread. Looking at it. Maybe @huggingface/datasets or @SBrandeis ", "I could reproduce the error, even though I provided my token and accepted the gate form. It looks like an error from `datasets`", "This is indeed a bug in `datasets`. Parquet datasets in gated/private repositories can't be streamed properly, which caused the viewer to fail. I opened a PR at https://github.com/huggingface/datasets/pull/4608" ]
1,656,591,814,000
1,657,110,859,000
1,657,110,859,000
NONE
null
### Link https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train ### Description When I try to access this from the website I get this error: Status code: 400 Exception: ClientResponseError Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/boris/gis_filtered/resolve/80b805053ce61d4eb487b6b8d9095d775c2c466e/data/train/0000.parquet') If I try to load with code I also get the same issue: ```python dataset2_train=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"],split="train",streaming=True) dataset2_validation=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"], split="validation",streaming=True) ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4605/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4605/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4604/comments
https://api.github.com/repos/huggingface/datasets/issues/4604/events
https://github.com/huggingface/datasets/pull/4604
1,289,963,962
PR_kwDODunzps46oeju
4,604
Update CI Windows orb
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,586,831,000
1,656,595,991,000
1,656,595,346,000
MEMBER
null
This PR tries to fix recurrent random CI failures on Windows. After 2 runs, it seems to have fixed the issue. Fix #4603.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4604/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4604/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4604", "html_url": "https://github.com/huggingface/datasets/pull/4604", "diff_url": "https://github.com/huggingface/datasets/pull/4604.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4604.patch", "merged_at": 1656595345000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4603/comments
https://api.github.com/repos/huggingface/datasets/issues/4603/events
https://github.com/huggingface/datasets/issues/4603
1,289,963,331
I_kwDODunzps5M40dD
4,603
CI fails recurrently and randomly on Windows
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,656,586,798,000
1,656,595,345,000
1,656,595,345,000
MEMBER
null
As reported by @lhoestq, the Windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install. In particular it seems that building the wheels fails. Here is an example of logs: ``` Building wheel for seqeval (setup.py): started Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6' No parent package detected, impossible to derive `name` running bdist_wheel running build running build_py package init file 'seqeval\__init__.py' not found (or not a regular file) package init file 'seqeval\metrics\__init__.py' not found (or not a regular file) C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, installing to build\bdist.win-amd64\wheel running install running install_lib warning: install_lib: 'build\lib' does not exist -- no Python modules to install running install_egg_info running egg_info creating UNKNOWN.egg-info writing UNKNOWN.egg-info\PKG-INFO writing dependency_links to UNKNOWN.egg-info\dependency_links.txt writing top-level names to UNKNOWN.egg-info\top_level.txt writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' reading manifest file 'UNKNOWN.egg-info\SOURCES.txt' writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info running install_scripts creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it adding 'UNKNOWN-0.0.0.dist-info/METADATA' adding 'UNKNOWN-0.0.0.dist-info/WHEEL' adding 'UNKNOWN-0.0.0.dist-info/top_level.txt' adding 'UNKNOWN-0.0.0.dist-info/RECORD' removing build\bdist.win-amd64\wheel Building wheel for seqeval (setup.py): finished with status 'done' Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1 Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4603/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4603/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4602/comments
https://api.github.com/repos/huggingface/datasets/issues/4602/events
https://github.com/huggingface/datasets/pull/4602
1,289,950,379
PR_kwDODunzps46obqi
4,602
Upgrade setuptools in windows CI
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,586,121,000
1,656,593,858,000
1,656,593,177,000
MEMBER
null
The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install. In particular it seems that building the wheels fail. Here is an example of logs ``` Building wheel for seqeval (setup.py): started Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6' No parent package detected, impossible to derive `name` running bdist_wheel running build running build_py package init file 'seqeval\__init__.py' not found (or not a regular file) package init file 'seqeval\metrics\__init__.py' not found (or not a regular file) C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, installing to build\bdist.win-amd64\wheel running install running install_lib warning: install_lib: 'build\lib' does not exist -- no Python modules to install running install_egg_info running egg_info creating UNKNOWN.egg-info writing UNKNOWN.egg-info\PKG-INFO writing dependency_links to UNKNOWN.egg-info\dependency_links.txt writing top-level names to UNKNOWN.egg-info\top_level.txt writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' reading manifest file 'UNKNOWN.egg-info\SOURCES.txt' writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info running install_scripts creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it adding 'UNKNOWN-0.0.0.dist-info/METADATA' adding 'UNKNOWN-0.0.0.dist-info/WHEEL' adding 'UNKNOWN-0.0.0.dist-info/top_level.txt' adding 'UNKNOWN-0.0.0.dist-info/RECORD' removing build\bdist.win-amd64\wheel Building wheel for seqeval (setup.py): finished with status 'done' Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1 Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN' ``` hopefully this fixes the issue
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4602/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4602/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4602", "html_url": "https://github.com/huggingface/datasets/pull/4602", "diff_url": "https://github.com/huggingface/datasets/pull/4602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4602.patch", "merged_at": null }
true
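The hypothesis in the record above is that the wheel builds fail because of stale build tooling. A minimal sketch of the kind of environment step such a fix amounts to, expressed in Python for portability (the actual change lives in the CI config, which is not shown in this record):

```python
import subprocess
import sys

# Upgrade the build toolchain before installing dependencies, so sdists like
# seqeval produce a proper wheel instead of the invalid "UNKNOWN" wheel above.
subprocess.run(
    [sys.executable, "-m", "pip", "install", "--upgrade", "pip", "setuptools", "wheel"],
    check=True,
)
```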
https://api.github.com/repos/huggingface/datasets/issues/4601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4601/comments
https://api.github.com/repos/huggingface/datasets/issues/4601/events
https://github.com/huggingface/datasets/pull/4601
1,289,924,715
PR_kwDODunzps46oWF8
4,601
Upgrade pip in WIN CI
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "It failed terribly" ]
1,656,584,742,000
1,656,586,465,000
1,656,585,818,000
MEMBER
null
The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install. In particular it seems that building the wheels fail. Here is an example of logs ``` Building wheel for seqeval (setup.py): started Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6' No parent package detected, impossible to derive `name` running bdist_wheel running build running build_py package init file 'seqeval\__init__.py' not found (or not a regular file) package init file 'seqeval\metrics\__init__.py' not found (or not a regular file) C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, installing to build\bdist.win-amd64\wheel running install running install_lib warning: install_lib: 'build\lib' does not exist -- no Python modules to install running install_egg_info running egg_info creating UNKNOWN.egg-info writing UNKNOWN.egg-info\PKG-INFO writing dependency_links to UNKNOWN.egg-info\dependency_links.txt writing top-level names to UNKNOWN.egg-info\top_level.txt writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' reading manifest file 'UNKNOWN.egg-info\SOURCES.txt' writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info running install_scripts creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it adding 'UNKNOWN-0.0.0.dist-info/METADATA' adding 'UNKNOWN-0.0.0.dist-info/WHEEL' adding 'UNKNOWN-0.0.0.dist-info/top_level.txt' adding 'UNKNOWN-0.0.0.dist-info/RECORD' removing build\bdist.win-amd64\wheel Building wheel for seqeval (setup.py): finished with status 'done' Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1 Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN' ``` I tried to update pip and re-run the CI several times and I couldn't re-experience this issue for now, so I think upgrading pip may solve the issue
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4601/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4601/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4601", "html_url": "https://github.com/huggingface/datasets/pull/4601", "diff_url": "https://github.com/huggingface/datasets/pull/4601.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4601.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4600/comments
https://api.github.com/repos/huggingface/datasets/issues/4600/events
https://github.com/huggingface/datasets/pull/4600
1,289,177,042
PR_kwDODunzps46l3P1
4,600
Remove multiple config section
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "repos_url": "https://api.github.com/users/stevhliu/repos", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,529,761,000
1,656,956,480,000
1,656,955,781,000
MEMBER
null
This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4600/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4600", "html_url": "https://github.com/huggingface/datasets/pull/4600", "diff_url": "https://github.com/huggingface/datasets/pull/4600.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4600.patch", "merged_at": 1656955781000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4599
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4599/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4599/comments
https://api.github.com/repos/huggingface/datasets/issues/4599/events
https://github.com/huggingface/datasets/pull/4599
1,288,849,933
PR_kwDODunzps46kvfC
4,599
Smooth-BLEU bug fixed
{ "login": "Aktsvigun", "id": 36672861, "node_id": "MDQ6VXNlcjM2NjcyODYx", "avatar_url": "https://avatars.githubusercontent.com/u/36672861?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Aktsvigun", "html_url": "https://github.com/Aktsvigun", "followers_url": "https://api.github.com/users/Aktsvigun/followers", "following_url": "https://api.github.com/users/Aktsvigun/following{/other_user}", "gists_url": "https://api.github.com/users/Aktsvigun/gists{/gist_id}", "starred_url": "https://api.github.com/users/Aktsvigun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Aktsvigun/subscriptions", "organizations_url": "https://api.github.com/users/Aktsvigun/orgs", "repos_url": "https://api.github.com/users/Aktsvigun/repos", "events_url": "https://api.github.com/users/Aktsvigun/events{/privacy}", "received_events_url": "https://api.github.com/users/Aktsvigun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
1,656,514,302,000
1,657,213,482,000
null
NONE
null
Hi, the current implementation of smooth-BLEU contains a bug: it smooths unigrams as well. Consequently, when the reference and the translation consist of totally different tokens, it still returns a non-zero value (please see the attached image). This, however, contradicts the source paper that proposed smooth-BLEU _(Chin-Yew Lin, Franz Josef Och. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. COLING 2004.)_ : > Add one count to the n-gram hit and total ngram count for n > 1. Therefore, for candidate translations with less than n words, they can still get a positive smoothed BLEU score from shorter n-gram matches; however if nothing matches then they will get zero scores. This pull request aims at fixing this bug. I made a pull request in the target repository `tensorflow/nmt`, which implements this script, but the last commit there dates from 19.02.2019 and I doubt it will be fixed promptly. Yet this bug is critical, for instance for summarization datasets with short summaries (e.g. AESLC), since smoothing needs to be applied there. Therefore, the easiest solution I found is to fork the repo and download this script directly from the fixed fork. Kind regards, Akim Tsvigun <img width="516" alt="Screenshot 2022-06-29 at 17 49 27" src="https://user-images.githubusercontent.com/36672861/176466935-ac579e6d-6a93-4111-ab41-9b33056e7d47.png">
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4599/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4599/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4599", "html_url": "https://github.com/huggingface/datasets/pull/4599", "diff_url": "https://github.com/huggingface/datasets/pull/4599.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4599.patch", "merged_at": null }
true
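For reference, a minimal sketch of the smoothing rule the PR above appeals to (Lin & Och, 2004): add-one smoothing is applied to the n-gram hit and total counts only for n > 1, so a candidate sharing no tokens with the reference still scores zero. This illustrates the rule itself, not the `tensorflow/nmt` script being patched:

```python
import math
from collections import Counter

def _ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def smoothed_bleu(reference, hypothesis, max_n=4):
    log_precisions = []
    for n in range(1, max_n + 1):
        hyp_counts = Counter(_ngrams(hypothesis, n))
        ref_counts = Counter(_ngrams(reference, n))
        hits = sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
        total = sum(hyp_counts.values())
        if n > 1:  # smooth higher-order n-grams only, per the ORANGE paper
            hits += 1
            total += 1
        if hits == 0 or total == 0:
            return 0.0  # nothing matches at the unigram level -> zero score
        log_precisions.append(math.log(hits / total))
    brevity_penalty = min(1.0, math.exp(1 - len(reference) / max(len(hypothesis), 1)))
    return brevity_penalty * math.exp(sum(log_precisions) / max_n)

# Disjoint token sets now yield 0.0 instead of a spurious positive score.
print(smoothed_bleu("a b c d".split(), "e f g h".split()))  # 0.0
```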
https://api.github.com/repos/huggingface/datasets/issues/4598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4598/comments
https://api.github.com/repos/huggingface/datasets/issues/4598/events
https://github.com/huggingface/datasets/pull/4598
1,288,774,514
PR_kwDODunzps46kfOS
4,598
Host financial_phrasebank data on the Hub
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,511,171,000
1,656,668,474,000
1,656,667,776,000
MEMBER
null
Fix #4597.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4598/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4598", "html_url": "https://github.com/huggingface/datasets/pull/4598", "diff_url": "https://github.com/huggingface/datasets/pull/4598.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4598.patch", "merged_at": 1656667776000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4597/comments
https://api.github.com/repos/huggingface/datasets/issues/4597/events
https://github.com/huggingface/datasets/issues/4597
1,288,672,007
I_kwDODunzps5Mz5MH
4,597
Streaming issue for financial_phrasebank
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 4069435429, "node_id": "LA_kwDODunzps7yjqgl", "url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive", "name": "hosted-on-google-drive", "color": "8B51EF", "default": false, "description": "" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @huggingface/datasets: it seems like https://www.researchgate.net/ is flaky for datasets hosting (I put the \"hosted-on-google-drive\" tag since it's the same kind of issue I think)", "Let's see if their license allows hosting their data on the Hub.", "License is Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported (CC BY-NC-SA 3.0).\r\n\r\nWe can host their data on the Hub." ]
1,656,506,743,000
1,656,667,776,000
1,656,667,776,000
MEMBER
null
### Link https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train ### Description As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dataset: ``` Server error Status code: 400 Exception: Exception Message: Give up after 5 attempts with ConnectionError ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4597/timeline
null
completed
null
null
false
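Once the data is hosted on the Hub (the fix tracked above), the originally failing call should work as a plain streaming load; a minimal sketch using the config from the reported viewer URL:

```python
from datasets import load_dataset

ds = load_dataset(
    "financial_phrasebank", "sentences_allagree", split="train", streaming=True
)
print(next(iter(ds)))
```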
https://api.github.com/repos/huggingface/datasets/issues/4596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4596/comments
https://api.github.com/repos/huggingface/datasets/issues/4596/events
https://github.com/huggingface/datasets/issues/4596
1,288,381,735
I_kwDODunzps5MyyUn
4,596
Dataset Viewer issue for universal_dependencies
{ "login": "Jordy-VL", "id": 16034009, "node_id": "MDQ6VXNlcjE2MDM0MDA5", "avatar_url": "https://avatars.githubusercontent.com/u/16034009?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jordy-VL", "html_url": "https://github.com/Jordy-VL", "followers_url": "https://api.github.com/users/Jordy-VL/followers", "following_url": "https://api.github.com/users/Jordy-VL/following{/other_user}", "gists_url": "https://api.github.com/users/Jordy-VL/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jordy-VL/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jordy-VL/subscriptions", "organizations_url": "https://api.github.com/users/Jordy-VL/orgs", "repos_url": "https://api.github.com/users/Jordy-VL/repos", "events_url": "https://api.github.com/users/Jordy-VL/events{/privacy}", "received_events_url": "https://api.github.com/users/Jordy-VL/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks, looking at it!" ]
1,656,492,629,000
1,656,493,188,000
null
NONE
null
### Link https://huggingface.co/datasets/universal_dependencies ### Description invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0 ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4596/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4596/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4595/comments
https://api.github.com/repos/huggingface/datasets/issues/4595/events
https://github.com/huggingface/datasets/issues/4595
1,288,275,976
I_kwDODunzps5MyYgI
4,595
Dataset Viewer issue with False positive PII redaction
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d’écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/cakiki/rosetta-code/discussions\r\n", "This was indeed a scraping issue which I assumed was a display issue; sorry about that!" ]
1,656,486,957,000
1,656,491,381,000
1,656,491,269,000
CONTRIBUTOR
null
### Link https://huggingface.co/datasets/cakiki/rosetta-code ### Description Hello, I just noticed an entry being redacted that shouldn't have been: `RootMeanSquare@Range[10]` is being displayed as `[email protected][10]` ### Owner _No response_
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4595/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4594/comments
https://api.github.com/repos/huggingface/datasets/issues/4594/events
https://github.com/huggingface/datasets/issues/4594
1,288,070,023
I_kwDODunzps5MxmOH
4,594
load_from_disk suggests incorrect fix when used to load DatasetDict
{ "login": "dvsth", "id": 11157811, "node_id": "MDQ6VXNlcjExMTU3ODEx", "avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dvsth", "html_url": "https://github.com/dvsth", "followers_url": "https://api.github.com/users/dvsth/followers", "following_url": "https://api.github.com/users/dvsth/following{/other_user}", "gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}", "starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvsth/subscriptions", "organizations_url": "https://api.github.com/users/dvsth/orgs", "repos_url": "https://api.github.com/users/dvsth/repos", "events_url": "https://api.github.com/users/dvsth/events{/privacy}", "received_events_url": "https://api.github.com/users/dvsth/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,656,466,801,000
1,656,475,424,000
1,656,475,424,000
NONE
null
Edit: Please feel free to remove this issue. The problem was not the error message but the fact that `DatasetDict.load_from_disk` does not support loading nested splits, i.e. if one of the splits is itself a `DatasetDict`. If nesting splits is an antipattern, perhaps the `load_from_disk` function could emit a warning indicating that?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4594/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4594/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4593
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4593/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4593/comments
https://api.github.com/repos/huggingface/datasets/issues/4593/events
https://github.com/huggingface/datasets/pull/4593
1,288,067,699
PR_kwDODunzps46iIkn
4,593
Fix error message when using load_from_disk to load DatasetDict
{ "login": "dvsth", "id": 11157811, "node_id": "MDQ6VXNlcjExMTU3ODEx", "avatar_url": "https://avatars.githubusercontent.com/u/11157811?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dvsth", "html_url": "https://github.com/dvsth", "followers_url": "https://api.github.com/users/dvsth/followers", "following_url": "https://api.github.com/users/dvsth/following{/other_user}", "gists_url": "https://api.github.com/users/dvsth/gists{/gist_id}", "starred_url": "https://api.github.com/users/dvsth/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dvsth/subscriptions", "organizations_url": "https://api.github.com/users/dvsth/orgs", "repos_url": "https://api.github.com/users/dvsth/repos", "events_url": "https://api.github.com/users/dvsth/events{/privacy}", "received_events_url": "https://api.github.com/users/dvsth/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,656,466,467,000
1,656,475,319,000
1,656,475,299,000
NONE
null
Issue #4594 Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error. Fix: The appropriate function which should be suggested instead is `datasets.dataset_dict.load_from_disk`. Changes: Change the suggestion to say "Please use `datasets.dataset_dict.load_from_disk` instead."
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4593/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4593/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4593", "html_url": "https://github.com/huggingface/datasets/pull/4593", "diff_url": "https://github.com/huggingface/datasets/pull/4593.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4593.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4592/comments
https://api.github.com/repos/huggingface/datasets/issues/4592/events
https://github.com/huggingface/datasets/issues/4592
1,288,029,377
I_kwDODunzps5MxcTB
4,592
Issue with jalFaizy/detect_chess_pieces when running datasets-cli test
{ "login": "faizankshaikh", "id": 8406903, "node_id": "MDQ6VXNlcjg0MDY5MDM=", "avatar_url": "https://avatars.githubusercontent.com/u/8406903?v=4", "gravatar_id": "", "url": "https://api.github.com/users/faizankshaikh", "html_url": "https://github.com/faizankshaikh", "followers_url": "https://api.github.com/users/faizankshaikh/followers", "following_url": "https://api.github.com/users/faizankshaikh/following{/other_user}", "gists_url": "https://api.github.com/users/faizankshaikh/gists{/gist_id}", "starred_url": "https://api.github.com/users/faizankshaikh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/faizankshaikh/subscriptions", "organizations_url": "https://api.github.com/users/faizankshaikh/orgs", "repos_url": "https://api.github.com/users/faizankshaikh/repos", "events_url": "https://api.github.com/users/faizankshaikh/events{/privacy}", "received_events_url": "https://api.github.com/users/faizankshaikh/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @faizankshaikh\r\n\r\nPlease note that we have recently launched the Community feature, specifically targeted to create Discussions (about issues/questions/asking-for-help) on each Dataset on the Hub:\r\n- Blog post: https://huggingface.co/blog/community-update\r\n- Docs: https://huggingface.co/docs/hub/repositories-pull-requests-discussions\r\n\r\nThe Discussion tab for your \"jalFaizy/detect_chess_pieces\" dataset is here: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions\r\nYou can use it to ask for help by pinging the Datasets maintainers: see our docs here: https://huggingface.co/docs/datasets/master/en/share#ask-for-a-help-and-reviews\r\n\r\nI'm transferring this discussion to your Discussion tab and trying to address it: https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/1", "Thank you @albertvillanova , I will keep that in mind.\r\n\r\nJust a quick note - I posted the issue on Github because the dataset viewer suggested me to \"open an issue for direct support\". Maybe it can be updated with your suggestion\r\n\r\n![image](https://user-images.githubusercontent.com/8406903/176397633-7b077d81-2044-4487-b58e-6346b05be5cf.png)\r\n\r\n\r\n", "Thank you pointing this out: yes, definitely, we should fix the error message. We are working on this." ]
1,656,461,754,000
1,656,498,603,000
1,656,488,967,000
NONE
null
### Link https://huggingface.co/datasets/jalFaizy/detect_chess_pieces ### Description I am trying to write an appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) When I run the command `$ datasets-cli test "D:\workspace\HF\detect_chess_pieces" --save_infos --all_configs`, it gives the following error: ``` Using custom data configuration default Traceback (most recent call last): File "c:\users\faiza\anaconda3\lib\runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "c:\users\faiza\anaconda3\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\faiza\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 7, in <module> File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\datasets_cli.py", line 39, in main service.run() File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 132, in run for j, builder in enumerate(get_builders()): File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 125, in get_builders yield builder_cls( File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 1148, in __init__ super().__init__(*args, **kwargs) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 306, in __init__ info = self.get_exported_dataset_info() File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 405, in get_exported_dataset_info return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo()) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 390, in get_all_exported_dataset_infos return DatasetInfosDict.from_directory(cls.get_imported_module_dir()) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 309, in from_directory dataset_infos_dict = { File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 310, in <dictcomp> config_name: DatasetInfo.from_dict(dataset_info_dict) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 272, in from_dict return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names}) File "<string>", line 20, in __init__ File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 160, in __post_init__ templates = [ File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 161, in <listcomp> template if isinstance(template, TaskTemplate) else task_template_from_dict(template) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\tasks\__init__.py", line 43, in task_template_from_dict return template.from_dict(task_template_dict) AttributeError: 'NoneType' object has no attribute 'from_dict' ``` My assumption is that there is some kind of issue in how the "task_templates" are read, because even if I keep them as None, or omit the argument entirely, the same error occurs. ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4592/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4592/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4591/comments
https://api.github.com/repos/huggingface/datasets/issues/4591/events
https://github.com/huggingface/datasets/issues/4591
1,288,021,332
I_kwDODunzps5MxaVU
4,591
Can't push Images to hub with manual Dataset
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, thanks for reporting! This issue stems from the changes introduced in https://github.com/huggingface/datasets/pull/4282 (cc @lhoestq), in which list casts are ignored if they don't change the list type (required to preserve `null` values). And `push_to_hub` does a special cast to embed external image files but doesn't change the types, hence the failure." ]
1,656,460,883,000
1,657,281,696,000
1,657,281,695,000
CONTRIBUTOR
null
## Describe the bug If I create a dataset including an 'Image' feature manually, when pushing to the Hub the decoded images are not pushed; instead it looks for each image at its local path (where it is, or used to be). This doesn't (at least didn't use to) happen with imagefolder. I want to build the dataset manually because it is complicated. This happens even though the dataset looks like it contains decoded images: ![image](https://user-images.githubusercontent.com/15624271/176322689-2cc819cf-9d5c-4a8f-9f3d-83ae8ec06f20.png) and I use `embed_external_files=True` when calling `push_to_hub` (same with `False`) ## Steps to reproduce the bug ```python from PIL import Image from datasets import Image as ImageFeature from datasets import Features,Dataset,load_dataset #manually create dataset feats=Features( { "images": [ImageFeature()], #same even if explicitly ImageFeature(decode=True) "input_image": ImageFeature(), } ) test_data={"images":[[Image.open("test.jpg"),Image.open("test.jpg"),Image.open("test.jpg")]], "input_image":[Image.open("test.jpg")]} test_dataset=Dataset.from_dict(test_data,features=feats) print(test_dataset) test_dataset.push_to_hub("ceyda/image_test_public",private=False,token="",embed_external_files=True) # clear cache rm -r ~/.cache/huggingface # remove "test.jpg" # remove to see that it is looking for image on the local path test_dataset=load_dataset("ceyda/image_test_public",use_auth_token="") print(test_dataset) print(test_dataset['train'][0]) ``` ## Expected results It should be able to push image bytes if the dataset has `Image(decode=True)`. ## Actual results It errors because it is trying to decode the file from the nonexistent local path. ``` ----> print(test_dataset['train'][0]) File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key) 2152 def __getitem__(self, key): # noqa: F811 2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2154 return self._getitem( 2155 key, 2156 ) File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs) 2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2139 formatted_output = format_table( 2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2141 ) 2142 return formatted_output File ~/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: ... -> 3068 fp = builtins.open(filename, "rb") 3069 exclusive_fp = True 3071 try: FileNotFoundError: [Errno 2] No such file or directory: 'test.jpg' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4591/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4591/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4590/comments
https://api.github.com/repos/huggingface/datasets/issues/4590/events
https://github.com/huggingface/datasets/pull/4590
1,287,941,058
PR_kwDODunzps46htv0
4,590
Generalize meta_path json file creation in load.py [#4540]
{ "login": "VijayKalmath", "id": 20517962, "node_id": "MDQ6VXNlcjIwNTE3OTYy", "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VijayKalmath", "html_url": "https://github.com/VijayKalmath", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova, Can you please review this PR for Issue #4540 ", "@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningful contributions.", "Hi ! Sure feel free to join our discord ^^ \r\nhttps://discuss.huggingface.co/t/join-the-hugging-face-discord/11263 so that we can discuss together mor eeasily. Otherwise everything happens on github ;)" ]
1,656,452,886,000
1,657,292,113,000
1,657,199,865,000
CONTRIBUTOR
null
# What does this PR do? ## Summary *In the function `_copy_script_and_other_resources_in_importable_dir`, using a string split when generating `meta_path` throws an error in the edge case raised in #4540.* ## Additions - ## Changes - Changed `meta_path` to use `os.path.splitext` instead of `str.split` to generalize the code. ## Deletions - ## Issues Addressed: Fixes #4540
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4590/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4590/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4590", "html_url": "https://github.com/huggingface/datasets/pull/4590", "diff_url": "https://github.com/huggingface/datasets/pull/4590.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4590.patch", "merged_at": 1657199864000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4589/comments
https://api.github.com/repos/huggingface/datasets/issues/4589/events
https://github.com/huggingface/datasets/issues/4589
1,287,600,029
I_kwDODunzps5Mvzed
4,589
Permission denied: '/home/.cache' when load_dataset with local script
{ "login": "jiangh0", "id": 24559732, "node_id": "MDQ6VXNlcjI0NTU5NzMy", "avatar_url": "https://avatars.githubusercontent.com/u/24559732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiangh0", "html_url": "https://github.com/jiangh0", "followers_url": "https://api.github.com/users/jiangh0/followers", "following_url": "https://api.github.com/users/jiangh0/following{/other_user}", "gists_url": "https://api.github.com/users/jiangh0/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiangh0/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangh0/subscriptions", "organizations_url": "https://api.github.com/users/jiangh0/orgs", "repos_url": "https://api.github.com/users/jiangh0/repos", "events_url": "https://api.github.com/users/jiangh0/events{/privacy}", "received_events_url": "https://api.github.com/users/jiangh0/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,656,433,563,000
1,656,483,988,000
1,656,483,908,000
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4589/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4588/comments
https://api.github.com/repos/huggingface/datasets/issues/4588/events
https://github.com/huggingface/datasets/pull/4588
1,287,368,751
PR_kwDODunzps46f2kF
4,588
Host head_qa data on the Hub and fix NonMatchingChecksumError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks 🙏 ", "@younesbelkada we have just merged this PR." ]
1,656,423,568,000
1,657,036,875,000
1,657,036,192,000
MEMBER
null
This PR: - Hosts head_qa data on the Hub instead of Google Drive - Fixes NonMatchingChecksumError Fix https://huggingface.co/datasets/head_qa/discussions/1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4588/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4588/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4588", "html_url": "https://github.com/huggingface/datasets/pull/4588", "diff_url": "https://github.com/huggingface/datasets/pull/4588.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4588.patch", "merged_at": 1657036192000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4587/comments
https://api.github.com/repos/huggingface/datasets/issues/4587/events
https://github.com/huggingface/datasets/pull/4587
1,287,291,494
PR_kwDODunzps46flzR
4,587
Validate new_fingerprint passed by user
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,420,381,000
1,656,425,517,000
1,656,424,844,000
MEMBER
null
Users can pass the dataset fingerprint they want in `map` and other dataset transforms. However, the fingerprint is used to name cache files, so we need to make sure it doesn't contain bad characters (as mentioned in https://github.com/huggingface/datasets/issues/1718) and that it's not too long.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4587/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4587/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4587", "html_url": "https://github.com/huggingface/datasets/pull/4587", "diff_url": "https://github.com/huggingface/datasets/pull/4587.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4587.patch", "merged_at": 1656424844000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4586/comments
https://api.github.com/repos/huggingface/datasets/issues/4586/events
https://github.com/huggingface/datasets/pull/4586
1,287,105,636
PR_kwDODunzps46e9xB
4,586
Host pn_summary data on the Hub instead of Google Drive
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,410,705,000
1,656,427,976,000
1,656,427,323,000
MEMBER
null
Fix #4581.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4586/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4586/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4586", "html_url": "https://github.com/huggingface/datasets/pull/4586", "diff_url": "https://github.com/huggingface/datasets/pull/4586.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4586.patch", "merged_at": 1656427323000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4585/comments
https://api.github.com/repos/huggingface/datasets/issues/4585/events
https://github.com/huggingface/datasets/pull/4585
1,287,064,929
PR_kwDODunzps46e1Ne
4,585
Host multi_news data on the Hub instead of Google Drive
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,408,726,000
1,656,425,975,000
1,656,425,328,000
MEMBER
null
Host the data files of the multi_news dataset on the Hub; they were previously on Google Drive. Fix #4580.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4585/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4585/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4585", "html_url": "https://github.com/huggingface/datasets/pull/4585", "diff_url": "https://github.com/huggingface/datasets/pull/4585.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4585.patch", "merged_at": 1656425328000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4584/comments
https://api.github.com/repos/huggingface/datasets/issues/4584/events
https://github.com/huggingface/datasets/pull/4584
1,286,911,993
PR_kwDODunzps46eVF7
4,584
Add binary classification task IDs
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.", "> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where we define the cross libraries tasks taxonomy ;)\r\n\r\nThanks for the tip! Done in https://github.com/huggingface/hub-docs/pull/217" ]
1,656,401,439,000
1,657,120,794,000
null
MEMBER
null
As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification. This PR adds binary classification to the task IDs to enable this. Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597 cc @abhishekkrthakur @SBrandeis
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4584/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4584/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4584", "html_url": "https://github.com/huggingface/datasets/pull/4584", "diff_url": "https://github.com/huggingface/datasets/pull/4584.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4584.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4583/comments
https://api.github.com/repos/huggingface/datasets/issues/4583/events
https://github.com/huggingface/datasets/pull/4583
1,286,790,871
PR_kwDODunzps46d7xo
4,583
<code> implementation of FLAC support using torchaudio
{ "login": "rafael-ariascalles", "id": 45745870, "node_id": "MDQ6VXNlcjQ1NzQ1ODcw", "avatar_url": "https://avatars.githubusercontent.com/u/45745870?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rafael-ariascalles", "html_url": "https://github.com/rafael-ariascalles", "followers_url": "https://api.github.com/users/rafael-ariascalles/followers", "following_url": "https://api.github.com/users/rafael-ariascalles/following{/other_user}", "gists_url": "https://api.github.com/users/rafael-ariascalles/gists{/gist_id}", "starred_url": "https://api.github.com/users/rafael-ariascalles/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rafael-ariascalles/subscriptions", "organizations_url": "https://api.github.com/users/rafael-ariascalles/orgs", "repos_url": "https://api.github.com/users/rafael-ariascalles/repos", "events_url": "https://api.github.com/users/rafael-ariascalles/events{/privacy}", "received_events_url": "https://api.github.com/users/rafael-ariascalles/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,656,393,861,000
1,656,395,222,000
1,656,395,222,000
NONE
null
I have added FLAC audio support using torchaudio, given that Librosa and SoundFile can give problems. Also, FLAC is being used as the audio format for https://mlcommons.org/en/peoples-speech/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4583/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4583", "html_url": "https://github.com/huggingface/datasets/pull/4583", "diff_url": "https://github.com/huggingface/datasets/pull/4583.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4583.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4582/comments
https://api.github.com/repos/huggingface/datasets/issues/4582/events
https://github.com/huggingface/datasets/pull/4582
1,286,517,060
PR_kwDODunzps46dC59
4,582
add_column should preserve _indexes
{ "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "organizations_url": "https://api.github.com/users/cceyda/orgs", "repos_url": "https://api.github.com/users/cceyda/repos", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "received_events_url": "https://api.github.com/users/cceyda/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4582). All of your documentation changes will be reflected on that endpoint." ]
1,656,369,347,000
1,657,120,794,000
null
CONTRIBUTOR
null
As reported in https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126, calling `.add_column("x", x_data)` also removed any `_indexes` on the dataset; we decided this shouldn't be the case. This was because `add_column` was creating a new `Dataset(...)` and it wasn't possible to pass indexes on init. With this PR, indexes can now be passed on init through `IndexableMixin`. - [x] Added test
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4582/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4582", "html_url": "https://github.com/huggingface/datasets/pull/4582", "diff_url": "https://github.com/huggingface/datasets/pull/4582.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4582.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4581/comments
https://api.github.com/repos/huggingface/datasets/issues/4581/events
https://github.com/huggingface/datasets/issues/4581
1,286,362,907
I_kwDODunzps5MrFcb
4,581
Dataset Viewer issue for pn_summary
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "linked to https://github.com/huggingface/datasets/issues/4580#issuecomment-1168373066?", "Note that I refreshed twice this dataset, and I still have (another) error on one of the splits\r\n\r\n```\r\nStatus code: 400\r\nException: ClientResponseError\r\nMessage: 403, message='Forbidden', url=URL('https://doc-14-4c-docs.googleusercontent.com/docs/securesc/ha0ro937gcuc7l7deffksulhg5h7mbp1/pgotjmcuh77q0lk7p44rparfrhv459kp/1656403650000/11771870722949762109/*/16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO?e=download')\r\n```\r\n\r\nLike the three splits are processed in parallel by the workers, I imagine that the Google hosting is rate-limiting us.\r\n\r\ncc @albertvillanova \r\n\r\n", "Exactly, Google Drive bans our loading scripts.\r\n\r\nWhen possible, we should host somewhere else." ]
1,656,363,372,000
1,656,427,323,000
1,656,427,323,000
MEMBER
null
### Link https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation ### Description Getting an index error on the `validation` and `test` splits: ``` Server error Status code: 400 Exception: IndexError Message: list index out of range ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4581/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4580/comments
https://api.github.com/repos/huggingface/datasets/issues/4580/events
https://github.com/huggingface/datasets/issues/4580
1,286,312,912
I_kwDODunzps5Mq5PQ
4,580
Dataset Viewer issue for multi_news
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. Let's see if the license allows that.", "I guess we can host the data: https://github.com/Alex-Fabbri/Multi-News/blob/master/LICENSE.txt" ]
1,656,361,525,000
1,656,425,328,000
1,656,425,328,000
MEMBER
null
### Link https://huggingface.co/datasets/multi_news ### Description Not sure what the index error is referring to here: ``` Status code: 400 Exception: IndexError Message: list index out of range ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4580/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4579/comments
https://api.github.com/repos/huggingface/datasets/issues/4579/events
https://github.com/huggingface/datasets/pull/4579
1,286,106,285
PR_kwDODunzps46bo2h
4,579
Support streaming cfq dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq I've been refactoring a little the code:\r\n- Use less RAM by loading only the required samples: only if its index is in the splits file\r\n- Start yielding \"earlier\" in streaming mode: for each `split_idx`:\r\n - either yield from buffer\r\n - or iterate over samples and either yield or buffer the sample\r\n \r\n The speed gain obviously depends on how the indexes are sorted in the split file:\r\n - Best case: indices are [1, 2, 3]\r\n - Worst case (no speed gain): indices are [3, 1, 2] or [3, 2, 1]\r\n\r\nLet me know what you think.", "I have to update the dummy data so that it aligns with the real data (inside the archive, the samples file `dataset.json` is the last member).", "There is an issue when testing `test_load_dataset_cfq` with dummy data:\r\n- `MockDownloadManager.iter_archive` yields FIRST `'cfq/dataset.json'`\r\n- [`Streaming`]`DownloadManager.iter_archive` yields LAST `'cfq/dataset.json'` when using real data tar.gz archive\r\n\r\nNote that this issue arises only with dummy data: loading the real dataset works smoothly for all configurations: I recreated the `dataset_infos.json` file to check it (it generated the same file).", "This PR should be merged first:\r\n- #4611", "Impressive, thank you ! :o \r\n\r\nfeel free to merge master into this branch, now that the files order is respected. You can merge if the CI is green :)" ]
1,656,349,883,000
1,656,963,301,000
1,656,962,637,000
MEMBER
null
Support streaming cfq dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4579/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4579/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4579", "html_url": "https://github.com/huggingface/datasets/pull/4579", "diff_url": "https://github.com/huggingface/datasets/pull/4579.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4579.patch", "merged_at": 1656962637000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4578/comments
https://api.github.com/repos/huggingface/datasets/issues/4578/events
https://github.com/huggingface/datasets/issues/4578
1,286,086,400
I_kwDODunzps5MqB8A
4,578
[Multi Configs] Use directories to differentiate between subsets/configurations
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,656,348,911,000
1,656,348,919,000
null
MEMBER
null
Currently, to define several subsets/configurations of your dataset, you need to use a dataset script. However, it would be nice to have a no-code way to do this. For example we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per configuration. These structures are not supported right now, but would be nice to have: ``` my_dataset_repository/ ├── README.md ├── en/ │ ├── train.csv │ └── test.csv └── fr/ ├── train.csv └── test.csv ``` Or with one directory per split: ``` my_dataset_repository/ ├── README.md ├── en/ │ ├── train/ │ │ ├── shard_0.csv │ │ └── shard_1.csv │ └── test/ │ ├── shard_0.csv │ └── shard_1.csv └── fr/ ├── train/ │ ├── shard_0.csv │ └── shard_1.csv └── test/ ├── shard_0.csv └── shard_1.csv ``` cc @stevhliu @albertvillanova
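For illustration, a hedged sketch of what loading such a repository could look like if this feature landed; nothing below is implemented today, the repository name is hypothetical, and the config names simply mirror the trees above:

```python
from datasets import load_dataset

# Hypothetical API: select a configuration by passing the directory name,
# the same way script-defined configs are selected today.
en = load_dataset("my_dataset_repository", "en")  # would load en/train.csv, en/test.csv
fr = load_dataset("my_dataset_repository", "fr")  # would load fr/train.csv, fr/test.csv
```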
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4578/reactions", "total_count": 5, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4578/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4577/comments
https://api.github.com/repos/huggingface/datasets/issues/4577/events
https://github.com/huggingface/datasets/pull/4577
1,285,703,775
PR_kwDODunzps46aTWL
4,577
Add authentication tip to `load_dataset`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,331,534,000
1,656,940,395,000
1,656,939,690,000
CONTRIBUTOR
null
Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4577/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4577", "html_url": "https://github.com/huggingface/datasets/pull/4577", "diff_url": "https://github.com/huggingface/datasets/pull/4577.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4577.patch", "merged_at": 1656939690000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4576/comments
https://api.github.com/repos/huggingface/datasets/issues/4576/events
https://github.com/huggingface/datasets/pull/4576
1,285,698,576
PR_kwDODunzps46aSN_
4,576
Include `metadata.jsonl` in resolved data files
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I still don't know if the way we implemented data files resolution could support the metadata.jsonl file without bad side effects for the other packaged builders. In particular here if you have a folder of csv/parquet/whatever files and a metadata.jsonl file, it would return \r\n```\r\nsplit: patterns_dict[split] + [METADATA_PATTERN]\r\n```\r\nwhich is a bit unexpected and can lead to errors.\r\n\r\nMaybe this logic can be specific to imagefolder somehow ? This could be an additional pattern `[\"metadata.jsonl\", \"**/metadata.jsonl\"]` just for imagefolder, that is only used when `data_files=` is not specified by the user.\r\n\r\nI guess it's ok to have patterns that lead to duplicate metadata.jsonl files for imagefolder, since the imagefolder logic only considers the closest metadata file for each image.\r\n\r\nWhat do you think ?", "Yes, that's indeed the problem. My solution in https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 that accounts for that (include metadata files only if image files are present; not ideal): https://github.com/huggingface/datasets/blob/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95/src/datasets/data_files.py#L119-L125.\r\nPerhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as `imagefolder` and append metadata files to already resolved data files (if there are any). WDYT?", "@lhoestq \r\n\r\n> Perhaps a cleaner approach would be to check for metadata files after the packaged module type is inferred as imagefolder and append metadata files to already resolved data files (if there are any). WDYT?\r\n\r\nI decided to go with this approach.\r\n\r\n Not sure if you meant the same thing with this comment:\r\n\r\n> Maybe this logic can be specific to imagefolder somehow ? This could be an additional pattern [\"metadata.jsonl\", \"**/metadata.jsonl\"] just for imagefolder, that is only used when data_files= is not specified by the user.\r\n\r\n\r\nIt adds more code but is easy to follow IMO.\r\n", "The CI still struggles but you can merge since at least one of the two WIN CI succeeded" ]
1,656,331,289,000
1,656,679,495,000
1,656,584,132,000
CONTRIBUTOR
null
Include `metadata.jsonl` in resolved data files. Fix #4548 @lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts for nested metadata files also, but this results in more complex code. Let me know which one of these two approaches you prefer.~~ Maybe https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so not sure if this is what we should do.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4576/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4576/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4576", "html_url": "https://github.com/huggingface/datasets/pull/4576", "diff_url": "https://github.com/huggingface/datasets/pull/4576.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4576.patch", "merged_at": 1656584131000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4575/comments
https://api.github.com/repos/huggingface/datasets/issues/4575/events
https://github.com/huggingface/datasets/issues/4575
1,285,446,700
I_kwDODunzps5Mnlws
4,575
Problem about wmt17 zh-en dataset
{ "login": "winterfell2021", "id": 85819194, "node_id": "MDQ6VXNlcjg1ODE5MTk0", "avatar_url": "https://avatars.githubusercontent.com/u/85819194?v=4", "gravatar_id": "", "url": "https://api.github.com/users/winterfell2021", "html_url": "https://github.com/winterfell2021", "followers_url": "https://api.github.com/users/winterfell2021/followers", "following_url": "https://api.github.com/users/winterfell2021/following{/other_user}", "gists_url": "https://api.github.com/users/winterfell2021/gists{/gist_id}", "starred_url": "https://api.github.com/users/winterfell2021/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/winterfell2021/subscriptions", "organizations_url": "https://api.github.com/users/winterfell2021/orgs", "repos_url": "https://api.github.com/users/winterfell2021/repos", "events_url": "https://api.github.com/users/winterfell2021/events{/privacy}", "received_events_url": "https://api.github.com/users/winterfell2021/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Running into the same error with `wmt17/zh-en`, `wmt18/zh-en` and `wmt19/zh-en`." ]
1,656,318,942,000
1,656,527,559,000
null
NONE
null
It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`. So when using `data = load_dataset('wmt17', "zh-en")` to load the wmt17 zh-en dataset, the following exception is raised: ``` Traceback (most recent call last): File "train.py", line 78, in <module> data = load_dataset(args.dataset, "zh-en") File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1684, in load_dataset use_auth_token=use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1221, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1215, in _prepare_split num_examples, num_bytes = writer.finalize() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 533, in finalize self.write_examples_on_file() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 410, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 503, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 198, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1846, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1756, in array_cast raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") TypeError: Couldn't cast array of type struct<c[hn]: string, en: string, zh: string> to struct<en: string, zh: string> ``` So the solution to this problem is to change the original array manually: ``` if 'c[hn]' in str(array.type): py_array = array.to_pylist() data_list = [] for vo in py_array: tmp = { 'en': vo['en'], } if 'zh' not in vo: tmp['zh'] = vo['c[hn]'] else: tmp['zh'] = vo['zh'] data_list.append(tmp) array = pa.array(data_list, type=pa.struct([ pa.field('en', pa.string()), pa.field('zh', pa.string()), ])) ``` Therefore, a corrected version of the original casia2015 file may need to be updated
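For illustration, a self-contained sketch of the same normalization outside the `datasets` internals; the sample strings below are invented, and only the misspelled `c[hn]` key comes from the report above:

```python
import pyarrow as pa

# Toy samples reproducing the schema mismatch: one record uses the
# misspelled key 'c[hn]' instead of 'zh', so naive type inference would
# yield struct<c[hn]: string, en: string, zh: string>.
samples = [
    {"c[hn]": "xxx", "en": "aa"},  # malformed sample
    {"zh": "yyy", "en": "bb"},     # well-formed sample
]

# Normalize every record to the expected {'en', 'zh'} schema,
# falling back to the misspelled key when 'zh' is missing.
fixed = [{"en": s["en"], "zh": s.get("zh", s.get("c[hn]"))} for s in samples]

array = pa.array(
    fixed,
    type=pa.struct([pa.field("en", pa.string()), pa.field("zh", pa.string())]),
)
print(array.type)  # struct<en: string, zh: string>
```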
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4575/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4574/comments
https://api.github.com/repos/huggingface/datasets/issues/4574/events
https://github.com/huggingface/datasets/pull/4574
1,285,380,616
PR_kwDODunzps46ZOpZ
4,574
Support streaming mlsum dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\n ImportError while loading conftest '/home/runner/work/datasets/datasets/tests/conftest.py'.\r\ntests/conftest.py:13: in <module>\r\n import datasets\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>\r\n from .arrow_dataset import Dataset\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_dataset.py:62: in <module>\r\n from .arrow_reader import ArrowReader\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/arrow_reader.py:29: in <module>\r\n from .download.download_config import DownloadConfig\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/__init__.py:10: in <module>\r\n from .streaming_download_manager import StreamingDownloadManager\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/download/streaming_download_manager.py:20: in <module>\r\n from ..filesystems import COMPRESSION_FILESYSTEMS\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/__init__.py:13: in <module>\r\n from .s3filesystem import S3FileSystem # noqa: F401\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/datasets/filesystems/s3filesystem.py:1: in <module>\r\n import s3fs\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/__init__.py:1: in <module>\r\n from .core import S3FileSystem, S3File\r\n/opt/hostedtoolcache/Python/3.6.15/x64/lib/python3.6/site-packages/s3fs/core.py:12: in <module>\r\n from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync\r\nE ImportError: cannot import name 'maybe_sync'\r\n```\r\n\r\nThe installed `s3fs` version is too old. What about pinning a min version?", "Maybe you can try setting the same minimum version as fsspec ? `s3fs>=2021.11.1`", "Yes, I have checked that they both require to have the same version. \r\n\r\nThe issue then was coming from aiobotocore, boto3, botocore. I have changed them from strict to min version requirements.\r\n> s3fs 2021.11.1 depends on aiobotocore~=2.0.1", "I have updated all min versions so that they are compatible one with each other. I'm pushing again...", "Thanks !", "Nice!" ]
1,656,315,423,000
1,658,410,650,000
1,658,407,200,000
MEMBER
null
Support streaming mlsum dataset. This PR: - pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1` - https://github.com/fsspec/filesystem_spec/pull/830 - unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1` > s3fs 2021.8.1 requires fsspec==2021.08.1 - see discussion: https://github.com/huggingface/datasets/pull/2858/files#r700027326 - updates the following requirements to be compatible with the previous ones and one with each other: - `aiobotocore==1.4.2` to `aiobotocore>=2.0.1` (required by s3fs>=2021.11.1) - `boto3==1.17.106` to `boto3>=1.19.8` (to be compatible with aiobotocore>=2.0.1) - `botocore==1.20.106` to `botocore>=1.22.8` (to be compatible with aiobotocore and boto3) Fix #4572.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4574/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4574/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4574", "html_url": "https://github.com/huggingface/datasets/pull/4574", "diff_url": "https://github.com/huggingface/datasets/pull/4574.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4574.patch", "merged_at": 1658407200000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4573/comments
https://api.github.com/repos/huggingface/datasets/issues/4573/events
https://github.com/huggingface/datasets/pull/4573
1,285,023,629
PR_kwDODunzps46YEEa
4,573
Fix evaluation metadata for ncbi_disease
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4573). All of your documentation changes will be reflected on that endpoint." ]
1,656,275,372,000
1,657,120,794,000
null
MEMBER
null
This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4573/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4573/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4573", "html_url": "https://github.com/huggingface/datasets/pull/4573", "diff_url": "https://github.com/huggingface/datasets/pull/4573.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4573.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4572/comments
https://api.github.com/repos/huggingface/datasets/issues/4572/events
https://github.com/huggingface/datasets/issues/4572
1,285,022,499
I_kwDODunzps5Ml-Mj
4,572
Dataset Viewer issue for mlsum
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..." ]
1,656,275,057,000
1,658,407,201,000
1,658,407,201,000
MEMBER
null
### Link https://huggingface.co/datasets/mlsum/viewer/de/train ### Description There seems to be a problem with the download / streaming of this dataset: ``` Server error Status code: 400 Exception: BadZipFile Message: File is not a zip file ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4572/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4572/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4571/comments
https://api.github.com/repos/huggingface/datasets/issues/4571/events
https://github.com/huggingface/datasets/issues/4571
1,284,883,289
I_kwDODunzps5MlcNZ
4,571
Dataset Viewer issue for gsarti/flores_101
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Related to https://github.com/huggingface/datasets/issues/4562#issuecomment-1166911751\r\n\r\nI'll assign @albertvillanova ", "I'm just wondering why we don't have this dataset under:\r\n- the `facebook` namespace\r\n- or the canonical dataset `flores`: why does this only have 2 languages?" ]
1,656,242,349,000
1,656,320,906,000
null
MEMBER
null
### Link https://huggingface.co/datasets/gsarti/flores_101 ### Description It seems like streaming isn't supported for this dataset: ``` Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4571/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4571/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4570/comments
https://api.github.com/repos/huggingface/datasets/issues/4570/events
https://github.com/huggingface/datasets/issues/4570
1,284,846,168
I_kwDODunzps5MlTJY
4,570
Dataset sharding non-contiguous?
{ "login": "cakiki", "id": 3664563, "node_id": "MDQ6VXNlcjM2NjQ1NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cakiki", "html_url": "https://github.com/cakiki", "followers_url": "https://api.github.com/users/cakiki/followers", "following_url": "https://api.github.com/users/cakiki/following{/other_user}", "gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}", "starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cakiki/subscriptions", "organizations_url": "https://api.github.com/users/cakiki/orgs", "repos_url": "https://api.github.com/users/cakiki/repos", "events_url": "https://api.github.com/users/cakiki/events{/privacy}", "received_events_url": "https://api.github.com/users/cakiki/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This was silly; I was sure I'd looked for a `contiguous` argument, and was certain there wasn't one the first time I looked :smile:\r\n\r\nSorry about that.", "Hi! You can pass `contiguous=True` to `.shard()` get contiguous shards. More info on this and the default behavior can be found in the [docs](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.shard).\r\n\r\nEDIT: Answered as you closed the thread 😄 ", "Hahaha I'm sorry; my excuse is: it's Sunday. (Which makes me all the more grateful for your response :smiley: ", "@mariosasko Sorry for reviving this, but I was curious as to why `contiguous=False` was the default. This might be a personal bias, but I feel that a user would expect the opposite to be the default. :thinking: ", "This project started as a fork of TFDS, and `contiguous=False` is the default behavior [there](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shard)." ]
1,656,232,445,000
1,656,586,847,000
1,656,254,180,000
CONTRIBUTOR
null
## Describe the bug I'm not sure if this is a bug; more likely normal behavior, but I wanted to double-check. Is it normal that `datasets.shard` does not produce chunks that, when concatenated, produce the original ordering of the sharded dataset? This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466) but I have to admit I did not properly look into the changes made. ## Steps to reproduce the bug ```python max_shard_size = convert_file_size_to_int('300MB') dataset_nbytes = dataset.data.nbytes num_shards = int(dataset_nbytes / max_shard_size) + 1 num_shards = max(num_shards, 1) print(f"{num_shards=}") for shard_index in range(num_shards): shard = dataset.shard(num_shards=num_shards, index=shard_index) shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet") os.listdir('tokenized/') ``` ## Expected results I expected the shards to match the order of the data of the original dataset, e.g. `dataset[10]` being the same as `shard_1[10]` ## Actual results Only the first element is the same; i.e. `dataset[0]` is the same as `shard_1[0]` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
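Per the resolution in the comments, passing `contiguous=True` to `Dataset.shard()` restores the expected ordering. A minimal sketch with toy data; the printed shard boundaries are what the contiguous split of 10 rows into 3 shards should give:

```python
from datasets import Dataset

# Toy dataset; with contiguous=True the shards, concatenated in index
# order, reproduce the original row order. The default (contiguous=False)
# interleaves rows: shard i gets rows i, i + num_shards, i + 2*num_shards, ...
dataset = Dataset.from_dict({"idx": list(range(10))})

num_shards = 3
shards = [
    dataset.shard(num_shards=num_shards, index=i, contiguous=True)
    for i in range(num_shards)
]
print([shard["idx"] for shard in shards])
# expected: [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```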
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4570/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4570/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4569/comments
https://api.github.com/repos/huggingface/datasets/issues/4569/events
https://github.com/huggingface/datasets/issues/4569
1,284,833,694
I_kwDODunzps5MlQGe
4,569
Dataset Viewer issue for sst2
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @lewtun, thanks for reporting.\r\n\r\nI have checked locally and refreshed the preview and it seems working smooth now:\r\n```python\r\nIn [8]: ds\r\nOut[8]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 67349\r\n })\r\n validation: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 872\r\n })\r\n test: Dataset({\r\n features: ['idx', 'sentence', 'label'],\r\n num_rows: 1821\r\n })\r\n})\r\n```\r\n\r\nCould you confirm? ", "Thanks @albertvillanova - it is indeed working now (not sure what caused the error in the first place). Closing this :)" ]
1,656,228,774,000
1,656,311,868,000
1,656,311,868,000
MEMBER
null
### Link https://huggingface.co/datasets/sst2 ### Description Not sure what is causing this; however, it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without a problem): ``` Status code: 400 Exception: Exception Message: Give up after 5 attempts with ConnectionError ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4569/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4569/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4568/comments
https://api.github.com/repos/huggingface/datasets/issues/4568/events
https://github.com/huggingface/datasets/issues/4568
1,284,655,624
I_kwDODunzps5MkkoI
4,568
XNLI cache reload is very slow
{ "login": "Muennighoff", "id": 62820084, "node_id": "MDQ6VXNlcjYyODIwMDg0", "avatar_url": "https://avatars.githubusercontent.com/u/62820084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muennighoff", "html_url": "https://github.com/Muennighoff", "followers_url": "https://api.github.com/users/Muennighoff/followers", "following_url": "https://api.github.com/users/Muennighoff/following{/other_user}", "gists_url": "https://api.github.com/users/Muennighoff/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muennighoff/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muennighoff/subscriptions", "organizations_url": "https://api.github.com/users/Muennighoff/orgs", "repos_url": "https://api.github.com/users/Muennighoff/repos", "events_url": "https://api.github.com/users/Muennighoff/events{/privacy}", "received_events_url": "https://api.github.com/users/Muennighoff/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi,\r\nCould you tell us how you are running this code?\r\nI tested on my machine (M1 Mac). And it is running fine both on and off internet.\r\n\r\n<img width=\"1033\" alt=\"Screen Shot 2022-07-03 at 1 32 25 AM\" src=\"https://user-images.githubusercontent.com/8711912/177026364-4ad7cedb-e524-4513-97f7-7961bbb34c90.png\">\r\nTested on both stable and dev version. ", "Sure, I was running it on a Linux machine.\r\nI found that if I turn the Internet off, it would still try to make a HTTPS call which would slow down the cache loading. If you can't reproduce then we can close the issue.", "Hi @Muennighoff! You can set the env variable `HF_DATASETS_OFFLINE` to `1` to avoid this behavior in offline mode. More info is available [here](https://huggingface.co/docs/datasets/master/en/loading#offline)." ]
1,656,175,436,000
1,656,944,980,000
1,656,944,980,000
NONE
null
### Reproduce Using `2.3.3.dev0` `from datasets import load_dataset` `load_dataset("xnli", "en")` Turn off Internet `load_dataset("xnli", "en")` I cancelled the second `load_dataset` eventually because it took super long. It would be great to have something to specify, e.g. `only_load_from_cache`, to avoid the library trying to download when there is no Internet. If I leave it running, it works but takes way longer than when there is Internet. I would expect loading from cache to take the same amount of time regardless of whether there is Internet. ``` --------------------------------------------------------------------------- gaierror Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) /opt/conda/lib/python3.7/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 71 ---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 73 af, socktype, proto, canonname, sa = res /opt/conda/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags) 751 addrlist = [] --> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags): 753 af, socktype, proto, canonname, sa = res gaierror: [Errno -3] Temporary failure in name resolution During handling of the above exception, another exception occurred: KeyboardInterrupt Traceback (most recent call last) /tmp/ipykernel_33/3594208039.py in <module> ----> 1 load_dataset("xnli", "en") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1673 revision=revision, 1674 use_auth_token=use_auth_token, -> 1675 **config_kwargs, 1676 ) 1677 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1494 download_mode=download_mode, 1495 data_dir=data_dir, -> 1496 data_files=data_files, 1497 ) 1498 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1182 download_config=download_config, 1183 download_mode=download_mode, -> 1184 dynamic_modules_path=dynamic_modules_path, 1185 ).get_module() 1186 elif path.count("/") == 1: # community dataset on the Hub /opt/conda/lib/python3.7/site-packages/datasets/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path) 506 self.dynamic_modules_path = dynamic_modules_path 507 assert self.name.count("/") == 0 --> 508 increase_load_count(name, resource_type="dataset") 509 510 def download_loading_script(self, revision: Optional[str]) -> str: /opt/conda/lib/python3.7/site-packages/datasets/load.py in increase_load_count(name, resource_type) 166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS: 167 try: --> 168 head_hf_s3(name, filename=name + ".py", dataset=(resource_type == "dataset")) 169 except Exception: 170 pass /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries) 93 return http_head( 94 hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset), ---> 95 max_retries=max_retries, 96 ) 97 /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries) 445 allow_redirects=allow_redirects, 446 timeout=timeout, --> 447 max_retries=max_retries, 448 ) 449 return response /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 366 tries += 1 367 try: --> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) 369 success = True 370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: /opt/conda/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 /opt/conda/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 527 } 528 send_kwargs.update(settings) --> 529 resp = self.send(prep, **send_kwargs) 530 531 return resp /opt/conda/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs) 643 644 # Send the request --> 645 r = adapter.send(request, **kwargs) 646 647 # Total elapsed time of the request (approximately) /opt/conda/lib/python3.7/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 decode_content=False, 449 retries=self.max_retries, --> 450 timeout=timeout 451 ) 452 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 708 body=body, 709 headers=headers, --> 710 chunked=chunked, 711 ) 712 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 384 # Trigger any extra validation we need to do. 385 try: --> 386 self._validate_conn(conn) 387 except (SocketTimeout, BaseSSLError) as e: 388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 1038 # Force connect early to allow us to validate the connection. 1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` -> 1040 conn.connect() 1041 1042 if not conn.is_verified: /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in connect(self) 356 def connect(self): 357 # Add certificate verification --> 358 self.sock = conn = self._new_conn() 359 hostname = self.host 360 tls_in_tls = False /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 173 try: 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) 177 KeyboardInterrupt: ```
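As the last comment points out, `HF_DATASETS_OFFLINE` is the supported way to skip these network calls. A minimal sketch, assuming xnli/en is already in the local cache; note the variable has to be set before `datasets` is imported, since its config is read at import time:

```python
import os

# With HF_DATASETS_OFFLINE set, load_dataset() skips the HEAD/GET calls
# (and their connection retries) and reads straight from the cache.
os.environ["HF_DATASETS_OFFLINE"] = "1"  # must precede the import below

from datasets import load_dataset

ds = load_dataset("xnli", "en")  # served from the cache, no HTTP calls
```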
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4568/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4567
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4567/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4567/comments
https://api.github.com/repos/huggingface/datasets/issues/4567/events
https://github.com/huggingface/datasets/pull/4567
1,284,528,474
PR_kwDODunzps46Wh0-
4,567
Add evaluation data for amazon_reviews_multi
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4567). All of your documentation changes will be reflected on that endpoint." ]
1,656,150,052,000
1,657,120,794,000
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4567/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4567/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4567", "html_url": "https://github.com/huggingface/datasets/pull/4567", "diff_url": "https://github.com/huggingface/datasets/pull/4567.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4567.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4566/comments
https://api.github.com/repos/huggingface/datasets/issues/4566/events
https://github.com/huggingface/datasets/issues/4566
1,284,397,594
I_kwDODunzps5Mjloa
4,566
Document link #load_dataset_enhancing_performance points to nowhere
{ "login": "subercui", "id": 11674033, "node_id": "MDQ6VXNlcjExNjc0MDMz", "avatar_url": "https://avatars.githubusercontent.com/u/11674033?v=4", "gravatar_id": "", "url": "https://api.github.com/users/subercui", "html_url": "https://github.com/subercui", "followers_url": "https://api.github.com/users/subercui/followers", "following_url": "https://api.github.com/users/subercui/following{/other_user}", "gists_url": "https://api.github.com/users/subercui/gists{/gist_id}", "starred_url": "https://api.github.com/users/subercui/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/subercui/subscriptions", "organizations_url": "https://api.github.com/users/subercui/orgs", "repos_url": "https://api.github.com/users/subercui/repos", "events_url": "https://api.github.com/users/subercui/events{/privacy}", "received_events_url": "https://api.github.com/users/subercui/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi! This is indeed the link the docstring should point to. Are you interested in submitting a PR to fix this?", "https://github.com/huggingface/datasets/blame/master/docs/source/cache.mdx#L93\r\n\r\nThere seems already an anchor here. Somehow it doesn't work. I am not very familiar with how this online documentation works." ]
1,656,119,899,000
1,656,534,594,000
null
NONE
null
## Describe the bug ![image](https://user-images.githubusercontent.com/11674033/175752806-5b066b92-9d28-4771-9112-5c8606f07741.png) The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dataset_enhancing_performance) link [here](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere; I guess it should point to https://huggingface.co/docs/datasets/v2.3.2/en/cache#improve-performance?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4566/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4566/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4565
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4565/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4565/comments
https://api.github.com/repos/huggingface/datasets/issues/4565/events
https://github.com/huggingface/datasets/issues/4565
1,284,141,666
I_kwDODunzps5MinJi
4,565
Add UFSC OCPap dataset
{ "login": "johnnv1", "id": 20444345, "node_id": "MDQ6VXNlcjIwNDQ0MzQ1", "avatar_url": "https://avatars.githubusercontent.com/u/20444345?v=4", "gravatar_id": "", "url": "https://api.github.com/users/johnnv1", "html_url": "https://github.com/johnnv1", "followers_url": "https://api.github.com/users/johnnv1/followers", "following_url": "https://api.github.com/users/johnnv1/following{/other_user}", "gists_url": "https://api.github.com/users/johnnv1/gists{/gist_id}", "starred_url": "https://api.github.com/users/johnnv1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/johnnv1/subscriptions", "organizations_url": "https://api.github.com/users/johnnv1/orgs", "repos_url": "https://api.github.com/users/johnnv1/repos", "events_url": "https://api.github.com/users/johnnv1/events{/privacy}", "received_events_url": "https://api.github.com/users/johnnv1/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "I will add this directly on the hub (same as #4486)—in https://huggingface.co/lapix" ]
1,656,101,274,000
1,657,134,182,000
1,657,134,182,000
NONE
null
## Adding a Dataset - **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4) - **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels, acquired from 5 slides of cancer-diagnosed and 3 of healthy oral brush samples, from distinct patients. - **Paper:** https://dx.doi.org/10.2139/ssrn.4119212 - **Data:** https://data.mendeley.com/datasets/dr7ydy9xbk/1 - **Motivation:** real data of Pap-stained oral cytology samples Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4565/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4564
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4564/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4564/comments
https://api.github.com/repos/huggingface/datasets/issues/4564/events
https://github.com/huggingface/datasets/pull/4564
1,283,932,333
PR_kwDODunzps46UqUN
4,564
Support streaming bookcorpus dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,087,219,000
1,657,100,088,000
1,657,099,384,000
MEMBER
null
Support streaming bookcorpus dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4564/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4564/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4564", "html_url": "https://github.com/huggingface/datasets/pull/4564", "diff_url": "https://github.com/huggingface/datasets/pull/4564.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4564.patch", "merged_at": 1657099384000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4563
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4563/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4563/comments
https://api.github.com/repos/huggingface/datasets/issues/4563/events
https://github.com/huggingface/datasets/pull/4563
1,283,914,383
PR_kwDODunzps46UmZQ
4,563
Support streaming allocine dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,086,103,000
1,656,089,697,000
1,656,089,081,000
MEMBER
null
Support streaming allocine dataset. Fix #4562.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4563/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4563/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4563", "html_url": "https://github.com/huggingface/datasets/pull/4563", "diff_url": "https://github.com/huggingface/datasets/pull/4563.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4563.patch", "merged_at": 1656089081000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4562/comments
https://api.github.com/repos/huggingface/datasets/issues/4562/events
https://github.com/huggingface/datasets/issues/4562
1,283,779,557
I_kwDODunzps5MhOvl
4,562
Dataset Viewer issue for allocine
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n", "Let me have a look...", "Thanks for the quick fix @albertvillanova ", "Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content *sequentially* (no random access).", "> Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content _sequentially_ (no random access).\r\n\r\nAh thanks for the clarification! I'll look out for this next time and implement the fix myself :)" ]
1,656,078,638,000
1,656,311,972,000
1,656,089,081,000
MEMBER
null
### Link https://huggingface.co/datasets/allocine ### Description Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed: ``` Status code: 400 Exception: AttributeError Message: 'TarContainedFile' object has no attribute 'readable' ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4562/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4562/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4561
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4561/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4561/comments
https://api.github.com/repos/huggingface/datasets/issues/4561/events
https://github.com/huggingface/datasets/pull/4561
1,283,624,242
PR_kwDODunzps46TnVe
4,561
Add evaluation data to acronym_identification
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,069,453,000
1,656,322,675,000
1,656,319,762,000
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4561/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4561/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4561", "html_url": "https://github.com/huggingface/datasets/pull/4561", "diff_url": "https://github.com/huggingface/datasets/pull/4561.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4561.patch", "merged_at": 1656319762000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4560
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4560/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4560/comments
https://api.github.com/repos/huggingface/datasets/issues/4560/events
https://github.com/huggingface/datasets/pull/4560
1,283,558,873
PR_kwDODunzps46TY9n
4,560
Add evaluation metadata to imagenet-1k
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4560). All of your documentation changes will be reflected on that endpoint." ]
1,656,065,561,000
1,657,120,794,000
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4560/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4560", "html_url": "https://github.com/huggingface/datasets/pull/4560", "diff_url": "https://github.com/huggingface/datasets/pull/4560.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4560.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4559
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4559/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4559/comments
https://api.github.com/repos/huggingface/datasets/issues/4559/events
https://github.com/huggingface/datasets/pull/4559
1,283,544,937
PR_kwDODunzps46TV7-
4,559
Add action names in schema_guided_dstc8 dataset card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,064,801,000
1,656,068,068,000
1,656,067,427,000
MEMBER
null
As asked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names in the dataset card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4559/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4559/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4559", "html_url": "https://github.com/huggingface/datasets/pull/4559", "diff_url": "https://github.com/huggingface/datasets/pull/4559.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4559.patch", "merged_at": 1656067427000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4558
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4558/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4558/comments
https://api.github.com/repos/huggingface/datasets/issues/4558/events
https://github.com/huggingface/datasets/pull/4558
1,283,479,650
PR_kwDODunzps46THl_
4,558
Add evaluation metadata to wmt14
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4558). All of your documentation changes will be reflected on that endpoint." ]
1,656,061,734,000
1,657,200,016,000
null
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4558/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4558/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4558", "html_url": "https://github.com/huggingface/datasets/pull/4558", "diff_url": "https://github.com/huggingface/datasets/pull/4558.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4558.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4557
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4557/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4557/comments
https://api.github.com/repos/huggingface/datasets/issues/4557/events
https://github.com/huggingface/datasets/pull/4557
1,283,473,889
PR_kwDODunzps46TGZK
4,557
Add evaluation metadata to wmt16
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4557). All of your documentation changes will be reflected on that endpoint.", "> Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?\r\n\r\nyes :)" ]
1,656,061,463,000
1,657,200,090,000
null
MEMBER
null
Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets right?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4557/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4557/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4557", "html_url": "https://github.com/huggingface/datasets/pull/4557", "diff_url": "https://github.com/huggingface/datasets/pull/4557.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4557.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4556
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4556/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4556/comments
https://api.github.com/repos/huggingface/datasets/issues/4556/events
https://github.com/huggingface/datasets/issues/4556
1,283,462,881
I_kwDODunzps5MgBbh
4,556
Dataset Viewer issue for conll2003
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Fixed, thanks." ]
1,656,060,918,000
1,656,064,239,000
1,656,064,239,000
MEMBER
null
### Link https://huggingface.co/datasets/conll2003/viewer/conll2003/test ### Description Seems like a cache problem with this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll2003/__init__.py' ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4556/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4556/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4555
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4555/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4555/comments
https://api.github.com/repos/huggingface/datasets/issues/4555/events
https://github.com/huggingface/datasets/issues/4555
1,283,451,651
I_kwDODunzps5Mf-sD
4,555
Dataset Viewer issue for xtreme
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "organizations_url": "https://api.github.com/users/lewtun/orgs", "repos_url": "https://api.github.com/users/lewtun/repos", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "received_events_url": "https://api.github.com/users/lewtun/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Fixed, thanks." ]
1,656,060,368,000
1,656,064,245,000
1,656,064,245,000
MEMBER
null
### Link https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test ### Description There seems to be a problem with the cache of this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/xtreme/349258adc25bb45e47de193222f95e68a44f7a7ab53c4283b3f007208a11bf7e/xtreme.py' ``` ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4555/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4555/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4554
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4554/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4554/comments
https://api.github.com/repos/huggingface/datasets/issues/4554/events
https://github.com/huggingface/datasets/pull/4554
1,283,369,453
PR_kwDODunzps46Sv_f
4,554
Fix WMT dataset loading issue and docs update (Re-opened)
{ "login": "khushmeeet", "id": 8711912, "node_id": "MDQ6VXNlcjg3MTE5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khushmeeet", "html_url": "https://github.com/khushmeeet", "followers_url": "https://api.github.com/users/khushmeeet/followers", "following_url": "https://api.github.com/users/khushmeeet/following{/other_user}", "gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}", "starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions", "organizations_url": "https://api.github.com/users/khushmeeet/orgs", "repos_url": "https://api.github.com/users/khushmeeet/repos", "events_url": "https://api.github.com/users/khushmeeet/events{/privacy}", "received_events_url": "https://api.github.com/users/khushmeeet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,656,055,576,000
1,657,294,760,000
1,657,294,064,000
CONTRIBUTOR
null
This PR is a fix for #4354. Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and READMEs are updated for the corresponding datasets. Let me know if any additional changes are required. Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4554/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4554", "html_url": "https://github.com/huggingface/datasets/pull/4554", "diff_url": "https://github.com/huggingface/datasets/pull/4554.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4554.patch", "merged_at": 1657294064000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4553
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4553/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4553/comments
https://api.github.com/repos/huggingface/datasets/issues/4553/events
https://github.com/huggingface/datasets/pull/4553
1,282,779,560
PR_kwDODunzps46Q1q7
4,553
Stop dropping columns in to_tf_dataset() before we load batches
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq Rebasing fixed the test failures, so this should be ready to review now! There's still a failure on Win but it seems unrelated.", "Gentle ping @lhoestq ! This is a simple fix (dropping columns after loading a batch from the dataset rather than with `.remove_columns()` to make sure we don't break transforms), and tests are green so we're ready for review!", "@lhoestq Test is in!" ]
1,656,008,465,000
1,656,961,213,000
1,656,960,541,000
MEMBER
null
`to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no real way to check which columns the transform might need, we skip dropping columns and instead drop keys from the batch after we load it. cc @amyeroberts and https://github.com/huggingface/notebooks/pull/202
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4553/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4553/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4553", "html_url": "https://github.com/huggingface/datasets/pull/4553", "diff_url": "https://github.com/huggingface/datasets/pull/4553.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4553.patch", "merged_at": 1656960541000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4552
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4552/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4552/comments
https://api.github.com/repos/huggingface/datasets/issues/4552/events
https://github.com/huggingface/datasets/pull/4552
1,282,615,646
PR_kwDODunzps46QSHV
4,552
Tell users to upload on the hub directly
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks ! I updated the two remaining files" ]
1,655,999,272,000
1,656,258,586,000
1,656,257,951,000
MEMBER
null
As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs. Moreover, since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can open a discussion and tag `datasets` maintainers for reviews. Finally, I removed the _previous good reasons_ to add a dataset on GitHub to only keep this one: > In some rare cases it makes more sense to open a PR on GitHub. For example when you are not the author of the dataset and there is no clear organization / namespace that you can put the dataset under. Does it sound good to you @albertvillanova @julien-c ?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4552/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4552/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4552", "html_url": "https://github.com/huggingface/datasets/pull/4552", "diff_url": "https://github.com/huggingface/datasets/pull/4552.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4552.patch", "merged_at": 1656257951000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4551
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4551/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4551/comments
https://api.github.com/repos/huggingface/datasets/issues/4551/events
https://github.com/huggingface/datasets/pull/4551
1,282,534,807
PR_kwDODunzps46QAV-
4,551
Perform hidden file check on relative data file path
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm aware of this behavior, which is tricky to solve due to fsspec's hidden file handling (see https://github.com/huggingface/datasets/issues/4115#issuecomment-1108819538). I've tested some regex patterns to address this, and they seem to work (will push them on Monday; btw they don't break any of fsspec's tests, so maybe we can contribute this as an enhancement to them). Also, perhaps we should include the files starting with `__` in the results again (we hadn't had issues with this pattern before). WDYT?", "I see. Feel free to merge this one if it's good for you btw :)\r\n\r\n> Also, perhaps we should include the files starting with __ in the results again (we hadn't had issues with this pattern before)\r\n\r\nThe point was mainly to ignore `__pycache__` directories for example. Also also for consistency with the iter_files/iter_archive which are already ignoring them", "Very elegant solution! Feel free to merge if the CI is green after adding the tests.", "CI failure is unrelated to this PR" ]
1,655,995,751,000
1,656,600,560,000
1,656,599,898,000
CONTRIBUTOR
null
Fix #4549
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4551/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4551/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4551", "html_url": "https://github.com/huggingface/datasets/pull/4551", "diff_url": "https://github.com/huggingface/datasets/pull/4551.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4551.patch", "merged_at": 1656599898000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4550
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4550/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4550/comments
https://api.github.com/repos/huggingface/datasets/issues/4550/events
https://github.com/huggingface/datasets/issues/4550
1,282,374,441
I_kwDODunzps5Mb3sp
4,550
imdb source error
{ "login": "Muhtasham", "id": 20128202, "node_id": "MDQ6VXNlcjIwMTI4MjAy", "avatar_url": "https://avatars.githubusercontent.com/u/20128202?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Muhtasham", "html_url": "https://github.com/Muhtasham", "followers_url": "https://api.github.com/users/Muhtasham/followers", "following_url": "https://api.github.com/users/Muhtasham/following{/other_user}", "gists_url": "https://api.github.com/users/Muhtasham/gists{/gist_id}", "starred_url": "https://api.github.com/users/Muhtasham/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Muhtasham/subscriptions", "organizations_url": "https://api.github.com/users/Muhtasham/orgs", "repos_url": "https://api.github.com/users/Muhtasham/repos", "events_url": "https://api.github.com/users/Muhtasham/events{/privacy}", "received_events_url": "https://api.github.com/users/Muhtasham/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Thanks for reporting, @Muhtasham.\r\n\r\nIndeed IMDB dataset is not accessible from yesterday, because the data is hosted on the data owners servers at Stanford (http://ai.stanford.edu/) and these are down due to a power outage originated by a fire: https://twitter.com/StanfordAILab/status/1539472302399623170?s=20&t=1HU1hrtaXprtn14U61P55w\r\n\r\nAs a temporary workaroud, you can load the IMDB dataset with this tweak:\r\n```python\r\nds = load_dataset(\"imdb\", revision=\"tmp-fix-imdb\")\r\n```\r\n" ]
1,655,989,372,000
1,655,992,025,000
1,655,992,024,000
NONE
null
## Describe the bug imdb dataset not loading ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("imdb") ``` ## Expected results ## Actual results ```bash 06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source 06/23/2022 14:46:34 - INFO - datasets.utils.file_utils - HEAD request to http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz timed out, retrying... [1.0] ..... ConnectionError: Couldn't reach http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz (ConnectTimeout(MaxRetryError("HTTPConnectionPool(host='ai.stanford.edu', port=80): Max retries exceeded with url: /~amaas/data/sentiment/aclImdb_v1.tar.gz (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f2d750cf690>, 'Connection to ai.stanford.edu timed out. (connect timeout=100)'))"))) ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4550/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4550/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4549
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4549/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4549/comments
https://api.github.com/repos/huggingface/datasets/issues/4549/events
https://github.com/huggingface/datasets/issues/4549
1,282,312,975
I_kwDODunzps5MbosP
4,549
FileNotFoundError when passing a data_file inside a directory starting with double underscores
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "I have consistently experienced this bug on GitHub actions when bumping to `2.3.2`", "We're working on a fix ;)" ]
1,655,986,764,000
1,656,599,898,000
1,656,599,898,000
MEMBER
null
Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4549/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4549/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4548/comments
https://api.github.com/repos/huggingface/datasets/issues/4548/events
https://github.com/huggingface/datasets/issues/4548
1,282,218,096
I_kwDODunzps5MbRhw
4,548
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)" ]
1,655,981,937,000
1,656,584,132,000
1,656,584,132,000
CONTRIBUTOR
null
If the data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and will therefore be ignored. This happens when a directory is structured as follows: ``` train/ file_1.jpg file_2.jpg test/ file_3.jpg file_4.jpg metadata.jsonl ``` or as follows: ``` train_file_1.jpg train_file_2.jpg test_file_3.jpg test_file_4.jpg metadata.jsonl ``` The same holds for HF repos, because it's ignored by the patterns [here](https://github.com/huggingface/datasets/blob/master/src/datasets/data_files.py#L29) @lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in the imagefolder/audiofolder code? In `data_files.py` it would be more general, but I don't know if there are any other cases when that might be needed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4548/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4548/timeline
null
completed
null
null
false
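The resolution failure described in issue #4548 above comes down to per-split file patterns never matching a root-level `metadata.jsonl`. Below is a minimal, self-contained sketch of that failure mode; the globs are simplified stand-ins for illustration, not the real patterns from `src/datasets/data_files.py`:

```python
from pathlib import Path

# Hypothetical per-split globs standing in for the real resolution logic.
SPLIT_PATTERNS = {"train": "train*", "test": "test*"}

def resolve_data_files(root: str) -> dict:
    root_path = Path(root)
    return {
        split: sorted(str(p) for p in root_path.rglob(pattern) if p.is_file())
        for split, pattern in SPLIT_PATTERNS.items()
    }

# For both layouts shown in the issue, "metadata.jsonl" matches neither
# "train*" nor "test*", so it never lands in any split's file list.
print(resolve_data_files("."))
```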
https://api.github.com/repos/huggingface/datasets/issues/4547
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4547/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4547/comments
https://api.github.com/repos/huggingface/datasets/issues/4547/events
https://github.com/huggingface/datasets/pull/4547
1,282,160,517
PR_kwDODunzps46Ot5u
4,547
[CI] Fix some warnings
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR", "good catch, I thought I resolved them all sorry", "Alright it should be good now" ]
1,655,979,049,000
1,656,425,457,000
1,656,424,794,000
MEMBER
null
There are some warnings in the CI that are annoying; I tried to remove most of them.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4547/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4547/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4547", "html_url": "https://github.com/huggingface/datasets/pull/4547", "diff_url": "https://github.com/huggingface/datasets/pull/4547.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4547.patch", "merged_at": 1656424794000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4546
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4546/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4546/comments
https://api.github.com/repos/huggingface/datasets/issues/4546/events
https://github.com/huggingface/datasets/pull/4546
1,282,093,288
PR_kwDODunzps46Oe_K
4,546
[CI] fixing seqeval install in ci by pinning setuptools-scm
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,976,277,000
1,655,979,856,000
1,655,979,224,000
MEMBER
null
The latest setuptools-scm version supported on Python 3.6 is 6.4.2. However, for some reason CircleCI has version 7, which doesn't work. I fixed this by pinning the version of setuptools-scm in the CircleCI job. Fix https://github.com/huggingface/datasets/issues/4544
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4546/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4546", "html_url": "https://github.com/huggingface/datasets/pull/4546", "diff_url": "https://github.com/huggingface/datasets/pull/4546.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4546.patch", "merged_at": 1655979224000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4545
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4545/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4545/comments
https://api.github.com/repos/huggingface/datasets/issues/4545/events
https://github.com/huggingface/datasets/pull/4545
1,280,899,028
PR_kwDODunzps46KV-y
4,545
Make DuplicateKeysError more user friendly [For Issue #2556]
{ "login": "VijayKalmath", "id": 20517962, "node_id": "MDQ6VXNlcjIwNTE3OTYy", "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VijayKalmath", "html_url": "https://github.com/VijayKalmath", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "> Nice thanks !\r\n> \r\n> After your changes feel free to mark this PR as \"ready for review\" ;)\r\n\r\nMarking PR ready for review.\r\n\r\n@lhoestq Let me know if there is anything else required or if we are good to go ahead and merge.", "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,931,694,000
1,656,409,026,000
1,656,408,364,000
CONTRIBUTOR
null
# What does this PR do? ## Summary *The DuplicateKeysError does not provide any information regarding the examples which have the same key.* *This information is very helpful for debugging the dataset generator script.* ## Additions - ## Changes - Changed the `DuplicateKeysError` class in `src/datasets/keyhash.py` to add the current index and duplicate_key_indices to the error message. - Changed the `check_duplicate_keys` function in `src/datasets/arrow_writer.py` to find the indices of examples with duplicate hashes if duplicate keys are found. ## Deletions - ## To do : - [x] Find a way to find and print the path `<Path to Dataset>` in the error message ## Issues Addressed : Fixes #2556
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4545/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4545/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4545", "html_url": "https://github.com/huggingface/datasets/pull/4545", "diff_url": "https://github.com/huggingface/datasets/pull/4545.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4545.patch", "merged_at": 1656408364000 }
true
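To visualize what PR #4545 above describes, here is a hypothetical sketch of the friendlier error. The class name matches the PR, but the attribute names and message wording are assumptions for illustration, not the merged diff:

```python
class DuplicateKeysError(Exception):
    def __init__(self, key, duplicate_key_indices):
        self.key = key
        self.duplicate_key_indices = duplicate_key_indices
        super().__init__(
            f"Found multiple examples generated with the same key {key!r} "
            f"(at indices {duplicate_key_indices}). Make sure the dataset "
            f"script yields a unique key for every example."
        )

try:
    raise DuplicateKeysError("example-0", [0, 3])  # two examples shared a key
except DuplicateKeysError as err:
    print(err)
```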
https://api.github.com/repos/huggingface/datasets/issues/4544
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4544/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4544/comments
https://api.github.com/repos/huggingface/datasets/issues/4544/events
https://github.com/huggingface/datasets/issues/4544
1,280,500,340
I_kwDODunzps5MUuJ0
4,544
[CI] seqeval installation fails sometimes on python 3.6
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
1,655,915,723,000
1,655,979,224,000
1,655,979,224,000
MEMBER
null
The CI sometimes fails to install seqeval, which cause the `seqeval` metric tests to fail. The installation fails because of this error: ``` Collecting seqeval Downloading seqeval-1.2.2.tar.gz (43 kB) |███████▌ | 10 kB 42.1 MB/s eta 0:00:01 |███████████████ | 20 kB 53.3 MB/s eta 0:00:01 |██████████████████████▌ | 30 kB 67.2 MB/s eta 0:00:01 |██████████████████████████████ | 40 kB 76.1 MB/s eta 0:00:01 |████████████████████████████████| 43 kB 10.0 MB/s Preparing metadata (setup.py) ... - error ERROR: Command errored out with exit status 1: command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/ Complete output (22 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module> 'Programming Language :: Python :: Implementation :: PyPy' File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup return distutils.core.setup(**attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup _setup_distribution = dist = klass(attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__ k: v for k, v in attrs.items() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__ self.finalize_options() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options ep.load()(self, ep.name, value) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load return self.resolve() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5 from __future__ import annotations ^ SyntaxError: future feature annotations is not defined ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. ``` for example in https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300 Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT This could be caused by the latest updates of setuptools-scm
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4544/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4544/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4543
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4543/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4543/comments
https://api.github.com/repos/huggingface/datasets/issues/4543/events
https://github.com/huggingface/datasets/pull/4543
1,280,379,781
PR_kwDODunzps46IiEp
4,543
[CI] Fix upstream hub test url
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Remaining CI failures are unrelated to this fix, merging" ]
1,655,912,067,000
1,655,915,860,000
1,655,915,257,000
MEMBER
null
Some tests were still using moon-staging instead of hub-ci. I also updated the token to use one dedicated to `datasets`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4543/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4543", "html_url": "https://github.com/huggingface/datasets/pull/4543", "diff_url": "https://github.com/huggingface/datasets/pull/4543.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4543.patch", "merged_at": 1655915257000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4542
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4542/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4542/comments
https://api.github.com/repos/huggingface/datasets/issues/4542/events
https://github.com/huggingface/datasets/issues/4542
1,280,269,445
I_kwDODunzps5MT1yF
4,542
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
[ "This has so much potential to be great! Also I think you tagged some poor random dude on the internet whose name is also Joao, lol, edited that for you! ", "cc @sayakpaul here too, since he was interested in our new approaches to converting datasets!", "Noted and I will look into the thread in detail tomorrow once I log back in. ", "@lhoestq I have used TFRecords with `tf.data` for both vision and text and I can say that they are quite performant. I haven't worked with Feather yet as similarly as I have with TFRecords. If you haven't started the benchmarking script yet, I can prepare a Colab notebook that loads Feather files, converts them into a `tf.data` pipeline, and does some basic preprocessing. \r\n\r\nBut in my limited understanding, Feather might be better suited for CSV files. Not yet sure if it's good for modalities like images. ", "> Not yet sure if it's good for modalities like images.\r\n\r\nWe store images pretty much the same way as tensorflow_datasets (i.e. storing the encoded image bytes, or a path to the local image, so that the image can be decoded on-the-fly), so as long as we use something similar as TFDS for image decoding it should be ok", "So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly? But it introduces an I/O redundancy of having to read the images every time.\r\n\r\nWith caching it could be somewhat mitigated but it's not a good solution for bigger image datasets. ", "> So for image datasets, we could potentially store the paths in the feather format and decode and read them on the fly?\r\n\r\nhopefully yes :) \r\n\r\nI double-checked the TFDS source code and they always save the bytes actually, not the path. Anyway we'll see if we run into issues or not (as a first step we can require the bytes to be in the feather file)", "Yes. For images, TFDS actually prepares TFRecords first for encoding and then reuses them for every subsequent call. ", "@lhoestq @Rocketknight1 I worked on [this PoC](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59) that\r\n\r\n* Creates Feather files from a medium resolution dataset (`tf_flowers`).\r\n* Explores different options with TensorFlow IO to load the Feather files. \r\n\r\nI haven't benchmarked those different options yet. There's also a gotcha that I have noted in the PoC. I hope it gets us started but I'm sorry if this is redundant. ", "Cool thanks ! If I understand correctly in your PoC you store the flattened array of pixels in the feather file. This will take a lot of disk space.\r\n\r\nMaybe we could just save the encoded bytes and let users apply a `map` to decode/transform them into the format they need for training ? 
Users can use tf.image to do so for example", "@lhoestq this is what I tried:\r\n\r\n```py\r\ndef read_image(path):\r\n with open(path, \"rb\") as f:\r\n return f.read()\r\n\r\n\r\ntotal_images_written = 0\r\n\r\nfor step in tqdm.tnrange(int(math.ceil(len(image_paths) / batch_size))):\r\n batch_image_paths = image_paths[step * batch_size : (step + 1) * batch_size]\r\n batch_image_labels = all_integer_labels[step * batch_size : (step + 1) * batch_size]\r\n\r\n data = [read_image(path) for path in batch_image_paths]\r\n table = pa.Table.from_arrays([data, batch_image_labels], [\"data\", \"labels\"])\r\n write_feather(table, f\"/tmp/flowers_feather_{step}.feather\", chunksize=chunk_size)\r\n total_images_written += len(batch_image_paths)\r\n print(f\"Total images written: {total_images_written}.\")\r\n\r\n del data\r\n```\r\n\r\nI got the feather files done (no resizing required as you can see):\r\n\r\n```sh\r\nls -lh /tmp/*.feather\r\n\r\n-rw-r--r-- 1 sayakpaul wheel 64M Jun 24 09:28 /tmp/flowers_feather_0.feather\r\n-rw-r--r-- 1 sayakpaul wheel 59M Jun 24 09:28 /tmp/flowers_feather_1.feather\r\n-rw-r--r-- 1 sayakpaul wheel 51M Jun 24 09:28 /tmp/flowers_feather_2.feather\r\n-rw-r--r-- 1 sayakpaul wheel 45M Jun 24 09:28 /tmp/flowers_feather_3.feather\r\n```\r\n\r\nNow there seems to be a problem with `tfio.arrow`:\r\n\r\n```py\r\nimport tensorflow_io.arrow as arrow_io\r\n\r\n\r\ndataset = arrow_io.ArrowFeatherDataset(\r\n [\"/tmp/flowers_feather_0.feather\"],\r\n columns=(0, 1),\r\n output_types=(tf.string, tf.int64),\r\n output_shapes=([], []),\r\n batch_mode=\"auto\",\r\n)\r\n\r\nprint(dataset.element_spec) \r\n```\r\n\r\nPrints:\r\n\r\n```\r\n(TensorSpec(shape=(None,), dtype=tf.string, name=None),\r\n TensorSpec(shape=(None,), dtype=tf.int64, name=None))\r\n```\r\n\r\nBut when I do `sample = next(iter(dataset))` it goes into:\r\n\r\n```py\r\nInternalError Traceback (most recent call last)\r\nInput In [30], in <cell line: 1>()\r\n----> 1 sample = next(iter(dataset))\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:766, in OwnedIterator.__next__(self)\r\n 764 def __next__(self):\r\n 765 try:\r\n--> 766 return self._next_internal()\r\n 767 except errors.OutOfRangeError:\r\n 768 raise StopIteration\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/data/ops/iterator_ops.py:749, in OwnedIterator._next_internal(self)\r\n 746 # TODO(b/77291417): This runs in sync mode as iterators use an error status\r\n 747 # to communicate that there is no more data to iterate over.\r\n 748 with context.execution_mode(context.SYNC):\r\n--> 749 ret = gen_dataset_ops.iterator_get_next(\r\n 750 self._iterator_resource,\r\n 751 output_types=self._flat_output_types,\r\n 752 output_shapes=self._flat_output_shapes)\r\n 754 try:\r\n 755 # Fast path for the case `self._structure` is not a nested structure.\r\n 756 return self._element_spec._from_compatible_tensor_list(ret) # pylint: disable=protected-access\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/ops/gen_dataset_ops.py:3017, in iterator_get_next(iterator, output_types, output_shapes, name)\r\n 3015 return _result\r\n 3016 except _core._NotOkStatusException as e:\r\n-> 3017 _ops.raise_from_not_ok_status(e, name)\r\n 3018 except _core._FallbackException:\r\n 3019 pass\r\n\r\nFile ~/.local/bin/.virtualenvs/jax/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:7164, in raise_from_not_ok_status(e, name)\r\n 7162 def 
raise_from_not_ok_status(e, name):\r\n 7163 e.message += (\" name: \" + name if name is not None else \"\")\r\n-> 7164 raise core._status_to_exception(e) from None\r\n\r\nInternalError: Invalid: INVALID_ARGUMENT: arrow data type 0x7ff9899d8038 is not supported: Type error: Arrow data type is not supported [Op:IteratorGetNext]\r\n```\r\n\r\nSome additional notes:\r\n\r\n* I can actually decode an image encoded with `read_image()` (shown earlier):\r\n\r\n ```py\r\n sample_image_path = image_paths[0]\r\n encoded_image = read_image(sample_image_path)\r\n image = tf.image.decode_png(encoded_image, 3)\r\n print(image.shape)\r\n ```\r\n\r\n* If the above `tf.data.Dataset` object would have succeeded my plan was to just map the decoder like so:\r\n\r\n ```py\r\n autotune = tf.data.AUTOTUNE\r\n dataset = dataset.map(lambda x, y: (tf.image.decode_png(x, 3), y), num_parallel_calls=autotune)\r\n ```", "@lhoestq I think I was able to make it work in the way you were envisioning. Here's the PoC:\r\nhttps://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb\r\n\r\nSome details:\r\n\r\n* I am currently serializing the images as strings with `base64`). In comparison to the flattened arrays as before, the size of the individual feather files has reduced (144 MB -> 85 MB, largest).\r\n* When decoding, I am first decoding the base64 string and then decoding that string (with `tf.io.decode_base64`) as an image with `tf.image.decode_png()`. \r\n* The entire workflow (from generating the Feather files to loading them and preparing the batched `tf.data` pipeline) involves the following libraries: `pyarraow`, `tensorflow-io`, and `tensorflow`. \r\n\r\nCc: @Rocketknight1 @gante ", "Cool thanks ! Too bad the Arrow binary type doesn't seem to be supported in `arrow_io.ArrowFeatherDataset` :/ We would also need it to support Arrow struct type. Indeed images in `datasets` are represented using an Arrow type\r\n```python\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n```\r\nnot sure yet how hard it is to support this though.\r\n\r\nChanging the typing on our side would create concerning breaking changes, that's why it would be awesome if it could work using these types", "If the ArrowFeatherDataset doesn't yet support it, I guess our hands are a bit tied at the moment. \r\n\r\nIIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n\r\n```\r\npa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n``` \r\n\r\nIn that case, `pa.binary()` isn't yet supported.", "> IIUC, in my [latest PoC notebook](https://gist.github.com/sayakpaul/f7d5cc312cd01cb31098fad3fd9c6b59#file-feather-tf-poc-bytes-ipynb), you wanted to see each entry in the feather file to be represented like so?\r\n> \r\n> pa.struct({\"path\": pa.string(), \"bytes\": pa.binary()})\r\n\r\nYea because that's the data format we're using. If we were to use base64, then we would have to process the full dataset to convert it, which can take some time. Converting to TFRecords would be simpler than converting to base64 in Feather files.\r\n\r\nMaybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset. What do you think ? 
Any other alternative in mind ?", "> Maybe it would take too much time to be worth exploring, but according to https://github.com/tensorflow/io/issues/1361#issuecomment-819029002 it's possible to add support for binary type in ArrowFeatherDataset.\r\n\r\nShould be possible as per the comment but there hasn't been any progress and it's been more than a year. \r\n\r\n> If we were to use base64, then we would have to process the full dataset to convert it, which can take some time.\r\n\r\nI don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from. \r\n\r\n> What do you think ? Any other alternative in mind ?\r\n\r\nTFRecords since the TensorFlow ecosystem has developed good support for it over the years. ", "> I don't understand this. I would think TFRecords would also need something similar but I need the context you're coming from.\r\n\r\nUsers already have a copy of the dataset in Arrow format (we can change this to Feather). So to load the Arrow/feather files to a TF dataset we need TF IO or something like that. Otherwise the user has to convert all the files from Arrow to TFRecords to use TF data efficiently. But the conversion needs resources: CPU, disk, time. Converting the images to base64 require the same sort of resources.\r\n\r\nSo the issue we're trying to tackle is how to load the Arrow data in TF without having to convert anything ^^", "Yeah, it looks like in its current state the tfio support for `Feather` is incomplete, so we'd end up having to write a lot of it, or do a conversion that defeats the whole point (because if we're going to convert the whole dataset we might as well convert to `TFRecord`).", "Understood @lhoestq. Thanks for explaining!\r\n\r\nAgreed with @Rocketknight1. ", "@lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?", "> @lhoestq Although I think this is a dead-end for now unfortunately, because of the limitations at TF's end, we could still explore automatic conversion to TFRecord, or I could dive into refining `to_tf_dataset()` to yield unbatched samples and/or load samples with multiprocessing to improve throughput. Do you have any preferences there?\r\n\r\nHappy to take part there @Rocketknight1.", "If `to_tf_dataset` can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?", "@lhoestq why one would convert to TFRecords after unbatching? ", "> If to_tf_dataset can be unbatched, then it should be fairly easy for users to convert the TF dataset to TFRecords right ?\r\n\r\nSort of! A `tf.data.Dataset` is more like an iterator, and does not support sample indexing. `to_tf_dataset()` creates an iterator, but to convert that to `TFRecord`, the user would have to iterate over the whole thing and manually save the stream of samples to files. ", "Someone would like to try to dive into tfio to fix this ? 
Sounds like a good opportunity to learn what are the best ways to load a dataset for TF, and also the connections between Arrow and TF.\r\n\r\nIf we can at least have the Arrow `binary` type working for TF that would be awesome already (issue https://github.com/tensorflow/io/issues/1361)\r\n\r\nalso cc @nateraw in case you'd be interested ;)", "> Sounds like a good opportunity to learn what are the best ways to load a dataset for TF\r\n\r\nThe recommended way would likely be a combination of TFRecords and `tf.data`. \r\n\r\nExploring the connection between Arrow and TensorFlow is definitely worth pursuing though. But I am not sure about the implications of storing images in a format supported by Arrow. I guess we'll know more once we have at least figured out the support for `binary` type for TFIO. I will spend some time on it and keep this thread updated. ", "I am currently working on a fine-tuning notebook for the TFSegFormer model (Semantic Segmentation). The resolution is high for both the input images and the labels - (512, 512, 3). Here's the [Colab Notebook](https://colab.research.google.com/drive/1jAtR7Z0lYX6m6JsDI5VByh5vFaNhHIbP?usp=sharing) (it's a WIP so please bear that in mind).\r\n\r\nI think the current implementation of `to_tf_dataset()` does create a bottleneck here since the GPU utilization is quite low. ", "Here's a notebook showing the performance difference: https://colab.research.google.com/gist/sayakpaul/d7ca67c90beb47e354942c9d8c0bd8ef/scratchpad.ipynb. \r\n\r\nNote that I acknowledge that it's not an apples-to-apples comparison in many aspects (the dataset isn't the same, data serialization format isn't the same, etc.) but this is the best I could do. ", "Thanks ! I think the speed difference can be partly explained: you use ds.shuffle in your dataset, which is an exact shuffling (compared to TFDS which does buffer shuffling): it slows down query time by 2x to 10x since it has to play with data that are not contiguous.\r\n\r\nThe rest of the speed difference seems to be caused by image decoding (from 330µs/image to 30ms/image)", "Fair enough. Can do one without shuffling too. But it's an important one to consider I guess. " ]
1,655,908,920,000
1,658,135,143,000
null
MEMBER
null
To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory. It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library. Here are a few points to explore - [ ] check the performance of ArrowFeatherDataset in tf.data - [ ] check what would change if we were to switch to Feather if needed, in particular check that those are fine: memory mapping, typing, writing, reading to python objects, etc. We would also need to implement sharding when loading a dataset (this will be done anyway for #546) cc @Rocketknight1 @gante feel free to comment in case I missed anything ! I'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4542/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4542/timeline
null
null
null
null
false
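As a concrete starting point for the exploration proposed in issue #4542 above, here is a minimal sketch adapted from the PoC discussed in its comments: loading sharded Feather files with `tensorflow_io` and decoding image bytes on the fly. The shard names, the column layout (encoded image bytes plus an integer label) and the dtypes are assumptions; and as the thread notes, Arrow `binary`/`struct` columns are not yet supported, so this assumes string columns:

```python
import tensorflow as tf
import tensorflow_io.arrow as arrow_io

# Hypothetical Feather shards holding (encoded_image_bytes, label) columns.
dataset = arrow_io.ArrowFeatherDataset(
    ["shard_0.feather", "shard_1.feather"],
    columns=(0, 1),
    output_types=(tf.string, tf.int64),
    output_shapes=([], []),
    batch_mode="auto",
)
# Unbatch to scalar strings, then decode each image lazily.
dataset = dataset.unbatch().map(
    lambda image_bytes, label: (tf.image.decode_png(image_bytes, channels=3), label),
    num_parallel_calls=tf.data.AUTOTUNE,
)
```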
https://api.github.com/repos/huggingface/datasets/issues/4541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4541/comments
https://api.github.com/repos/huggingface/datasets/issues/4541/events
https://github.com/huggingface/datasets/pull/4541
1,280,161,436
PR_kwDODunzps46HyPK
4,541
Fix timestamp conversion from Pandas to Python datetime in streaming mode
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "CI failures are unrelated to this PR, merging" ]
1,655,905,201,000
1,655,915,967,000
1,655,915,349,000
MEMBER
null
Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays. However, a timestamp array is always converted to datetime.datetime objects. This created an inconsistency between streaming and non-streaming, e.g. the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.Timestamp in streaming. I fixed this by always converting pd.Timestamp to datetime.datetime during the example encoding step. I fixed the same issue for pd.Timedelta as well. Finally, I added an extra conversion step for Series and DataFrame to take this into account in case such data are passed as Series or DataFrame. Fix https://github.com/huggingface/datasets/issues/4533 Related to https://github.com/huggingface/datasets-server/issues/397
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4541/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4541/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4541", "html_url": "https://github.com/huggingface/datasets/pull/4541", "diff_url": "https://github.com/huggingface/datasets/pull/4541.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4541.patch", "merged_at": 1655915349000 }
true
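A small repro of the type mismatch that PR #4541 above fixes (the timestamp value is arbitrary): pyarrow accepts a `pd.Timestamp` when building a timestamp array, but converting the array back to Python objects yields plain `datetime.datetime`:

```python
import pandas as pd
import pyarrow as pa

arr = pa.array([pd.Timestamp("2016-07-01 00:00:00")])  # built from a pd.Timestamp
value = arr.to_pylist()[0]                             # read back as a Python object
print(type(value))                                     # <class 'datetime.datetime'>
assert not isinstance(value, pd.Timestamp)             # the pandas type is gone
```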
https://api.github.com/repos/huggingface/datasets/issues/4540
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4540/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4540/comments
https://api.github.com/repos/huggingface/datasets/issues/4540/events
https://github.com/huggingface/datasets/issues/4540
1,280,142,942
I_kwDODunzps5MTW5e
4,540
Avoid splitting by `.py` for the file.
{ "login": "espoirMur", "id": 18573157, "node_id": "MDQ6VXNlcjE4NTczMTU3", "avatar_url": "https://avatars.githubusercontent.com/u/18573157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/espoirMur", "html_url": "https://github.com/espoirMur", "followers_url": "https://api.github.com/users/espoirMur/followers", "following_url": "https://api.github.com/users/espoirMur/following{/other_user}", "gists_url": "https://api.github.com/users/espoirMur/gists{/gist_id}", "starred_url": "https://api.github.com/users/espoirMur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/espoirMur/subscriptions", "organizations_url": "https://api.github.com/users/espoirMur/orgs", "repos_url": "https://api.github.com/users/espoirMur/repos", "events_url": "https://api.github.com/users/espoirMur/events{/privacy}", "received_events_url": "https://api.github.com/users/espoirMur/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
{ "login": "VijayKalmath", "id": 20517962, "node_id": "MDQ6VXNlcjIwNTE3OTYy", "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VijayKalmath", "html_url": "https://github.com/VijayKalmath", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "type": "User", "site_admin": false }
[ { "login": "VijayKalmath", "id": 20517962, "node_id": "MDQ6VXNlcjIwNTE3OTYy", "avatar_url": "https://avatars.githubusercontent.com/u/20517962?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VijayKalmath", "html_url": "https://github.com/VijayKalmath", "followers_url": "https://api.github.com/users/VijayKalmath/followers", "following_url": "https://api.github.com/users/VijayKalmath/following{/other_user}", "gists_url": "https://api.github.com/users/VijayKalmath/gists{/gist_id}", "starred_url": "https://api.github.com/users/VijayKalmath/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VijayKalmath/subscriptions", "organizations_url": "https://api.github.com/users/VijayKalmath/orgs", "repos_url": "https://api.github.com/users/VijayKalmath/repos", "events_url": "https://api.github.com/users/VijayKalmath/events{/privacy}", "received_events_url": "https://api.github.com/users/VijayKalmath/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @espoirMur, thanks for reporting.\r\n\r\nYou are right: that code line could be improved and made more generically valid.\r\n\r\nOn the other hand, I would suggest using `os.path.splitext` instead.\r\n\r\nAre you willing to open a PR? :)", "I will have a look.. \r\n\r\nThis weekend .. ", "@albertvillanova , Can you have a look at #4590. \r\n\r\nThanks ", "#self-assign" ]
1,655,904,415,000
1,657,199,864,000
1,657,199,864,000
NONE
null
https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272 Hello, thank you for this library. I was using it and I hit one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I run the code to load a local module, this line fails because after splitting it tries to save the code to my home directory. Steps to reproduce: - have a home folder whose name ends with `.py` - load a module from a local folder, e.g. `qa_dataset = load_dataset("src/data/build_qa_dataset.py")`; it fails. A possible workaround would be to use pathlib at the mentioned line: ` meta_path = Path(importable_local_file).parent.joinpath("metadata.json")`; this can alleviate the issue. Let me know what your thoughts are on this and I can try to fix it with a PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4540/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4540/timeline
null
completed
null
null
false
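To make the edge case in issue #4540 above concrete, here is a small sketch of the failure and two safer alternatives. The `.split(".py")` line illustrates the failure mode rather than quoting the exact code at `load.py#L272`, and the paths are made up:

```python
import os
from pathlib import Path

importable_local_file = "/home/espoir.py/src/data/build_qa_dataset.py"

# Failure mode: splitting on ".py" also matches the home directory name.
print(importable_local_file.split(".py")[0])        # '/home/espoir'

# Safer: strip only the real extension...
print(os.path.splitext(importable_local_file)[0])   # '/home/espoir.py/src/data/build_qa_dataset'
# ...or build sibling paths without string surgery, as the issue suggests.
print(Path(importable_local_file).parent.joinpath("metadata.json"))
```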
https://api.github.com/repos/huggingface/datasets/issues/4539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4539/comments
https://api.github.com/repos/huggingface/datasets/issues/4539/events
https://github.com/huggingface/datasets/pull/4539
1,279,779,829
PR_kwDODunzps46GfWv
4,539
Replace deprecated logging.warn with logging.warning
{ "login": "hugovk", "id": 1324225, "node_id": "MDQ6VXNlcjEzMjQyMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/1324225?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hugovk", "html_url": "https://github.com/hugovk", "followers_url": "https://api.github.com/users/hugovk/followers", "following_url": "https://api.github.com/users/hugovk/following{/other_user}", "gists_url": "https://api.github.com/users/hugovk/gists{/gist_id}", "starred_url": "https://api.github.com/users/hugovk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hugovk/subscriptions", "organizations_url": "https://api.github.com/users/hugovk/orgs", "repos_url": "https://api.github.com/users/hugovk/repos", "events_url": "https://api.github.com/users/hugovk/events{/privacy}", "received_events_url": "https://api.github.com/users/hugovk/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,655,886,749,000
1,655,905,403,000
1,655,902,311,000
CONTRIBUTOR
null
Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)). * https://docs.python.org/3/library/logging.html#logging.Logger.warning * https://github.com/python/cpython/issues/57444
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4539/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4539", "html_url": "https://github.com/huggingface/datasets/pull/4539", "diff_url": "https://github.com/huggingface/datasets/pull/4539.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4539.patch", "merged_at": 1655902311000 }
true
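For reference, the mechanical change PR #4539 above applies, shown on the module-level helpers:

```python
import logging

logging.warn("deprecated alias; emits a DeprecationWarning")  # before
logging.warning("supported spelling")                         # after
```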
https://api.github.com/repos/huggingface/datasets/issues/4538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4538/comments
https://api.github.com/repos/huggingface/datasets/issues/4538/events
https://github.com/huggingface/datasets/issues/4538
1,279,409,786
I_kwDODunzps5MQj56
4,538
Dataset Viewer issue for Pile of Law
{ "login": "Breakend", "id": 1609857, "node_id": "MDQ6VXNlcjE2MDk4NTc=", "avatar_url": "https://avatars.githubusercontent.com/u/1609857?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Breakend", "html_url": "https://github.com/Breakend", "followers_url": "https://api.github.com/users/Breakend/followers", "following_url": "https://api.github.com/users/Breakend/following{/other_user}", "gists_url": "https://api.github.com/users/Breakend/gists{/gist_id}", "starred_url": "https://api.github.com/users/Breakend/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Breakend/subscriptions", "organizations_url": "https://api.github.com/users/Breakend/orgs", "repos_url": "https://api.github.com/users/Breakend/repos", "events_url": "https://api.github.com/users/Breakend/events{/privacy}", "received_events_url": "https://api.github.com/users/Breakend/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Breakend, yes – we'll propose a solution today", "Thanks so much, I appreciate it!", "Thanks so much for adding the docs. I was able to successfully hide the viewer using the \r\n```\r\nviewer: false\r\n```\r\nflag in the README.md of the dataset. I'm closing the issue because this is resolved. Thanks again!", "Awesome! Thanks for confirming. cc @severo ", "Just for the record:\r\n\r\n- the doc\r\n \r\n<img width=\"1430\" alt=\"Capture d’écran 2022-06-27 à 09 29 27\" src=\"https://user-images.githubusercontent.com/1676121/175884089-bca6c0d5-6387-473e-98ca-86a910ede4bd.png\">\r\n\r\n- the dataset main page\r\n\r\n<img width=\"1134\" alt=\"Capture d’écran 2022-06-27 à 09 29 05\" src=\"https://user-images.githubusercontent.com/1676121/175884152-5f285bf0-3471-45de-927a-e141b00ebb33.png\">\r\n\r\n- the dataset viewer page\r\n\r\n<img width=\"567\" alt=\"Capture d’écran 2022-06-27 à 09 29 16\" src=\"https://user-images.githubusercontent.com/1676121/175884191-ab6a297b-1c11-417e-bbde-0b7623278a79.png\">\r\n" ]
1,655,866,120,000
1,656,315,023,000
1,656,282,382,000
NONE
null
### Link https://huggingface.co/datasets/pile-of-law/pile-of-law ### Description Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information? Thanks so much! ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4538/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4538/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4537/comments
https://api.github.com/repos/huggingface/datasets/issues/4537/events
https://github.com/huggingface/datasets/pull/4537
1,279,144,310
PR_kwDODunzps46ESJn
4,537
Fix WMT dataset loading issue and docs update
{ "login": "khushmeeet", "id": 8711912, "node_id": "MDQ6VXNlcjg3MTE5MTI=", "avatar_url": "https://avatars.githubusercontent.com/u/8711912?v=4", "gravatar_id": "", "url": "https://api.github.com/users/khushmeeet", "html_url": "https://github.com/khushmeeet", "followers_url": "https://api.github.com/users/khushmeeet/followers", "following_url": "https://api.github.com/users/khushmeeet/following{/other_user}", "gists_url": "https://api.github.com/users/khushmeeet/gists{/gist_id}", "starred_url": "https://api.github.com/users/khushmeeet/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/khushmeeet/subscriptions", "organizations_url": "https://api.github.com/users/khushmeeet/orgs", "repos_url": "https://api.github.com/users/khushmeeet/repos", "events_url": "https://api.github.com/users/khushmeeet/events{/privacy}", "received_events_url": "https://api.github.com/users/khushmeeet/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream git@github.com:huggingface/datasets.git\r\ngit pull --ff-only upstream master\r\ngit checkout -b wmt-datasets-fix2\r\ngit cherry-pick f2d6c995d5153131168f64fc60fe33a7813739a4 a9fdead5f435aeb88c237600be28eb8d4fde4c55\r\n```", "Closing this PR due to unwanted commit changes. Will be opening new PR for the same issue." ]
1,655,848,082,000
1,656,054,343,000
1,656,054,310,000
CONTRIBUTOR
null
This PR is a fix for #4354. Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and the READMEs are updated for the corresponding datasets. As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is that `tensorflow-text` is not supported on M1s and there is no supporting repo by Apple or Google. So I was not able to perform local testing. Let me know if any additional changes are required. Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4537/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4537", "html_url": "https://github.com/huggingface/datasets/pull/4537", "diff_url": "https://github.com/huggingface/datasets/pull/4537.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4537.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4536/comments
https://api.github.com/repos/huggingface/datasets/issues/4536/events
https://github.com/huggingface/datasets/pull/4536
1,278,734,727
PR_kwDODunzps46C2z6
4,536
Properly raise FileNotFound even if the dataset is private
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,831,150,000
1,656,413,211,000
1,656,412,570,000
MEMBER
null
`tests/test_load.py::test_load_streaming_private_dataset` was failing because the Hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNotFoundError, since it first checks for local files before checking the Hub. Moreover, when use_auth_token is not set (the default is False), we should not pass `token=None` to HfApi.dataset_info, or it will use the local token by default - instead, it should use no token. It's currently not possible to ask for no token to be used, so as a workaround I simply set token="no-token".
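For illustration, a minimal sketch of the resolution order described above - local files first, then the Hub with the `"no-token"` workaround. The helper name `resolve_dataset` and the exact error message are hypothetical, not the actual `datasets` internals:

```python
import os

from huggingface_hub import HfApi


def resolve_dataset(path: str, use_auth_token=False):
    # Local files take precedence over the Hub.
    if os.path.exists(path):
        return path
    # When authentication is disabled, pass a dummy token so that
    # HfApi.dataset_info does not silently fall back to the locally
    # stored token (which is what token=None would do).
    token = use_auth_token if isinstance(use_auth_token, str) else "no-token"
    try:
        HfApi().dataset_info(path, token=token)
    except Exception:
        # A 401 on a private dataset surfaces as FileNotFoundError, since
        # neither a local path nor a visible Hub dataset was found.
        raise FileNotFoundError(f"Couldn't find {path!r} locally or on the Hub")
    return path
```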
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4536/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4536/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4536", "html_url": "https://github.com/huggingface/datasets/pull/4536", "diff_url": "https://github.com/huggingface/datasets/pull/4536.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4536.patch", "merged_at": 1656412570000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4535/comments
https://api.github.com/repos/huggingface/datasets/issues/4535/events
https://github.com/huggingface/datasets/pull/4535
1,278,365,039
PR_kwDODunzps46BnXq
4,535
Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Also, I had a doubt while checking the code related to the indices... \r\n\r\n@lhoestq, there's a value in `config.py` named `DATASET_INDICES_FILENAME` which has the arrow extension (which I assume it should be `indices.faiss`, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an `ArrowDataset` in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/config.py#L183\r\n\r\nhttps://github.com/huggingface/datasets/blob/aec86ea4b790ccccc9b2e0376a496728b1c914cc/src/datasets/arrow_dataset.py#L1079-L1092\r\n\r\nSo should I also remove that?\r\n\r\nP.S. I also edited the following code comment which I found misleading as it's not actually storing the indices.\r\n\r\nhttps://github.com/huggingface/datasets/blob/8ddc4bbeb1e2bd307b21f5d21f884649aa2bf640/src/datasets/arrow_dataset.py#L1122", "_The documentation is not available anymore as the PR was closed or merged._", "> @lhoestq, there's a value in config.py named DATASET_INDICES_FILENAME which has the arrow extension (which I assume it should be indices.faiss, as the Elastic Search indices are not stored in a file, but not sure), and it's just used before actually saving an ArrowDataset in disk, but since those indices are never stored AFAIK, is that actually required?\r\n\r\nThe arrow file is used to store an indices mapping (when you shuffle the dataset for example) - not for a faiss index ;)", "Ok cool thanks a lot for the explanation @lhoestq I was not sure about that :+1: I'll also add it there as you suggested!", "CI failures are unrelated to this PR and fixed on master, merging" ]
1,655,813,929,000
1,656,347,109,000
1,656,346,476,000
CONTRIBUTOR
null
Currently, even though the `batch_size` used when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` has neither a `batch_size` parameter to propagate to the nested `FaissIndex.add_vectors` call nor `*args, **kwargs`. So this PR adds the `batch_size` parameter to both `ArrowDataset.add_faiss_index` and `ArrowDataset.add_faiss_index_from_external_arrays`. This is useful for tweaking the `batch_size` according to the VM specifications.
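As a quick illustration of the new parameter, a hedged usage sketch with a toy dataset of random embeddings (column name and sizes are arbitrary):

```python
import numpy as np
from datasets import Dataset

# Toy dataset with 1000 random 128-d embeddings (illustrative only).
ds = Dataset.from_dict(
    {"embeddings": np.random.rand(1000, 128).astype("float32").tolist()}
)

# batch_size is now forwarded down to FaissIndex.add_vectors.
ds.add_faiss_index(column="embeddings", batch_size=256)

scores, examples = ds.get_nearest_examples(
    "embeddings", np.random.rand(128).astype("float32"), k=5
)
```

Smaller batch sizes trade indexing speed for a lower peak memory footprint, which is the point of exposing the knob.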
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4535/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4535/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4535", "html_url": "https://github.com/huggingface/datasets/pull/4535", "diff_url": "https://github.com/huggingface/datasets/pull/4535.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4535.patch", "merged_at": 1656346476000 }
true
https://api.github.com/repos/huggingface/datasets/issues/4534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4534/comments
https://api.github.com/repos/huggingface/datasets/issues/4534/events
https://github.com/huggingface/datasets/pull/4534
1,277,897,197
PR_kwDODunzps46AFK_
4,534
Add `tldr_news` dataset
{ "login": "JulesBelveze", "id": 32683010, "node_id": "MDQ6VXNlcjMyNjgzMDEw", "avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JulesBelveze", "html_url": "https://github.com/JulesBelveze", "followers_url": "https://api.github.com/users/JulesBelveze/followers", "following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}", "gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}", "starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions", "organizations_url": "https://api.github.com/users/JulesBelveze/orgs", "repos_url": "https://api.github.com/users/JulesBelveze/repos", "events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}", "received_events_url": "https://api.github.com/users/JulesBelveze/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent 😃 ", "Thanks, we will update the guide ;)" ]
1,655,787,763,000
1,655,994,834,000
1,655,821,271,000
NONE
null
This PR aims at adding support for a news dataset: `tldr news`. This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4534/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4534", "html_url": "https://github.com/huggingface/datasets/pull/4534", "diff_url": "https://github.com/huggingface/datasets/pull/4534.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4534.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4533/comments
https://api.github.com/repos/huggingface/datasets/issues/4533/events
https://github.com/huggingface/datasets/issues/4533
1,277,211,490
I_kwDODunzps5MILNi
4,533
Timestamp not returned as datetime objects in streaming mode
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[]
1,655,746,127,000
1,655,915,349,000
1,655,915,349,000
MEMBER
null
As reported in (internal) https://github.com/huggingface/datasets-server/issues/397 ```python >>> from datasets import load_dataset >>> dataset = load_dataset("ett", name="h2", split="test", streaming=True) >>> d = next(iter(dataset)) >>> d['start'] Timestamp('2016-07-01 00:00:00') ``` while loading in non-streaming mode it returns `datetime.datetime(2016, 7, 1, 0, 0)`
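Until the streaming path is aligned with the non-streaming one, a possible user-side workaround is to convert the returned value explicitly - the streaming value above is a `pandas.Timestamp`, which exposes `to_pydatetime()`:

```python
import datetime

import pandas as pd

d_start = pd.Timestamp("2016-07-01 00:00:00")
# Convert the pandas Timestamp back to a stdlib datetime,
# matching what non-streaming mode returns for this column.
assert d_start.to_pydatetime() == datetime.datetime(2016, 7, 1, 0, 0)
```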
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4533/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4533/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4532/comments
https://api.github.com/repos/huggingface/datasets/issues/4532/events
https://github.com/huggingface/datasets/pull/4532
1,277,167,129
PR_kwDODunzps459kB7
4,532
Add Video feature
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4532). All of your documentation changes will be reflected on that endpoint." ]
1,655,743,001,000
1,657,120,794,000
null
CONTRIBUTOR
null
The following adds a `Video` feature for encoding/decoding videos on the fly from in-memory bytes. It uses my own `encoded-video` library, which is basically `pytorchvideo`'s encoded video but with all the `torch`-specific stuff stripped out. Because of that, and because the tool I used under the hood is not very mature, I leave this as a draft idea that we can build off of.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4532/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4532/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4532", "html_url": "https://github.com/huggingface/datasets/pull/4532", "diff_url": "https://github.com/huggingface/datasets/pull/4532.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4532.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4531/comments
https://api.github.com/repos/huggingface/datasets/issues/4531/events
https://github.com/huggingface/datasets/issues/4531
1,277,054,172
I_kwDODunzps5MHkzc
4,531
Dataset Viewer issue for CSV datasets
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "this should now be fixed", "Confirmed, it's fixed now. Thanks for reporting, and thanks @coyotte508 for fixing it\r\n\r\n<img width=\"1123\" alt=\"Capture d’écran 2022-06-21 à 10 28 05\" src=\"https://user-images.githubusercontent.com/1676121/174753833-1b453a5a-6a90-4717-bca1-1b5fc6b75e4a.png\">\r\n" ]
1,655,736,984,000
1,655,800,126,000
1,655,800,107,000
CONTRIBUTOR
null
### Link https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin ### Description I'm populating CSV datasets [here](https://huggingface.co/scikit-learn), but the viewer is not enabled and it looks for a dataset loading script; the datasets aren't in the queue either. You can replicate the problem by simply uploading any CSV dataset. ### Owner Yes
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4531/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4531/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4530/comments
https://api.github.com/repos/huggingface/datasets/issues/4530/events
https://github.com/huggingface/datasets/pull/4530
1,276,884,962
PR_kwDODunzps458n_S
4,530
Add AudioFolder packaged loader
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4530). All of your documentation changes will be reflected on that endpoint.", "@lhoestq @mariosasko I don't know what to do with the test, do you have any ideas? :)", "also it's passed in `pyarrow_latest_WIN`", "If the error only happens on 3.6, maybe #4460 can help ^^' It seems to work in 3.7 on the windows CI\r\n\r\n> inferring labels is not the default behavior (drop_labels is set to True in config)\r\n\r\nI think it a missed opportunity to have a consistent API between imagefolder and audiofolder, since they do everything the same way. Can you give more details why you think we should drop the labels by default ?", "Considering audio classification in audio is not as common as image classification in image, I'm ok with having different config defaults as long as they are properly documented (check [Papers With Code](https://paperswithcode.com/datasets) for stats and compare the classification numbers to the other tasks, do this for both modalities)\r\n\r\nAlso, WDYT about creating a generic folder loader that ImageFolder and AudioFolder then subclass to avoid having to update both of them when there is something to update/fix?", "@lhoestq I think it doesn't change the API itself, it just doesn't infer labels by default, but you can **still** set `drop_labels=False` to `load_dataset` and the labels will be inferred. \r\nSuppose that one has data structured as follows:\r\n```\r\ndata/\r\n train/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n test/\r\n audio/\r\n file1.wav\r\n file2.wav\r\n file3.wav\r\n metadata.jsonl\r\n```\r\nIf users load this dataset with `load_dataset(\"audiofolder\", data_dir=\"data\")` (the most native way), they will get a `label` feature that will always be equal to 0 (= \"audio\"). To mitigate this, they will have to always specify `load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=True)` explicitly and I believe it's not convenient. \r\n\r\nAt the same time, `label` column can be added just as easy as adding one argument:` load_dataset(\"audiofolder\", data_dir=\"data\", drop_labels=False)`. As classification task is not as common, I think it should require more symbols to be added to the code :D \r\n\r\nBut this is definitely should be explained in the docs, which I've forgotten to update... I'll add this section soon.\r\n\r\nAlso +to the generic loader, will work on it. \r\n\r\n", "If a metadata.jsonl file is present, then it doesn't have to infer the labels I agree. Note that this is already the case for imagefolder ;) in your case `load_dataset(\"audiofolder\", data_dir=\"data\")` won't return labels !\r\n\r\nLabels are only inferred if there are no metadata.jsonl", "Feel free to merge the `main` branch into yours after updating your fork of `datasets`: https://github.com/huggingface/datasets/issues/4629\r\n\r\nThis should fix some errors in the CI", "@mariosasko could you please review this PR again? :)\r\n\r\nmost of the tests for AutoFolder (base class for AudioFolder and ImageFolder) are now basically copied from Image/AudioFolder (their tests are also almost identical too) and adapted to test other methods. it should be refactored but i think this is not that important for now and might be done in the future PR, wdyt?" ]
1,655,729,642,000
1,660,157,402,000
null
CONTRIBUTOR
null
Will close #3964. AudioFolder is almost identical to ImageFolder, except that inferring labels is not the default behavior (`drop_labels` is set to True in the config); the option of inferring them is preserved, though. Something weird is happening with the `test_data_files_with_metadata_and_archives` test when `streaming` is `True`. Here is the log from the CI: ``` ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/features/audio.py:237: in _decode_non_mp3_path_like array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/util/decorators.py:88: in inner_f return f(*args, **kwargs) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:176: in load raise (exc) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:155: in load context = sf.SoundFile(path) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:629: in __init__ self._file = self._open(file, mode_int, closefd) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:1184: in _open "Error opening {0!r}: ".format(self.name)) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ err = 72 prefix = "Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: " def _error_check(err, prefix=""): """Pretty-print a numerical error code if there is an error.""" if err != 0: err_str = _snd.sf_error_number(err) > raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace')) E RuntimeError: Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: Error in WAV file. No 'data' chunk marker. ``` I hadn't been able to reproduce this locally until I created the same test environment (I mean with `pip install .[tests]`) with Python 3.6. The same env with Python 3.8 passes the test! I didn't manage to figure out what's wrong; I also tried simply replacing the test wav file and still got the same error. Versions of `soundfile`, `librosa` and `libsndfile` are identical. Might it be something with zip compression? Sounds weird, but I don't have any other ideas... TODO: - [x] align with #4622 - [x] documentation - [x] tests for AutoFolder?
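For reference, a sketch of the proposed API under the directory layout shown in the comments above (paths are placeholders; the loader is still a draft in this PR):

```python
from datasets import load_dataset

# Default proposed here: no label column is inferred
# from the directory names.
ds = load_dataset("audiofolder", data_dir="data")

# Audio classification use case: opt back in to label inference.
ds_with_labels = load_dataset("audiofolder", data_dir="data", drop_labels=False)
```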
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4530/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4530/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4530", "html_url": "https://github.com/huggingface/datasets/pull/4530", "diff_url": "https://github.com/huggingface/datasets/pull/4530.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4530.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/4529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4529/comments
https://api.github.com/repos/huggingface/datasets/issues/4529/events
https://github.com/huggingface/datasets/issues/4529
1,276,729,303
I_kwDODunzps5MGVfX
4,529
Ecoset
{ "login": "DiGyt", "id": 34550289, "node_id": "MDQ6VXNlcjM0NTUwMjg5", "avatar_url": "https://avatars.githubusercontent.com/u/34550289?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DiGyt", "html_url": "https://github.com/DiGyt", "followers_url": "https://api.github.com/users/DiGyt/followers", "following_url": "https://api.github.com/users/DiGyt/following{/other_user}", "gists_url": "https://api.github.com/users/DiGyt/gists{/gist_id}", "starred_url": "https://api.github.com/users/DiGyt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DiGyt/subscriptions", "organizations_url": "https://api.github.com/users/DiGyt/orgs", "repos_url": "https://api.github.com/users/DiGyt/repos", "events_url": "https://api.github.com/users/DiGyt/events{/privacy}", "received_events_url": "https://api.github.com/users/DiGyt/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Hi! Very cool dataset! I answered your questions on the forum. Also, feel free to comment `#self-assign` on this issue to self-assign it." ]
1,655,721,574,000
1,655,828,236,000
null
NONE
null
## Adding a Dataset - **Name:** *Ecoset* - **Description:** *https://www.kietzmannlab.org/ecoset/* - **Paper:** *https://doi.org/10.1073/pnas.2011417118* - **Data:** *https://codeocean.com/capsule/9570390/tree/v1* - **Motivation:** **Ecoset** was created as a clean and ecologically valid alternative to **Imagenet**. It is a large image recognition dataset, similar to Imagenet in size and structure. However, the authors of ecoset claim several improvements over Imagenet, like: - more ecologically valid classes (e.g. not over-focussed on distinguishing different dog breeds) - less NSFW content - 'pre-packed image recognition models' that come with the dataset and can be used for validation of other models. I am working for one of the authors of the paper with the aim of bringing Ecoset to huggingface datasets. Therefore I can work on this issue personally, but could use some help from devs and experienced users if the dataset is of interest to them. I phrased some of my questions on [discuss.huggingface](https://discuss.huggingface.co/t/handling-large-image-datasets/19373).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4529/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4529/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4528/comments
https://api.github.com/repos/huggingface/datasets/issues/4528/events
https://github.com/huggingface/datasets/issues/4528
1,276,679,155
I_kwDODunzps5MGJPz
4,528
Memory leak when iterating a Dataset
{ "login": "NouamaneTazi", "id": 29777165, "node_id": "MDQ6VXNlcjI5Nzc3MTY1", "avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NouamaneTazi", "html_url": "https://github.com/NouamaneTazi", "followers_url": "https://api.github.com/users/NouamaneTazi/followers", "following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}", "gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}", "starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions", "organizations_url": "https://api.github.com/users/NouamaneTazi/orgs", "repos_url": "https://api.github.com/users/NouamaneTazi/repos", "events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}", "received_events_url": "https://api.github.com/users/NouamaneTazi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,655,719,394,000
1,655,719,483,000
null
MEMBER
null
## Describe the bug It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop) ## Steps to reproduce the bug ```python import gc import logging import time import pyarrow from datasets import load_dataset from tqdm import trange import os, psutil logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) process = psutil.Process(os.getpid()) print(process.memory_info().rss) # output: 633507840 bytes corpus = load_dataset("BeIR/msmarco", 'corpus', keep_in_memory=False, streaming=False)['corpus'] print(process.memory_info().rss) # output: 698601472 bytes logger.info("Applying method to all examples in all splits") for i in trange(0, len(corpus), 1000): batch = corpus[i:i+1000] data = pyarrow.total_allocated_bytes() if data > 0: logger.info(f"{i}/{len(corpus)}: {data}") print(process.memory_info().rss) # output: 3788247040 bytes del batch gc.collect() print(process.memory_info().rss) # output: 3788247040 bytes logger.info("Done...") time.sleep(100) ``` ## Expected results Limited memory usage, and memory to be freed after processing ## Actual results Memory leak ![test](https://user-images.githubusercontent.com/29777165/174578276-f2c37e6c-b5d8-4985-b4d8-8413eb2b3241.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4528/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4528/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4527/comments
https://api.github.com/repos/huggingface/datasets/issues/4527/events
https://github.com/huggingface/datasets/issues/4527
1,276,583,536
I_kwDODunzps5MFx5w
4,527
Dataset Viewer issue for vadis/sv-ident
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
closed
false
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false } ]
null
[ "Fixed, thanks!\r\n![Uploading Capture d’écran 2022-06-21 à 18.42.40.png…]()\r\n\r\n" ]
1,655,714,862,000
1,655,829,766,000
1,655,829,765,000
MEMBER
null
### Link https://huggingface.co/datasets/vadis/sv-ident ### Description The dataset preview does not work: ``` Server Error Status code: 400 Exception: Status400Error Message: The dataset does not exist. ``` However, the dataset is streamable and works locally: ```python In [1]: from datasets import load_dataset; ds = load_dataset("sv-ident.py", split="train", streaming=True); item = next(iter(ds)); item Using custom data configuration default Out[1]: {'sentence': 'Our point, however, is that so long as downward (favorable) comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.', 'is_variable': 1, 'variable': ['exploredata-ZA5400_VarV66', 'exploredata-ZA5400_VarV53'], 'research_data': ['ZA5400'], 'doc_id': '73106', 'uuid': 'b9fbb80f-3492-4b42-b9d5-0254cc33ac10', 'lang': 'en'} ``` CC: @e-tornike ### Owner No
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4527/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4527/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4526/comments
https://api.github.com/repos/huggingface/datasets/issues/4526/events
https://github.com/huggingface/datasets/issues/4526
1,276,580,185
I_kwDODunzps5MFxFZ
4,526
split cache used when processing different split
{ "login": "gpucce", "id": 32967787, "node_id": "MDQ6VXNlcjMyOTY3Nzg3", "avatar_url": "https://avatars.githubusercontent.com/u/32967787?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gpucce", "html_url": "https://github.com/gpucce", "followers_url": "https://api.github.com/users/gpucce/followers", "following_url": "https://api.github.com/users/gpucce/following{/other_user}", "gists_url": "https://api.github.com/users/gpucce/gists{/gist_id}", "starred_url": "https://api.github.com/users/gpucce/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gpucce/subscriptions", "organizations_url": "https://api.github.com/users/gpucce/orgs", "repos_url": "https://api.github.com/users/gpucce/repos", "events_url": "https://api.github.com/users/gpucce/events{/privacy}", "received_events_url": "https://api.github.com/users/gpucce/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)", "Hi, I think the issue happened because I was loading datasets under an `if` ... `else` statement and the condition would change the dataset I would need to load but instead the cached one was always returned. However, I believe that is expected behaviour, if so I'll close the issue.\r\n\r\nOtherwise I will try to provide a MWE" ]
1,655,714,698,000
1,656,425,098,000
null
NONE
null
## Describe the bug ``` ds1 = load_dataset('squad', split='validation') ds2 = load_dataset('squad', split='train') ds1 = ds1.map(some_function) ds2 = ds2.map(some_function) assert ds1 == ds2 ``` This happens when ds1 and ds2 are created in a `pytorch_lightning.DataModule` through ``` class myDataModule: def train_dataloader(self): ds = load_dataset('squad', split='train') ds = ds.map(some_function) return [ds] def val_dataloader(self): ds = load_dataset('squad', split='validation') ds = ds.map(some_function) return [ds] ``` I don't know if it depends on `pytorch_lightning` or `datasets`, but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue. If this is not enough to replicate, I will try to provide an MWE; I don't have time now, so I thought I would open the issue first!
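A minimal sketch of the workaround applied end to end, assuming `some_function` is any deterministic transform (the no-op below is just a placeholder):

```python
from datasets import load_dataset


def some_function(example):
    # Placeholder transform.
    return example


ds1 = load_dataset("squad", split="validation")
ds2 = load_dataset("squad", split="train")

# Bypass the on-disk cache so each split is actually re-processed
# instead of silently reusing a cache file from the other split.
ds1 = ds1.map(some_function, load_from_cache_file=False)
ds2 = ds2.map(some_function, load_from_cache_file=False)
```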
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4526/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4526/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4525/comments
https://api.github.com/repos/huggingface/datasets/issues/4525/events
https://github.com/huggingface/datasets/issues/4525
1,276,491,386
I_kwDODunzps5MFbZ6
4,525
Out of memory error on workers while running Beam+Dataflow
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Some naive ideas to cope with this:\r\n- enable more RAM on each worker\r\n- force the spanning of more workers\r\n- others?", "@albertvillanova We were finally able to process the full NQ dataset on our machines using 600 gb with 5 workers. Maybe these numbers will work for you as well.", "Thanks a lot for the hint, @seirasto.\r\n\r\nI have one question: what runner did you use? Direct, Apache Flink/Nemo/Samza/Spark, Google Dataflow...? Thank you.", "I asked my colleague who ran the code and he said apache beam.", "@albertvillanova Since we have already processed the NQ dataset on our machines can we upload it to datasets so the NQ PR can be merged?", "Maybe @lhoestq can give a more accurate answer as I am not sure about the authentication requirements to upload those files to our cloud bucket.\r\n\r\nAnyway I propose to continue this discussion on the dedicated PR for Natural questions dataset:\r\n- #4368", "> I asked my colleague who ran the code and he said apache beam.\r\n\r\nHe looked into it further and he just used DirectRunner. @albertvillanova ", "OK, thank you @seirasto for your hint.\r\n\r\nThat explains why you did not encounter the out of memory error: this only appears when the processing is distributed (on workers memory) and DirectRunner does not distribute the processing (all is done in a single machine). " ]
1,655,710,092,000
1,656,581,637,000
null
MEMBER
null
## Describe the bug While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files). Previously we ran the preprocessing for the "dev" config (only dev files) with success. Train data files are larger than dev ones and apparently workers run out of memory while processing them. Any help/hint is welcome! Error message: ``` Data channel closed, unable to receive additional data from SDK sdk-0-0 ``` Info from the Diagnostics tab: ``` Out of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900 The worker VM had to shut down one or more processes due to lack of memory. ``` ## Additional information ### Stack trace ``` Traceback (most recent call last): File "/home/albert_huggingface_co/natural_questions/venv/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/run_beam.py", line 127, in run builder.download_and_prepare( File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 1389, in _download_and_prepare pipeline_results.wait_until_finish() File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1667, in wait_until_finish raise DataflowRuntimeException( apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error: Data channel closed, unable to receive additional data from SDK sdk-0-0 ``` ### Logs ``` Error message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0 Workflow failed. Causes: S30:train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/Read+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/GroupByWindow+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/FlatMap(restore_timestamps)+train/ReadAllFromText/ReadAllFiles/Reshard/RemoveRandomKeys+train/ReadAllFromText/ReadAllFiles/ReadRange+train/Map(_parse_example)+train/Encode+train/Count N. Examples+train/Get values/Values+train/Save to parquet/Write/WriteImpl/WindowInto(WindowIntoFn)+train/Save to parquet/Write/WriteImpl/WriteBundles+train/Save to parquet/Write/WriteImpl/Pair+train/Save to parquet/Write/WriteImpl/GroupByKey/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was attempted on these workers: beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0, beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service. ```
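Following the hint in the comments above about giving each worker more RAM, one mitigation to try is requesting higher-memory Dataflow workers. A hedged sketch, assuming the standard `beam_runner`/`beam_options` kwargs; the project, region, and machine type are placeholders:

```python
from apache_beam.options.pipeline_options import PipelineOptions
from datasets import load_dataset

# Request higher-memory Dataflow workers; values are placeholders.
options = PipelineOptions(
    project="my-project",
    region="us-central1",
    machine_type="n1-highmem-8",
    disk_size_gb=250,
)

ds = load_dataset(
    "natural_questions",
    beam_runner="DataflowRunner",
    beam_options=options,
)
```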
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4525/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4524/comments
https://api.github.com/repos/huggingface/datasets/issues/4524/events
https://github.com/huggingface/datasets/issues/4524
1,275,909,186
I_kwDODunzps5MDNRC
4,524
Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException)
{ "login": "dan-the-meme-man", "id": 45244059, "node_id": "MDQ6VXNlcjQ1MjQ0MDU5", "avatar_url": "https://avatars.githubusercontent.com/u/45244059?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dan-the-meme-man", "html_url": "https://github.com/dan-the-meme-man", "followers_url": "https://api.github.com/users/dan-the-meme-man/followers", "following_url": "https://api.github.com/users/dan-the-meme-man/following{/other_user}", "gists_url": "https://api.github.com/users/dan-the-meme-man/gists{/gist_id}", "starred_url": "https://api.github.com/users/dan-the-meme-man/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dan-the-meme-man/subscriptions", "organizations_url": "https://api.github.com/users/dan-the-meme-man/orgs", "repos_url": "https://api.github.com/users/dan-the-meme-man/repos", "events_url": "https://api.github.com/users/dan-the-meme-man/events{/privacy}", "received_events_url": "https://api.github.com/users/dan-the-meme-man/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @dan-the-meme-man, thanks for reporting.\r\n\r\nWe are investigating a similar issue but with Beam+Dataflow (instead of Beam+Flink): \r\n- #4525\r\n\r\nIn order to go deeper into the root cause, we need as much information as possible: logs from the main process + logs from the workers are very informative.\r\n\r\nIn the case of the issue with Beam+Dataflow, the logs from the workers report an out of memory issue.", "As I continued working on this today, I came to suspect that it is in fact an out of memory issue - I have a few more notebooks that I've left running, and if they produce the same error, I will try to get the logs. In the meantime, if there's any chance that there is a repo out there with those three languages already as .arrow files, or if you know about how much memory would be needed to actually download those sets, please let me know!" ]
1,655,595,405,000
1,655,771,900,000
null
NONE
null
## Describe the bug
When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packages seem to be incompatible in terms of versions (dill and requests, for instance). It should be noted that the following code runs for several hours without issue, executing the `load_dataset()` function, before the exception occurs.

## Steps to reproduce the bug
```python
# bash commands
!pip install datasets
!pip install apache-beam[interactive]
!pip install mwparserfromhell
!pip install dill==0.3.5.1
!pip install requests==2.23.0

# imports
import os
from datasets import load_dataset
import apache_beam as beam
import mwparserfromhell
from google.colab import drive
import dill
import requests

# mount drive
drive_dir = os.path.join(os.getcwd(), 'drive')
drive.mount(drive_dir)

# confirming the versions of these two packages are the ones that are suggested by the outputs from the bash commands
print(dill.__version__)
print(requests.__version__)

lang = 'es'  # or 'ru' or 'ceb' - these are the ones causing the issue
lang_dir = os.path.join(drive_dir, 'path/to/my/folder', lang)
if not os.path.exists(lang_dir):
    x = None
    x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink',
                     split='train')
    x.save_to_disk(lang_dir)
```

## Expected results
Although some warnings are generally produced by this code (run in Colab Notebook), most languages I've tried have been successfully downloaded. It should simply go through without issue, but for these languages, I am continually encountering this error.

## Actual results
Traceback below:
```
Exception in thread run_worker_3-1:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 234, in run
    for work_request in self._control_stub.Control(get_responses()):
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__
    return self._next()
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.UNAVAILABLE
	details = "Socket closed"
	debug_error_string = "{"created":"@1655593643.871830638","description":"Error received from peer ipv4:127.0.0.1:44441","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Socket closed","grpc_status":14}"
>

Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
    self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
    lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
    head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
    self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
    continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
    raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute
    response = task()
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle
    element.data)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
    self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
    lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
    head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
    self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
    continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
    raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']

ERROR:apache_beam.runners.worker.sdk_worker:Error processing instruction 26. Original traceback is
Traceback (most recent call last):
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
    self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
    lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
    head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
    self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
    continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
    raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute
    response = task()
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda>
    lambda: self.create_worker().do_instruction(request), request)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction
    getattr(request, request_type), request.instruction_id)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle
    bundle_processor.process_bundle(instruction_id))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle
    element.data)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded
    self.output(decoded_value)
  File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output
  File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive
  File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process
  File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented
  File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process
  File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process
  File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__
    self._cache[target_window] = self._side_input_data.view_fn(raw_view)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda>
    lambda iterable: from_runtime_iterable(iterable, view_options))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable
    head = list(itertools.islice(it, 2))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator
    self._underlying.get_raw(state_key, continuation_token))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw
    continuation_token=continuation_token)))
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request
    raise RuntimeError(response.error)
RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']

ERROR:root:org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled
ERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane.
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs
    for elements in elements_iterator:
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__
    return self._next()
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.CANCELLED
	details = "Multiplexer hanging up"
	debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>
Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
  File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.7/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 651, in <lambda>
    target=lambda: self._read_inputs(elements_iterator),
  File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs
    for elements in elements_iterator:
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__
    return self._next()
  File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next
    raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
	status = StatusCode.CANCELLED
	details = "Multiplexer hanging up"
	debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}"
>
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
[/tmp/ipykernel_219/3869142325.py](https://localhost:8080/#) in <module>
     18 x = None
     19 x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink',
---> 20                  split='train')
     21 x.save_to_disk(lang_dir)

3 frames
[/usr/local/lib/python3.7/dist-packages/apache_beam/runners/portability/portable_runner.py](https://localhost:8080/#) in wait_until_finish(self, duration)
    604
    605     if self._runtime_exception:
--> 606       raise self._runtime_exception
    607
    608     return self._state

RuntimeError: Pipeline BeamApp-root-0618220708-b3b59a0e_d8efcf67-9119-4f76-b013-70de7b29b54d failed in state FAILED: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled
```

## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4524/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4523/comments
https://api.github.com/repos/huggingface/datasets/issues/4523/events
https://github.com/huggingface/datasets/pull/4523
1,275,002,639
PR_kwDODunzps452hgh
4,523
Update download url and improve card of `cats_vs_dogs` dataset
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,655,470,784,000
1,655,821,406,000
1,655,820,788,000
CONTRIBUTOR
null
Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe), and add more info to the card.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/4523/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/4523/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/4523", "html_url": "https://github.com/huggingface/datasets/pull/4523", "diff_url": "https://github.com/huggingface/datasets/pull/4523.diff", "patch_url": "https://github.com/huggingface/datasets/pull/4523.patch", "merged_at": 1655820788000 }
true