url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3037 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3037/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3037/comments | https://api.github.com/repos/huggingface/datasets/issues/3037/events | https://github.com/huggingface/datasets/pull/3037 | 1,018,091,919 | PR_kwDODunzps4syi15 | 3,037 | SberQuad | {
"login": "Alenush",
"id": 13781234,
"node_id": "MDQ6VXNlcjEzNzgxMjM0",
"avatar_url": "https://avatars.githubusercontent.com/u/13781234?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Alenush",
"html_url": "https://github.com/Alenush",
"followers_url": "https://api.github.com/users/Alenush/followers",
"following_url": "https://api.github.com/users/Alenush/following{/other_user}",
"gists_url": "https://api.github.com/users/Alenush/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Alenush/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Alenush/subscriptions",
"organizations_url": "https://api.github.com/users/Alenush/orgs",
"repos_url": "https://api.github.com/users/Alenush/repos",
"events_url": "https://api.github.com/users/Alenush/events{/privacy}",
"received_events_url": "https://api.github.com/users/Alenush/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,519,268,000 | 1,633,519,988,000 | 1,633,519,988,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3037/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3037",
"html_url": "https://github.com/huggingface/datasets/pull/3037",
"diff_url": "https://github.com/huggingface/datasets/pull/3037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3037.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3036/comments | https://api.github.com/repos/huggingface/datasets/issues/3036/events | https://github.com/huggingface/datasets/issues/3036 | 1,017,687,944 | I_kwDODunzps48qK-I | 3,036 | Protect master branch to force contributions via Pull Requests | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"It would be nice to protect the master from direct commits, but still having a way to merge our own PRs when no review is required (for example when updating a dataset_infos.json file, or minor bug fixes - things that happen quite often actually).\r\nDo you know if there's a way ?",
"you can if you're an admin of the repo",
"This is done. Now the master branch is protected:\r\n- [x] Require a pull request before merging: all commits must be made to a non-protected branch and submitted via a pull request\r\n - Required number of approvals before merging: 1 \r\n- [x] Require linear history: prevent merge commits from being pushed\r\n- [x] These requirements are not enforced for administrators\r\n- [x] Additionally, the master branch is also protected against deletion and force pushes\r\n\r\nCC: @lhoestq @julien-c @thomwolf "
] | 1,633,505,657,000 | 1,633,589,507,000 | 1,633,589,392,000 | MEMBER | null | In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed.
- The Pull Request allows contributors to give context, discuss any potential issues and improve the quality of the contribution
- The Pull Request will eventually be squashed and merged into master with a single commit that links to the Pull Request page (with all the context/discussions)
Note that we already implemented a protection in the master branch to avoid *merge* commits and ensure a linear history. This proposal goes one step further by avoiding all kinds of direct commits and forcing contributions **only** through Pull Requests.
Please note that we can temporarily deactivate this protection if we need to make a direct commit, e.g. at each new version release.
The only way GitHub allows this kind of protection is by requiring a minimal number (at least one) of approvals of the Pull Request. The inconvenience is that the PR creator cannot approve their own PR: another person must approve it before it can be merged into master. To circumvent this, we could eventually disable this protection in the master branch when an urgent commit is needed (e.g. for a hotfix) and there is no other person available at that time to approve the PR.
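For reference, a sketch of how these protection settings could be applied programmatically through the GitHub REST API (endpoint and field names are from the public v3 API; the values here simply mirror the settings discussed above, and the token is a placeholder):
```python
import requests

# Sketch: enable branch protection on master (assumes a token with admin rights).
response = requests.put(
    "https://api.github.com/repos/huggingface/datasets/branches/master/protection",
    headers={
        "Authorization": "token <GITHUB_TOKEN>",  # placeholder
        "Accept": "application/vnd.github.v3+json",
    },
    json={
        "required_status_checks": None,
        "enforce_admins": False,  # requirements not enforced for administrators
        "required_pull_request_reviews": {"required_approving_review_count": 1},
        "restrictions": None,
        "required_linear_history": True,  # prevent merge commits from being pushed
        "allow_force_pushes": False,
        "allow_deletions": False,
    },
)
response.raise_for_status()
```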
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3036/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3036/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3033/comments | https://api.github.com/repos/huggingface/datasets/issues/3033/events | https://github.com/huggingface/datasets/pull/3033 | 1,016,619,572 | PR_kwDODunzps4std7u | 3,033 | Actual "proper" install of ruamel.yaml in the windows CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,456,327,000 | 1,633,456,497,000 | 1,633,456,497,000 | MEMBER | null | It was impossible to update the package directly with `pip`. Indeed it was installed with `distutils`, which prevents `pip` and `conda` from uninstalling it.
I had to `rm` a directory from the `site-packages` python directory, and then do `pip install ruamel.yaml`
It's not exactly "proper", but I couldn't find a better solution | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3033/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3033",
"html_url": "https://github.com/huggingface/datasets/pull/3033",
"diff_url": "https://github.com/huggingface/datasets/pull/3033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3033.patch",
"merged_at": 1633456496000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3032/comments | https://api.github.com/repos/huggingface/datasets/issues/3032/events | https://github.com/huggingface/datasets/issues/3032 | 1,016,488,475 | I_kwDODunzps48lmIb | 3,032 | Error when loading private dataset with "data_files" arg | {
"login": "borisdayma",
"id": 715491,
"node_id": "MDQ6VXNlcjcxNTQ5MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/borisdayma",
"html_url": "https://github.com/borisdayma",
"followers_url": "https://api.github.com/users/borisdayma/followers",
"following_url": "https://api.github.com/users/borisdayma/following{/other_user}",
"gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions",
"organizations_url": "https://api.github.com/users/borisdayma/orgs",
"repos_url": "https://api.github.com/users/borisdayma/repos",
"events_url": "https://api.github.com/users/borisdayma/events{/privacy}",
"received_events_url": "https://api.github.com/users/borisdayma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"We'll do a release tomorrow or on wednesday to make the fix available :)\r\n\r\nThanks for reproting !"
] | 1,633,448,787,000 | 1,634,052,382,000 | 1,634,052,346,000 | CONTRIBUTOR | null | ## Describe the bug
Private datasets with no loading script can't be loaded using the `data_files` parameter.
## Steps to reproduce the bug
```python
from datasets import load_dataset
data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"}
dataset = load_dataset('dalle-mini/encoded', data_files=data_files, use_auth_token=True, streaming=True)
```
Same error happens in non-streaming mode.
## Expected results
Files should be loaded (whether in streaming or not).
## Actual results
Error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
539 try:
--> 540 local_path = cached_path(file_path, download_config=download_config)
541 except FileNotFoundError:
8 frames
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/dalle-mini/encoded/resolve/main/encoded.py
During handling of the above exception, another exception occurred:
HTTPError Traceback (most recent call last)
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/dalle-mini/encoded?full=true
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
547 except Exception:
548 raise FileNotFoundError(
--> 549 f"Couldn't find a directory or a {resource_type} named '{path}'. "
550 f"It doesn't exist locally at {expected_dir_for_combined_path_abs} or remotely on {hf_api.endpoint}/datasets"
551 )
FileNotFoundError: Couldn't find a directory or a dataset named 'dalle-mini/encoded'. It doesn't exist locally at /content/dalle-mini/encoded or remotely on https://huggingface.co/datasets
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3032/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3032/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3031 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3031/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3031/comments | https://api.github.com/repos/huggingface/datasets/issues/3031/events | https://github.com/huggingface/datasets/pull/3031 | 1,016,458,496 | PR_kwDODunzps4ss9jn | 3,031 | Align tqdm control with cache control | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Could you add this function to the documentation please ?\r\n\r\nYou can add it in `main_classes.rst`, and maybe add a `Tip` section in the `map` section in the `process.rst`"
] | 1,633,447,129,000 | 1,634,569,221,000 | 1,634,569,170,000 | CONTRIBUTOR | null | Currently, once disabled with `disable_progress_bar`, progress bars cannot be re-enabled. To overcome this limitation, this PR introduces the `set_progress_bar_enabled` function that accepts a boolean indicating whether to display progress bars. The goal is to provide a similar API to the existing cache control API. Following the Zen of Python, there should be one and preferably only one obvious way to do it, so I'm also deprecating the aforementioned `disable_progress_bar` function. The deprecation also seems justified by the fact that this function has never been in the docs.
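For illustration, a minimal sketch of the proposed toggle (assuming the functions are exposed from the top-level `datasets` namespace, like the existing cache-control function `set_caching_enabled`):
```python
import datasets

datasets.set_progress_bar_enabled(False)       # e.g. silence tqdm bars in tests
assert not datasets.is_progress_bar_enabled()

datasets.set_progress_bar_enabled(True)        # progress bars can now be turned back on
```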
Moreover, similar API changes have recently been introduced to [`tfds`](https://github.com/tensorflow/datasets/blob/a1e8b98f45b0214082b546cc967c67c43fffda55/tensorflow_datasets/core/utils/tqdm_utils.py#L98-L112).
Considering the popularity of the [comment](https://github.com/huggingface/datasets/issues/1627#issuecomment-751383559) I made a while ago, this API (`set_progress_bar_enabled` and `is_progress_bar_enabled`) should be mentioned in the docs, but I'm not sure where to put it exactly. Maybe we can replace the `logging_methods` page under `package_reference` with `utility_methods` and then introduce two subsections on that page: `Logging methods` and `tqdm control`.
Additionally, this PR:
* adds the `disable_tqdm` keyword arg of `Dataset._map_single` to the `ignore_kwargs` list to ignore it when computing the fingerprint (forgot to add it in #2696)
* deletes the unused components in `tqdm_utils.py`, which seem to be inherited from `tfds`
* disables the tqdm output in the test suite. As I see it, this output doesn't seem informative, but let me know if this is not a good idea | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3031/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3031",
"html_url": "https://github.com/huggingface/datasets/pull/3031",
"diff_url": "https://github.com/huggingface/datasets/pull/3031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3031.patch",
"merged_at": 1634569170000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3030/comments | https://api.github.com/repos/huggingface/datasets/issues/3030/events | https://github.com/huggingface/datasets/pull/3030 | 1,016,435,324 | PR_kwDODunzps4ss41W | 3,030 | Add `remove_columns` to `IterableDataset` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! That looks all good :)\r\n\r\nI don't think that batching would help. Indeed we're dealing with python iterators that yield elements one by one, so batched `map` needs to accumulate a batch, apply the function, and then yield examples from the batch.\r\n\r\nThough once we have parallel processing in `map`, we can reconsider it\r\n\r\nAlso feel free to check the CI failure - apparently the import of `Union` is missing",
"Thanks for the review and explaining that! \r\nOn top of what you said, I think `remove_columns` is very unlikely to be a bottleneck, so it doesn't matter anyways.",
"Thank you for reviewing! @mariosasko \r\n\r\nI wonder how the checking would work. Is there any checking present in `IterableDataset ` now? What if `.remove_columns()` is applied after some arbitrary `.map()`?",
"> I wonder how the checking would work. Is there any checking present in IterableDataset now? What if .remove_columns() is applied after some arbitrary .map()?\r\n\r\nThat's the challenge here indeed ^^ In this case it's not trivial to know the names of the columns. Feel free to open an issue so we can discuss this"
] | 1,633,445,913,000 | 1,633,707,195,000 | 1,633,707,113,000 | CONTRIBUTOR | null | Fixes #2944
WIP
* Not tested yet.
* We might want to allow batched remove for efficiency.
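For context, a minimal usage sketch (the dataset used here is just an example; since the dataset is streamed, the column removal is applied lazily on each yielded example):
```python
from datasets import load_dataset

ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
ds = ds.remove_columns("id")          # accepts a single name or a list of names
print(next(iter(ds)).keys())          # the "id" column is gone
```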
@lhoestq Do you think it should have `batched=` and `batch_size=`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3030/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3030",
"html_url": "https://github.com/huggingface/datasets/pull/3030",
"diff_url": "https://github.com/huggingface/datasets/pull/3030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3030.patch",
"merged_at": 1633707113000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3029/comments | https://api.github.com/repos/huggingface/datasets/issues/3029/events | https://github.com/huggingface/datasets/pull/3029 | 1,016,389,901 | PR_kwDODunzps4ssvkr | 3,029 | Use standard open-domain validation split in nq_open | {
"login": "craffel",
"id": 417568,
"node_id": "MDQ6VXNlcjQxNzU2OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/417568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/craffel",
"html_url": "https://github.com/craffel",
"followers_url": "https://api.github.com/users/craffel/followers",
"following_url": "https://api.github.com/users/craffel/following{/other_user}",
"gists_url": "https://api.github.com/users/craffel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/craffel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/craffel/subscriptions",
"organizations_url": "https://api.github.com/users/craffel/orgs",
"repos_url": "https://api.github.com/users/craffel/repos",
"events_url": "https://api.github.com/users/craffel/events{/privacy}",
"received_events_url": "https://api.github.com/users/craffel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I had to run datasets-cli with --ignore_verifications the first time since it was complaining about a missing file, but now it runs without that flag fine. I moved dummy_data.zip to the new folder, but also had to modify the filename of the test file in the zip (should I not have done that?). Finally, I added the pretty name tag.",
"Great, thanks for the help."
] | 1,633,443,567,000 | 1,633,445,806,000 | 1,633,445,805,000 | CONTRIBUTOR | null | The nq_open dataset originally drew the validation set from this file:
https://github.com/google-research-datasets/natural-questions/blob/master/nq_open/NQ-open.efficientqa.dev.1.1.sample.jsonl
However, that's the dev set used specifically and only for the efficientqa competition, and it's not the same dev set as is used in every open-domain question answering paper (including the Lee et al paper that introduced the open-domain variant of NQ, cited at the top of the dataset file). This PR changes nq_open to use the standard validation split and bumps the version to 2.0.0 since this is a breaking change. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3029/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3029",
"html_url": "https://github.com/huggingface/datasets/pull/3029",
"diff_url": "https://github.com/huggingface/datasets/pull/3029.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3029.patch",
"merged_at": 1633445805000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3028 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3028/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3028/comments | https://api.github.com/repos/huggingface/datasets/issues/3028/events | https://github.com/huggingface/datasets/pull/3028 | 1,016,230,272 | PR_kwDODunzps4ssO4s | 3,028 | Properly install ruamel-yaml for windows CI | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I would say this does not \"properly\" install `ruamel-yaml`, but the contrary, you overwrite the previous version without desinstalling it first.\r\n\r\nAccording to `pip` docs:\r\n> This can break your system if the existing package is of a different version or was installed with a different package manager!\r\n\r\nNote that our case fulfills both conditions:\r\n- the installing version (`0.17.16`) is different from the existing one (`0.15.87`)\r\n- you are installing using `pip` (`setuptools`), whereas the exisitng version was installed using `distutils`\r\n\r\nThat is why I did not fix the issue this way, made a hotfix pinning `huggingface_hub` (#3025), while looking for a permanent solution for the issue.",
"Yea I did this because we need the latest version of `huggingface_hub` for #2986 and because I didn't want to ssh to the windows worker x)\r\nMaybe it can be fixed by installing it with conda - let me try",
"Oh yea it may not work since it was first installed with distutils"
] | 1,633,434,675,000 | 1,633,442,532,000 | 1,633,434,682,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3028/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3028",
"html_url": "https://github.com/huggingface/datasets/pull/3028",
"diff_url": "https://github.com/huggingface/datasets/pull/3028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3028.patch",
"merged_at": 1633434682000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3027/comments | https://api.github.com/repos/huggingface/datasets/issues/3027/events | https://github.com/huggingface/datasets/issues/3027 | 1,016,150,117 | I_kwDODunzps48kThl | 3,027 | Resolve data_files by split name | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Awesome @lhoestq I like the proposal and it works great on my JSON community dataset. Here is the [log](https://gist.github.com/vblagoje/714babc325bcbdd5de579fd8e1648892). ",
"From my discussion with @borisdayma it would be more general the files match if their paths contains the split name - not only if the filename contains the split name. For example for a dataset like this:\r\n```\r\ntrain/\r\nβββ data.csv\r\ntest/\r\nβββ data.csv\r\n```\r\n\r\nBut IMO the default should be \r\n```\r\ndata/\r\nβββ train.csv\r\nβββ test.csv\r\n```\r\nbecause it allows people to have other directories if they have different subsets of their data (different configurations, not splits)",
"I just created a PR for this at https://github.com/huggingface/datasets/pull/3221, let me know what you think :)"
] | 1,633,429,476,000 | 1,636,134,598,000 | 1,636,134,597,000 | MEMBER | null | This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example:
```python
load_dataset("lhoestq/demo1")
```
should return two splits "train" and "test" since the dataset repository is like
```
data/
├── train.csv
└── test.csv
```
Currently it returns only one split "train" which contains the data of both files
I started playing with this idea on this branch btw: `resolve-data_files-by-split-name`
Basically the idea is that if you name your data files after split names then the default pattern is
```python
{
"train": ["*train*"],
"test": ["*test*"],
"validation": ["*dev*", "valid"],
}
```
otherwise it's
```python
{
"train": ["*"]
}
```
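To make this concrete, here is a hypothetical sketch of the resolution logic (the helper name and structure are illustrative, not the actual implementation on the branch):
```python
import fnmatch

SPLIT_PATTERNS = {"train": ["*train*"], "test": ["*test*"], "validation": ["*dev*", "valid"]}

def infer_splits(filenames):
    # Assign each data file to every split whose pattern it matches
    resolved = {
        split: [f for f in filenames if any(fnmatch.fnmatch(f, p) for p in patterns)]
        for split, patterns in SPLIT_PATTERNS.items()
    }
    resolved = {split: files for split, files in resolved.items() if files}
    # Fall back to a single "train" split when no filename matches a split name
    return resolved or {"train": list(filenames)}

print(infer_splits(["data/train.csv", "data/test.csv"]))
# {'train': ['data/train.csv'], 'test': ['data/test.csv']}
```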
Let me know what you think !
cc @albertvillanova @LysandreJik @vblagoje | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3027/reactions",
"total_count": 3,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3027/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3026/comments | https://api.github.com/repos/huggingface/datasets/issues/3026/events | https://github.com/huggingface/datasets/pull/3026 | 1,016,067,794 | PR_kwDODunzps4srtyc | 3,026 | added arxiv paper in swiss_judgment_prediction dataset card | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,424,521,000 | 1,633,708,904,000 | 1,633,708,884,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3026/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3026",
"html_url": "https://github.com/huggingface/datasets/pull/3026",
"diff_url": "https://github.com/huggingface/datasets/pull/3026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3026.patch",
"merged_at": 1633708884000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3025/comments | https://api.github.com/repos/huggingface/datasets/issues/3025/events | https://github.com/huggingface/datasets/pull/3025 | 1,016,061,222 | PR_kwDODunzps4srsgG | 3,025 | Fix Windows test suite | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,424,122,000 | 1,633,427,908,000 | 1,633,427,907,000 | MEMBER | null | Try a hotfix to restore the Windows test suite.
Fix #3024. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3025/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3025/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3025",
"html_url": "https://github.com/huggingface/datasets/pull/3025",
"diff_url": "https://github.com/huggingface/datasets/pull/3025.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3025.patch",
"merged_at": 1633427907000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3024/comments | https://api.github.com/repos/huggingface/datasets/issues/3024/events | https://github.com/huggingface/datasets/issues/3024 | 1,016,052,911 | I_kwDODunzps48j7yv | 3,024 | Windows test suite fails | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,633,423,606,000 | 1,633,427,907,000 | 1,633,427,907,000 | MEMBER | null | ## Describe the bug
There is an error during installation of tests dependencies for Windows: https://app.circleci.com/pipelines/github/huggingface/datasets/7981/workflows/9b6a0114-2b8e-4069-94e5-e844dbbdba4e/jobs/49206
```
ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3024/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3023/comments | https://api.github.com/repos/huggingface/datasets/issues/3023/events | https://github.com/huggingface/datasets/pull/3023 | 1,015,923,031 | PR_kwDODunzps4srQ4i | 3,023 | Fix typo | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,413,971,000 | 1,633,435,015,000 | 1,633,435,015,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3023/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3023",
"html_url": "https://github.com/huggingface/datasets/pull/3023",
"diff_url": "https://github.com/huggingface/datasets/pull/3023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3023.patch",
"merged_at": 1633435015000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3022/comments | https://api.github.com/repos/huggingface/datasets/issues/3022/events | https://github.com/huggingface/datasets/pull/3022 | 1,015,750,221 | PR_kwDODunzps4sqve6 | 3,022 | MeDAL dataset: Add further description and update download URL | {
"login": "xhlulu",
"id": 21180505,
"node_id": "MDQ6VXNlcjIxMTgwNTA1",
"avatar_url": "https://avatars.githubusercontent.com/u/21180505?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xhlulu",
"html_url": "https://github.com/xhlulu",
"followers_url": "https://api.github.com/users/xhlulu/followers",
"following_url": "https://api.github.com/users/xhlulu/following{/other_user}",
"gists_url": "https://api.github.com/users/xhlulu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/xhlulu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xhlulu/subscriptions",
"organizations_url": "https://api.github.com/users/xhlulu/orgs",
"repos_url": "https://api.github.com/users/xhlulu/repos",
"events_url": "https://api.github.com/users/xhlulu/events{/privacy}",
"received_events_url": "https://api.github.com/users/xhlulu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq I'm a bit confused by the error message. I haven't touched the YAML code at all - do you have any insight on that?",
"I just added the missing `pretty_name` tag in the YAML - sorry about that ;)",
"Thanks! Seems like it did the trick since the tests are passing. Let me know if there's anything else I can do in this PR!",
"It's all good thank you :)\r\n\r\nmerging !"
] | 1,633,392,808,000 | 1,634,115,789,000 | 1,634,115,789,000 | CONTRIBUTOR | null | Added more details in the following sections:
* Dataset Structure
* Data Instances
* Data Splits
* Source Data
* Annotations
* Discussions of Biases
* Licensing Information | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3022/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3022",
"html_url": "https://github.com/huggingface/datasets/pull/3022",
"diff_url": "https://github.com/huggingface/datasets/pull/3022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3022.patch",
"merged_at": 1634115789000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3021/comments | https://api.github.com/repos/huggingface/datasets/issues/3021/events | https://github.com/huggingface/datasets/pull/3021 | 1,015,444,094 | PR_kwDODunzps4spzJU | 3,021 | Support loading dataset from multiple zipped CSV data files | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,368,837,000 | 1,633,509,406,000 | 1,633,509,405,000 | MEMBER | null | Fix partially #3018.
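As a usage illustration, a sketch of what this change enables (the file paths are placeholders):
```python
from datasets import load_dataset

# Several zipped CSV shards can now back a single split
dataset = load_dataset(
    "csv",
    data_files={"train": ["data/part-0.zip", "data/part-1.zip"]},
)
```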
CC: @lewtun | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3021/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3021/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3021",
"html_url": "https://github.com/huggingface/datasets/pull/3021",
"diff_url": "https://github.com/huggingface/datasets/pull/3021.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3021.patch",
"merged_at": 1633509405000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3020/comments | https://api.github.com/repos/huggingface/datasets/issues/3020/events | https://github.com/huggingface/datasets/pull/3020 | 1,015,406,105 | PR_kwDODunzps4sprfa | 3,020 | Add a metric for the MATH dataset (competition_math). | {
"login": "hacobe",
"id": 91226467,
"node_id": "MDQ6VXNlcjkxMjI2NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hacobe",
"html_url": "https://github.com/hacobe",
"followers_url": "https://api.github.com/users/hacobe/followers",
"following_url": "https://api.github.com/users/hacobe/following{/other_user}",
"gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hacobe/subscriptions",
"organizations_url": "https://api.github.com/users/hacobe/orgs",
"repos_url": "https://api.github.com/users/hacobe/repos",
"events_url": "https://api.github.com/users/hacobe/events{/privacy}",
"received_events_url": "https://api.github.com/users/hacobe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I believe the only failed test related to this PR is tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math. It gives the following error:\r\n\r\nImportError: To be able to use this dataset, you need to install the following dependencies['math_equivalence'] using 'pip install git+https://github.com/hendrycks/math.git' for instance'\r\n\r\nIt fails along with (these fail with ImportError as well):\r\ntest_load_metric_bertscore\r\ntest_load_metric_bleurt\r\ntest_load_metric_comet\r\ntest_load_metric_coval\r\n\r\nLet me know if there is anything I need to change.",
"Hi ! The script looks all good thanks :)\r\n\r\nTo fix the CI you just need to merge `master` into your branch\r\n```\r\ngit fetch upstream/master\r\ngit merge upstream/master\r\n```\r\n\r\nThen you also need to add `math_equivalence` to the list of git packages installed for the tests in `additional-tests-requirements.txt`\r\nhttps://github.com/huggingface/datasets/blob/ba831e4bcd175ae3d52afbf7d12c4f625bf541b0/additional-tests-requirements.txt#L1-L3",
"I ran:\r\n\r\ngit fetch upstream\r\ngit merge upstream/master\r\n\r\nAnd I also added math_equivalence to the list of git packages installed for the tests in additional-tests-requirements.txt\r\n\r\ntests/test_metric_common.py fails with the same errors as before. tests/test_dataset_cards.py also fails, but it doesn't look related to this PR (it's an issue datasets/ami/README.md).",
"@lhoestq Anything else I can do? I re-merged again and am getting the same test failures as described in the previous comment."
] | 1,633,366,336,000 | 1,634,898,571,000 | 1,634,898,571,000 | CONTRIBUTOR | null | This metric computes accuracy for the MATH dataset (https://arxiv.org/abs/2103.03874) after canonicalizing the prediction and the reference (e.g., converting "1/2" to "\\\\frac{1}{2}"). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3020/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3020",
"html_url": "https://github.com/huggingface/datasets/pull/3020",
"diff_url": "https://github.com/huggingface/datasets/pull/3020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3020.patch",
"merged_at": 1634898571000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3019 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3019/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3019/comments | https://api.github.com/repos/huggingface/datasets/issues/3019/events | https://github.com/huggingface/datasets/pull/3019 | 1,015,339,983 | PR_kwDODunzps4speOB | 3,019 | Fix filter leaking | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,362,178,000 | 1,635,847,098,000 | 1,633,422,787,000 | MEMBER | null | If `filter` is called after a first transform (`shuffle`, `select`, `shard`, `train_test_split`, or `filter`), it could behave unexpectedly and return examples from before the first transform. This is because the indices mapping was not taken into account when saving the indices to keep during filtering.
Affected versions: 1.12.0 and 1.12.1
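For illustration, a minimal sketch of a call sequence that could trigger the leak on an affected version (values are illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"idx": list(range(10))})
subset = ds.select(range(5))                 # first transform: creates an indices mapping
kept = subset.filter(lambda example: True)
# On 1.12.0/1.12.1, `kept` could include rows outside the selected subset,
# because `filter` saved the indices to keep without going through the mapping from `select`.
```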
This should fix issue https://github.com/huggingface/datasets/issues/3010 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3019/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3019",
"html_url": "https://github.com/huggingface/datasets/pull/3019",
"diff_url": "https://github.com/huggingface/datasets/pull/3019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3019.patch",
"merged_at": 1633422787000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3017/comments | https://api.github.com/repos/huggingface/datasets/issues/3017/events | https://github.com/huggingface/datasets/pull/3017 | 1,015,215,528 | PR_kwDODunzps4spE9m | 3,017 | Remove unused parameter in xdirname | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,355,753,000 | 1,633,433,821,000 | 1,633,433,820,000 | MEMBER | null | Minor fix to remove the unused arguments `*p` in `xdirname`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3017/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3017/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3017",
"html_url": "https://github.com/huggingface/datasets/pull/3017",
"diff_url": "https://github.com/huggingface/datasets/pull/3017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3017.patch",
"merged_at": 1633433820000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3016/comments | https://api.github.com/repos/huggingface/datasets/issues/3016/events | https://github.com/huggingface/datasets/pull/3016 | 1,015,208,654 | PR_kwDODunzps4spDlX | 3,016 | Fix Windows paths in LJ Speech dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,355,377,000 | 1,633,360,985,000 | 1,633,360,984,000 | MEMBER | null | Minor fix in LJ Speech dataset for Windows pathname component separator.
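The pattern behind this family of fixes, as a sketch (the member name below is hypothetical): paths inside downloaded archives always use `/`, while `os.path.join` produces `\` on Windows, so joins that target archive members must use `/` explicitly.
```python
import os

archive_member = "LJSpeech-1.1/metadata.csv"  # hypothetical archive member name

# os.path.join yields "LJSpeech-1.1\\metadata.csv" on Windows, which never matches
windows_style = os.path.join("LJSpeech-1.1", "metadata.csv")

# joining with "/" matches the archive member on every OS
portable = "/".join(["LJSpeech-1.1", "metadata.csv"])

assert portable == archive_member
```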
Related to #1878. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3016/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3016",
"html_url": "https://github.com/huggingface/datasets/pull/3016",
"diff_url": "https://github.com/huggingface/datasets/pull/3016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3016.patch",
"merged_at": 1633360984000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3015/comments | https://api.github.com/repos/huggingface/datasets/issues/3015/events | https://github.com/huggingface/datasets/pull/3015 | 1,015,130,845 | PR_kwDODunzps4so0GX | 3,015 | Extend support for streaming datasets that use glob.glob | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,351,357,000 | 1,633,441,599,000 | 1,633,441,598,000 | MEMBER | null | This PR extends the support in streaming mode for datasets that use `glob`, by patching the function `glob.glob`.
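A rough sketch of the idea (an assumed shape, not the exact patch): route remote URLs through `fsspec` and fall back to the standard library for local paths; the streaming manager then points the dataset script's `glob.glob` at such a function.
```python
import glob

import fsspec  # assumption: fsspec handles the remote globbing, as elsewhere in streaming


def xglob(pattern, *, recursive=False):
    if "://" in pattern:  # remote URL: expand the pattern with fsspec
        fs, _, paths = fsspec.get_fs_token_paths(pattern)
        return [fs.unstrip_protocol(path) for path in paths]
    return glob.glob(pattern, recursive=recursive)  # local path: plain glob
```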
Related to #2880, #2876, #2874 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3015/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3015",
"html_url": "https://github.com/huggingface/datasets/pull/3015",
"diff_url": "https://github.com/huggingface/datasets/pull/3015.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3015.patch",
"merged_at": 1633441598000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3014/comments | https://api.github.com/repos/huggingface/datasets/issues/3014/events | https://github.com/huggingface/datasets/pull/3014 | 1,015,070,751 | PR_kwDODunzps4son8A | 3,014 | Fix Windows path in MATH dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,347,667,000 | 1,633,351,604,000 | 1,633,351,604,000 | MEMBER | null | Minor fix in MATH dataset for Windows pathname component separator.
Related to #2982. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3014/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3014",
"html_url": "https://github.com/huggingface/datasets/pull/3014",
"diff_url": "https://github.com/huggingface/datasets/pull/3014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3014.patch",
"merged_at": 1633351604000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3012/comments | https://api.github.com/repos/huggingface/datasets/issues/3012/events | https://github.com/huggingface/datasets/pull/3012 | 1,014,958,931 | PR_kwDODunzps4soRTu | 3,012 | Replace item with float in metrics | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,340,728,000 | 1,633,347,034,000 | 1,633,347,033,000 | MEMBER | null | As pointed out by @mariosasko in #3001, calling `float()` instead of `.item()` is faster.
Moreover, it might avoid potential issues if any of the third-party functions eventually returns a `float` instead of an `np.float64`.
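A quick standalone illustration (not taken from the PR) of why `float()` covers both cases:
```python
import numpy as np

for value in (0.5, np.float64(0.5)):
    print(float(value))  # works for a plain Python float and for a NumPy scalar

# By contrast, `.item()` only exists on NumPy scalars:
# (0.5).item() raises AttributeError: 'float' object has no attribute 'item'
```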
Related to #3001. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3012/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3012",
"html_url": "https://github.com/huggingface/datasets/pull/3012",
"diff_url": "https://github.com/huggingface/datasets/pull/3012.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3012.patch",
"merged_at": 1633347033000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3009 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3009/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3009/comments | https://api.github.com/repos/huggingface/datasets/issues/3009/events | https://github.com/huggingface/datasets/pull/3009 | 1,014,868,235 | PR_kwDODunzps4sn_YG | 3,009 | Fix Windows paths in SUPERB benchmark datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,335,229,000 | 1,633,355,005,000 | 1,633,355,005,000 | MEMBER | null | Minor fix in SUPERB benchmark datasets for Windows pathname component separator.
Related to #2884, #2783 and #2619. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3009/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3009",
"html_url": "https://github.com/huggingface/datasets/pull/3009",
"diff_url": "https://github.com/huggingface/datasets/pull/3009.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3009.patch",
"merged_at": 1633355004000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3008 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3008/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3008/comments | https://api.github.com/repos/huggingface/datasets/issues/3008/events | https://github.com/huggingface/datasets/pull/3008 | 1,014,849,163 | PR_kwDODunzps4sn7iU | 3,008 | Fix precision/recall metrics with None average | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,334,055,000 | 1,633,339,777,000 | 1,633,339,776,000 | MEMBER | null | Related to issue #2979 and PR #2992. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3008/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3008",
"html_url": "https://github.com/huggingface/datasets/pull/3008",
"diff_url": "https://github.com/huggingface/datasets/pull/3008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3008.patch",
"merged_at": 1633339776000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3007 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3007/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3007/comments | https://api.github.com/repos/huggingface/datasets/issues/3007/events | https://github.com/huggingface/datasets/pull/3007 | 1,014,775,450 | PR_kwDODunzps4sns-n | 3,007 | Correct a typo | {
"login": "Yann21",
"id": 35955430,
"node_id": "MDQ6VXNlcjM1OTU1NDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/35955430?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yann21",
"html_url": "https://github.com/Yann21",
"followers_url": "https://api.github.com/users/Yann21/followers",
"following_url": "https://api.github.com/users/Yann21/following{/other_user}",
"gists_url": "https://api.github.com/users/Yann21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yann21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yann21/subscriptions",
"organizations_url": "https://api.github.com/users/Yann21/orgs",
"repos_url": "https://api.github.com/users/Yann21/repos",
"events_url": "https://api.github.com/users/Yann21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yann21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,328,147,000 | 1,633,339,677,000 | 1,633,339,677,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3007/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3007",
"html_url": "https://github.com/huggingface/datasets/pull/3007",
"diff_url": "https://github.com/huggingface/datasets/pull/3007.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3007.patch",
"merged_at": 1633339677000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3006/comments | https://api.github.com/repos/huggingface/datasets/issues/3006/events | https://github.com/huggingface/datasets/pull/3006 | 1,014,770,821 | PR_kwDODunzps4snsBm | 3,006 | Fix Windows paths in CommonLanguage dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,327,738,000 | 1,633,338,478,000 | 1,633,338,478,000 | MEMBER | null | Minor fix in CommonLanguage dataset for Windows pathname component separator.
Related to #2989. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3006/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3006",
"html_url": "https://github.com/huggingface/datasets/pull/3006",
"diff_url": "https://github.com/huggingface/datasets/pull/3006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3006.patch",
"merged_at": 1633338478000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3005 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3005/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3005/comments | https://api.github.com/repos/huggingface/datasets/issues/3005/events | https://github.com/huggingface/datasets/issues/3005 | 1,014,615,420 | I_kwDODunzps48ec18 | 3,005 | DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @DrMatters, thanks for reporting.\r\n\r\nThis issue was fixed 14 days ago: #2950.\r\n\r\nCurrently, the fix is only in the master branch and will be made available in our next library release.\r\n\r\nIn the meantime, you can incorporate the fix by installing datasets from the master branch:\r\n```shell\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```shell\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```",
"Thanks, sorry for bothering"
] | 1,633,308,569,000 | 1,633,947,481,000 | 1,633,337,173,000 | NONE | null | ## Describe the bug
The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument
## Steps to reproduce the bug
```python
import datasets
example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]})  # column values must be a list, not a set

def filter_value(example, value):
    return example['a'] == value
filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
```
## Expected results
`filtered` is a dataset containing `{"a": [3]}`
## Actual results
> Traceback (most recent call last):
> File "C:\Users\qsemi\Documents\git\nlp_experiments\gpt_celebrity\src\test_faulty_filter.py", line 8, in <module>
> filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3})
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2169, in filter
> indices = self.map(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1686, in map
> return self._map_single(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper
> out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper
> out = func(self, *args, **kwargs)
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2048, in _map_single
> batch = apply_function_on_filtered_inputs(
> File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
> TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'value'
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19042-SP0
- Python version: 3.9.7
- PyArrow version: 5.0.0
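A possible interim workaround until the fix is released (a sketch added for illustration, not part of the original report): bind the extra argument yourself instead of relying on `fn_kwargs`.
```python
import functools

import datasets

example_dataset = datasets.Dataset.from_dict({"a": [1, 2, 3, 4]})

def filter_value(example, value):
    return example["a"] == value

# functools.partial fixes `value` up front, so `filter` only ever sees a
# one-argument function and fn_kwargs is never needed.
filtered = example_dataset.filter(functools.partial(filter_value, value=3))
print(filtered["a"])  # [3]
```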
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3005/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3004 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3004/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3004/comments | https://api.github.com/repos/huggingface/datasets/issues/3004/events | https://github.com/huggingface/datasets/pull/3004 | 1,014,336,617 | PR_kwDODunzps4smfPF | 3,004 | LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. | {
"login": "iliaschalkidis",
"id": 1626984,
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iliaschalkidis",
"html_url": "https://github.com/iliaschalkidis",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Please wait until Tuesday. Arxiv pre-print is pending. π€ ",
"Hi @lhoestq, I updated the README with the Arxiv publication info and now the tests are not passing.\r\n\r\nIt seems that the error is completely irrelevant to my code:\r\n\r\n```\r\n Attempting uninstall: ruamel.yaml\r\n Found existing installation: ruamel-yaml 0.15.87\r\nERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall.\r\n```",
"Hi ! Thanks for adding this one. Amazing work :o \r\n\r\nYea you can ignore the `ruamel-yaml` error, it's unrelated to your PR and fixed on `master`",
"Hi @lhoestq, \r\n\r\n- I fetched and merged the upstream master to get the `ruamel-yaml` fix.\r\n- I updated the README.md file including more information for the inputs and labels, while I also turned all tables in HTML format hoping that they will render nicely in the dataset card in the HF website.\r\n- I also simplified the CaseHOLD input, excl. the unused `question` field and the `context` replicas, as suggested.\r\n"
] | 1,633,255,405,000 | 1,634,132,222,000 | 1,634,132,221,000 | CONTRIBUTOR | null | Inspired by the recent widespread use of the GLUE multi-task benchmark NLP dataset (Wang et al., 2018), the subsequent more difficult SuperGLUE (Wang et al., 2019), other previous multi-task NLP benchmarks (Conneau and Kiela, 2018; McCann et al., 2018), and similar initiatives in other domains (Peng et al., 2019), we introduce the Legal General Language Understanding Evaluation (LexGLUE) benchmark, a benchmark dataset to evaluate the performance of NLP methods in legal tasks. LexGLUE is based on seven existing legal NLP datasets, selected using criteria largely from SuperGLUE.
As in GLUE and SuperGLUE (Wang et al., 2019b,a), one of our goals is to push towards generic (or "foundation") models that can cope with multiple NLP tasks, in our case legal NLP tasks possibly with limited task-specific fine-tuning. Another goal is to provide a convenient and informative entry point for NLP researchers and practitioners wishing to explore or develop methods for legal NLP. Having these goals in mind, the datasets we include in LexGLUE and the tasks they address have been simplified in several ways to make it easier for newcomers and generic models to address all tasks.
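Once merged, loading one of the LexGLUE tasks should look roughly like this (a sketch; the dataset id and configuration names are assumed to follow the paper, e.g. `scotus`):
```python
from datasets import load_dataset

dataset = load_dataset("lex_glue", "scotus")  # assumed config name
print(dataset["train"][0])
```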
The LexGLUE benchmark is accompanied by an experimental infrastructure that relies on the Hugging Face Transformers library and resides at: https://github.com/coastalcph/lex-glue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3004/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3004/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3004",
"html_url": "https://github.com/huggingface/datasets/pull/3004",
"diff_url": "https://github.com/huggingface/datasets/pull/3004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3004.patch",
"merged_at": 1634132221000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3003 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3003/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3003/comments | https://api.github.com/repos/huggingface/datasets/issues/3003/events | https://github.com/huggingface/datasets/pull/3003 | 1,014,137,933 | PR_kwDODunzps4smExP | 3,003 | common_language: Fix license in README.md | {
"login": "jimregan",
"id": 227350,
"node_id": "MDQ6VXNlcjIyNzM1MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/227350?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jimregan",
"html_url": "https://github.com/jimregan",
"followers_url": "https://api.github.com/users/jimregan/followers",
"following_url": "https://api.github.com/users/jimregan/following{/other_user}",
"gists_url": "https://api.github.com/users/jimregan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jimregan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jimregan/subscriptions",
"organizations_url": "https://api.github.com/users/jimregan/orgs",
"repos_url": "https://api.github.com/users/jimregan/repos",
"events_url": "https://api.github.com/users/jimregan/events{/privacy}",
"received_events_url": "https://api.github.com/users/jimregan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,200,457,000 | 1,633,339,621,000 | 1,633,339,621,000 | CONTRIBUTOR | null | ...it's correct elsewhere | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3003/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3003/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3003",
"html_url": "https://github.com/huggingface/datasets/pull/3003",
"diff_url": "https://github.com/huggingface/datasets/pull/3003.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3003.patch",
"merged_at": 1633339621000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3002/comments | https://api.github.com/repos/huggingface/datasets/issues/3002/events | https://github.com/huggingface/datasets/pull/3002 | 1,014,120,524 | PR_kwDODunzps4smCNO | 3,002 | Remove a reference to the open Arrow file when deleting a TF dataset created with to_tf_dataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq The test passes even without the try/except block!",
"Hey, I'm a little late because I was caught up in the course work, but I double-checked this and it looks great. Thanks for fixing!"
] | 1,633,196,649,000 | 1,634,125,680,000 | 1,634,115,803,000 | CONTRIBUTOR | null | This [comment](https://github.com/huggingface/datasets/issues/2934#issuecomment-922970919) explains the issue. This PR fixes that with a `weakref` callback, and additionally:
* renames `TensorflowDatasetMixIn` to `TensorflowDatasetMixin` for consistency
* correctly indents `TensorflowDatasetMixin`'s docstring
* replaces `tf.data.AUTOTUNE` with `tf.data.experimental.AUTOTUNE` (we support TF>=2.2 according to the [setup.py](https://github.com/huggingface/datasets/blob/fc46bba66ba4f432cc10501c16a677112e13984c/setup.py#L188) and `AUTOTUNE` has been moved to the experimental part of `tf.data` in 1.X if I'm not mistaken)
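The `weakref` callback mentioned above, as a minimal self-contained sketch of the pattern (the class names are stand-ins, not the actual code from this PR):
```python
import weakref

class ArrowFileHandle:
    """Stand-in for the reference to the open Arrow file."""
    def close(self):
        print("Arrow file reference released")

class TFDatasetStandIn:
    """Stand-in for the tf.data.Dataset returned by to_tf_dataset."""

handle = ArrowFileHandle()
tf_dataset = TFDatasetStandIn()

# The callback fires when `tf_dataset` is garbage-collected, so the Arrow file
# reference is released instead of being kept alive by the TF dataset.
_ref = weakref.ref(tf_dataset, lambda ref: handle.close())

del tf_dataset  # in CPython this triggers the callback immediately
```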
Fixes #2934 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3002/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3002",
"html_url": "https://github.com/huggingface/datasets/pull/3002",
"diff_url": "https://github.com/huggingface/datasets/pull/3002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3002.patch",
"merged_at": 1634115803000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3001/comments | https://api.github.com/repos/huggingface/datasets/issues/3001/events | https://github.com/huggingface/datasets/pull/3001 | 1,014,024,982 | PR_kwDODunzps4sl0BY | 3,001 | Fix cast to Python scalar in Matthews Correlation metric | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,175,099,000 | 1,633,341,244,000 | 1,633,339,572,000 | CONTRIBUTOR | null | This PR is motivated by issue #2964.
The Matthews Correlation metric relies on sklearn's `matthews_corrcoef` function to compute the result. This function returns either `float` or `np.float64` (see the [source](https://github.com/scikit-learn/scikit-learn/blob/844b4be24d20fc42cc13b957374c718956a0db39/sklearn/metrics/_classification.py#L906-L909)). Obviously, calling `.item()` on the float value will fail, so I'm fixing this with the built-in `float()` function, which covers both cases. Surprisingly, on my machine, casting `np.float64` to a Python scalar with `float()` is even faster than with the `.item()` method. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3001/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3001",
"html_url": "https://github.com/huggingface/datasets/pull/3001",
"diff_url": "https://github.com/huggingface/datasets/pull/3001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3001.patch",
"merged_at": 1633339572000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/3000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3000/comments | https://api.github.com/repos/huggingface/datasets/issues/3000/events | https://github.com/huggingface/datasets/pull/3000 | 1,013,613,219 | PR_kwDODunzps4skusL | 3,000 | Fix json loader when conversion not implemented | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"And we're already at PR number 3,000 ! ^^",
"Thank you so much for fixing this @lhoestq π ! I just tested the branch out and it works like a charm!"
] | 1,633,110,442,000 | 1,633,111,500,000 | 1,633,110,863,000 | MEMBER | null | Sometimes the arrow json parser fails if the `block_size` is too small and returns an `ArrowNotImplementedError: JSON conversion to struct...` error.
Increasing the block size makes it work again.
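A sketch of that retry idea (illustrative only; the function name and size limits are made up):
```python
import pyarrow as pa
import pyarrow.json as paj

def read_json_with_growing_block_size(path, block_size=10 << 20, max_block_size=100 << 20):
    """Retry pyarrow's JSON reader with a doubled block_size until the data fits."""
    while True:
        try:
            return paj.read_json(path, read_options=paj.ReadOptions(block_size=block_size))
        except pa.ArrowNotImplementedError:
            if block_size >= max_block_size:
                raise  # give up instead of growing the buffer forever
            block_size *= 2  # a struct was split across blocks; use a bigger block
```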
Hopefully this helps with https://github.com/huggingface/datasets/issues/2799
I tried with the file mentioned in the issue and it worked for me
cc @lewtun can you try again from this branch? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/3000/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/3000/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3000",
"html_url": "https://github.com/huggingface/datasets/pull/3000",
"diff_url": "https://github.com/huggingface/datasets/pull/3000.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/3000.patch",
"merged_at": 1633110863000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2999/comments | https://api.github.com/repos/huggingface/datasets/issues/2999/events | https://github.com/huggingface/datasets/pull/2999 | 1,013,536,933 | PR_kwDODunzps4skgCm | 2,999 | Set trivia_qa writer batch size | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,105,406,000 | 1,633,106,095,000 | 1,633,106,095,000 | MEMBER | null | Save some RAM when generating trivia_qa | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2999/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2999",
"html_url": "https://github.com/huggingface/datasets/pull/2999",
"diff_url": "https://github.com/huggingface/datasets/pull/2999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2999.patch",
"merged_at": 1633106095000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2997 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2997/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2997/comments | https://api.github.com/repos/huggingface/datasets/issues/2997/events | https://github.com/huggingface/datasets/issues/2997 | 1,013,270,069 | I_kwDODunzps48ZUY1 | 2,997 | Dataset has incorrect labels | {
"login": "marshmellow77",
"id": 63367770,
"node_id": "MDQ6VXNlcjYzMzY3Nzcw",
"avatar_url": "https://avatars.githubusercontent.com/u/63367770?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marshmellow77",
"html_url": "https://github.com/marshmellow77",
"followers_url": "https://api.github.com/users/marshmellow77/followers",
"following_url": "https://api.github.com/users/marshmellow77/following{/other_user}",
"gists_url": "https://api.github.com/users/marshmellow77/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marshmellow77/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marshmellow77/subscriptions",
"organizations_url": "https://api.github.com/users/marshmellow77/orgs",
"repos_url": "https://api.github.com/users/marshmellow77/repos",
"events_url": "https://api.github.com/users/marshmellow77/events{/privacy}",
"received_events_url": "https://api.github.com/users/marshmellow77/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @marshmellow77, thanks for reporting.\r\n\r\nThat issue is fixed since `datasets` version 1.9.0 (see 16bc665f2753677c765011ef79c84e55486d4347).\r\n\r\nPlease, update `datasets` with: `pip install -U datasets`",
"Thanks. Please note that the dataset explorer (https://huggingface.co/datasets/viewer/?dataset=turkish_product_reviews) still shows the incorrect state. The sentiment for the first few customer reviews is actually negative and should be labelled with \"0\", see screenshot:\r\n\r\n![Capture1](https://user-images.githubusercontent.com/63367770/135637150-93d9b09b-f1dd-4701-97a5-5cb2672ec0c7.PNG)\r\n\r\n\r\n",
"Thanks @marshmellow77, good catch! I'm transferring this issue to https://github.com/huggingface/datasets-viewer. "
] | 1,633,090,146,000 | 1,633,102,320,000 | 1,633,096,474,000 | NONE | null | The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached:
![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3257b4.PNG)
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2997/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2996/comments | https://api.github.com/repos/huggingface/datasets/issues/2996/events | https://github.com/huggingface/datasets/pull/2996 | 1,013,266,373 | PR_kwDODunzps4sjrP6 | 2,996 | Remove all query parameters when extracting protocol | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Beware of cases like: `http://ufal.ms.mff.cuni.cz/umc/005-en-ur/download.php?f=umc005-corpus.zip` or `gzip://bg-cs.xml::https://opus.nlpl.eu/download.php?f=Europarl/v8/xml/bg-cs.xml.gz`. I see these URLs in the errors (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading?collection=@huggingface/datasets), but not in the `Extraction protocol for file at xxx is not implemented yet` error, so I'm not sure if they would break now or not.\r\n\r\nMaybe: first try to find an extension, and if none, try to remove the `?...` part and retry to find the extension.\r\n\r\nBy the way, here is the list of URLs for errors of this type, with a '?' in the URL:\r\n\r\n```\r\nhttps://dl.orangedox.com/WyaCpL?dl=1\r\nhttps://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download\r\nhttps://drive.google.com/u/0/uc?id=1-CaP3xHgZxOGjQ3pXC5tr9YnIajmel-t&export=download\r\nhttps://drive.google.com/u/0/uc?id=11EBGHMAswT5JDO60xh7gnZfYjpMQs7h7&export=download\r\nhttps://drive.google.com/u/0/uc?id=13JCCr-IjZK7uhbLXeufptr_AxvsKinVl&export=download\r\nhttps://drive.google.com/u/0/uc?id=13ZyFc2qepAYSg9WIFaeJ9y402gblsl2e&export=download\r\nhttps://drive.google.com/u/0/uc?id=15auwrFAlq52JJ61u7eSfnhT9rZtI5sjk&export=download\r\nhttps://drive.google.com/u/0/uc?id=16OgJ_OrfzUF_i3ftLjFn9kpcyoi7UJeO&export=download\r\nhttps://drive.google.com/u/0/uc?id=1BFYF05rx-DK9Eb5hgoIgd6EcB8zOI-zu&export=download\r\nhttps://drive.google.com/u/0/uc?id=1Cz1Un9p8Xn9IpEMMrg2kXSDt0dnjxc4z&export=download\r\nhttps://drive.google.com/u/0/uc?id=1H7FphKVVCYoH49sUXl79CuztEfJLaKoF&export=download\r\nhttps://drive.google.com/u/0/uc?id=1NAeuWLgYBzLwU5jCdkrtj4_PRUocuvlb&export=download\r\nhttps://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download\r\nhttps://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/\r\nhttps://drive.google.com/u/0/uc?id=1R1jR4DcH2UEaM1ZwDSRHdfTGvkCNu6NW&export=download\r\nhttps://drive.google.com/u/0/uc?id=1hDHeoFIfQzJec1NgZNXh3CTNbchiIvuG&export=download\r\nhttps://drive.google.com/u/0/uc?id=1wxwqnWGRzwvc_-ugRoFX8BPgpO3Q7sch&export=download\r\nhttps://drive.google.com/u/0/uc?id=1ydsOTvBZXKqcRvXawOuePrJ99slOEbkk&export=download\r\nhttps://drive.google.com/uc?export=download&id=0BwmD_VLjROrfTHk4NFg2SndKcjQ\r\nhttps://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k\r\nhttps://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbZlU4dXhHTFhZQU0\r\nhttps://drive.google.com/uc?export=download&id=0Bz8a_Dbh9Qhbd2JNdDBsQUdocVU\r\nhttps://drive.google.com/uc?export=download&id=1-w-0uqaC6hnRn1F_3XqJEvi09zlcTIhX\r\nhttps://drive.google.com/uc?export=download&id=11wMGqNVSwwk6zUnDaJEgm3qT71kAHeff\r\nhttps://drive.google.com/uc?export=download&id=17FGi8KI9N9SuGe7elM8qU8_3fx4sfgTr\r\nhttps://drive.google.com/uc?export=download&id=1AHUm1-_V9GCtGuDcc8XrMUCJE8B-HHoL\r\nhttps://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U\r\nhttps://drive.google.com/uc?export=download&id=1Ev4RqWcPsLI9rgOGAKh-_dFKqcEZ1u-G\r\nhttps://drive.google.com/uc?export=download&id=1GTHUJxxmjLmG2lnF9dwRgIDRFZaOY3-F\r\nhttps://drive.google.com/uc?export=download&id=1GcUN6mytEcOMBBOvjJOQzBmEkc-LdgQg\r\nhttps://drive.google.com/uc?export=download&id=1J3mucMFTWrgAYa3LuBZoLRR3CzzYD3fa\r\nhttps://drive.google.com/uc?export=download&id=1Jjhbal535VVz2ap4v4r_rN1UEHTdLK5P\r\nhttps://drive.google.com/uc?export=download&id=1L7aoUXzHPzyzQ0ns4ApBbYepsjFOtXil\r\nhttps://drive.google.com/uc?export=download&id=1M1M5yIOyjKWGprc3LUeVVwxgKXxgpqxm\r\nhttps://
drive.google.com/uc?export=download&id=1Nug7-Sri50mkJL4GrWw6C2ZIbfeU-6Am\r\nhttps://drive.google.com/uc?export=download&id=1PGa8j1_IqxiGTc3SU6NMB38sAzxCPS34\r\nhttps://drive.google.com/uc?export=download&id=1QsV8C5EPJrQl37mwva_5-IJOrCaOi2tH\r\nhttps://drive.google.com/uc?export=download&id=1RsGLINVce-0GsDkCLDuLZmoLuzfmoCuQ\r\nhttps://drive.google.com/uc?export=download&id=1TuWH7uwu6V90QWmZn25qhou1rm97Egmn\r\nhttps://drive.google.com/uc?export=download&id=1U7WdBpd9kJ85S7BbBhWUSiy9NnXrKdO6\r\nhttps://drive.google.com/uc?export=download&id=1USoQ8lJgN8kAWnUnRrupMGrPMLlDVqlV\r\nhttps://drive.google.com/uc?export=download&id=1Uit4Og1pk-br_0UJIO5sdhApyhTuHzqo\r\nhttps://drive.google.com/uc?export=download&id=1Z2ty5hU0tIGRZRDlFQZLO7b5vijRfvo0\r\nhttps://drive.google.com/uc?export=download&id=1ZyFGufe4puX3vjGPbp4xg9Hca3Gwq22g\r\nhttps://drive.google.com/uc?export=download&id=1ZzlIQvw1KNBG97QQCfdatvVrrbeLaM1u\r\nhttps://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0\r\nhttps://drive.google.com/uc?export=download&id=1__EjA6oZsgXQpggPm-h54jZu3kP6Y6zu\r\nhttps://drive.google.com/uc?export=download&id=1aHPVfC5TrlnUjehtagVZoDfq4VccgaNT\r\nhttps://drive.google.com/uc?export=download&id=1cqu_YAgvlyVSzzjcUyP1Cz7q0k8Pw7vN\r\nhttps://drive.google.com/uc?export=download&id=1dUIqVwvoZAtbX_-z5axCoe97XNcFo1No\r\nhttps://drive.google.com/uc?export=download&id=1eTtRs5cUlBP5dXsx-FTAlmXuB6JQi2qj\r\nhttps://drive.google.com/uc?export=download&id=1fUR3MqJ8jTMka6owA0S-Fe6aHmiophc_\r\nhttps://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF\r\nhttps://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx\r\nhttps://drive.google.com/uc?export=download&id=1meSNZHxd_0TZLKCRCYGN-Ke3IA5c1qOE\r\nhttps://drive.google.com/uc?export=download&id=1okwGJiOZmTpNRNgJLCnjFF4Q0H1z4l6_\r\nhttps://drive.google.com/uc?export=download&id=1phryJg4FjCFkn0mSCqIOP2-FscAeKGV0\r\nhttps://drive.google.com/uc?export=download&id=1s8NSFT4Kz0caKZ4VybPNzt88F8ZanprY\r\nhttps://drive.google.com/uc?export=download&id=1vRY2wM6rlOZrf9exGTm5pXj5ExlVwJ0C\r\nhttps://drive.google.com/uc?export=download&id=1ytVZ4AhubFDOEL7o7XrIRIyhU8g9wvKA\r\nhttps://drive.google.com/uc?id=12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X\r\nhttps://drive.google.com/uc?id=1PGH5H_oW7wUvMw_5xaXvbEN7DFll-wDX\r\nhttps://github.com/MaazAmjad/Datasets-for-Urdu-news/blob/master/Urdu%20Fake%20News%20Dataset.zip?raw=true\r\nhttps://github.com/TevenLeScao/glucose/blob/master/GLUCOSE_training_data.zip?raw=true\r\nhttps://github.com/TevenLeScao/what-time-is-it/blob/master/gutenberg_time_phrases.zip?raw=true\r\nhttps://github.com/aviaefrat/cryptonite/blob/main/data/cryptonite-official-split.zip?raw=true\r\nhttps://github.com/facebookresearch/Imppres/blob/master/dataset/IMPPRES.zip?raw=true\r\nhttps://github.com/ljos/navnkjenner/blob/master/data/bokmaal/no_bokmaal-ud-train.bioes?raw=true\r\nhttps://github.com/ljos/navnkjenner/blob/master/data/nynorsk/no_nynorsk-ud-train.bioes?raw=true\r\nhttps://github.com/ljos/navnkjenner/blob/master/data/samnorsk/no_samnorsk-ud-train.bioes?raw=true\r\nhttps://github.com/mirfan899/Urdu/blob/master/sentiment/imdb_urdu_reviews.csv.tar.gz?raw=true\r\nhttps://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew/blob/master/data/morph_train.tsv?raw=true\r\nhttps://github.com/omilab/Neural-Sentiment-Analyzer-for-Modern-Hebrew/blob/master/data/token_train.tsv?raw=true\r\nhttps://lindat.mff.cuni.cz/repository/xmlui/bitstream/handle/11858/00-097C-0000-0023-625F-0/hindencorp05.plaintext.gz?sequence=3&isAllow
ed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/299/nchlt_afrikaans_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/312/nchlt_isixhosa_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/319/nchlt_isizulu_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/328/nchlt_sepedi_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/334/nchlt_sesotho_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/341/nchlt_setswana_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://repo.sadilar.org/bitstream/handle/20.500.12185/346/nchlt_siswati_named_entity_annotated_corpus.zip?sequence=3&isAllowed=y\r\nhttps://www.dropbox.com/s/tohrsllcfy7rch4/SimpleQuestions_v2.tgz?dl=1\r\nhttps://zenodo.org/record/1043504/files/corpus-webis-tldr-17.zip?download=1\r\nhttps://zenodo.org/record/1489920/files/articles-training-byarticle-20181122.zip?download=1\r\nhttps://zenodo.org/record/1489920/files/articles-training-bypublisher-20181122.zip?download=1\r\nhttps://zenodo.org/record/2787612/files/SICK.zip?download=1\r\nhttps://zenodo.org/record/3553423/files/Swahili%20data.zip?download=1\r\nhttps://zenodo.org/record/3707949/files/tapaco_v1.0.zip?download=1\r\nhttps://zenodo.org/record/4300294/files/train.csv?download=1\r\n```\r\n\r\n",
"Hi @severo, I just saw your comment. Thank you.\r\n\r\nFinally I just swapped the 2 parsings: first I extract extension and then I remove query parameters. π ",
"OK :) Maybe we should add some unit tests to ensure we improve the detection without regressions (it's Friday afternoon, I trust the unit tests more than my analysis of the code)",
"Great! For the tests, I think we should also add some URLs in the form: `http://ufal.ms.mff.cuni.cz/umc/005-en-ur/download.php?f=umc005-corpus.zip` to be sure they are still correctly detected."
] | 1,633,089,934,000 | 1,633,337,293,000 | 1,633,337,293,000 | MEMBER | null | Fix `_get_extraction_protocol` to remove all query parameters, like `?raw=true`, `?dl=1`,... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2996/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2996",
"html_url": "https://github.com/huggingface/datasets/pull/2996",
"diff_url": "https://github.com/huggingface/datasets/pull/2996.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2996.patch",
"merged_at": 1633337293000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2995/comments | https://api.github.com/repos/huggingface/datasets/issues/2995/events | https://github.com/huggingface/datasets/pull/2995 | 1,013,143,868 | PR_kwDODunzps4sjThd | 2,995 | Fix trivia_qa unfiltered | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI fails due to missing tags, but they will be added in https://github.com/huggingface/datasets/pull/2949"
] | 1,633,082,023,000 | 1,633,082,651,000 | 1,633,082,650,000 | MEMBER | null | Fix https://github.com/huggingface/datasets/issues/2993 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2995/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2995/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2995",
"html_url": "https://github.com/huggingface/datasets/pull/2995",
"diff_url": "https://github.com/huggingface/datasets/pull/2995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2995.patch",
"merged_at": 1633082650000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2994/comments | https://api.github.com/repos/huggingface/datasets/issues/2994/events | https://github.com/huggingface/datasets/pull/2994 | 1,013,000,475 | PR_kwDODunzps4si4I2 | 2,994 | Fix loading compressed CSV without streaming | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,073,339,000 | 1,633,103,596,000 | 1,633,103,596,000 | MEMBER | null | When implementing support to stream CSV files (https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782), a regression was introduced preventing loading compressed CSV files in non-streaming mode.
This PR fixes it, allowing compressed and uncompressed CSV files to be loaded in both streaming and non-streaming mode.
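As a quick sanity check of the restored behavior (the file path below is illustrative), both modes should now accept a gzip-compressed CSV:
```python
from datasets import load_dataset

# Non-streaming mode (the path that had regressed) and streaming mode, side by side.
ds = load_dataset("csv", data_files="data/train.csv.gz", split="train")
ds_streamed = load_dataset("csv", data_files="data/train.csv.gz", split="train", streaming=True)
```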
Fix #2977. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2994/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2994/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2994",
"html_url": "https://github.com/huggingface/datasets/pull/2994",
"diff_url": "https://github.com/huggingface/datasets/pull/2994.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2994.patch",
"merged_at": 1633103595000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2993/comments | https://api.github.com/repos/huggingface/datasets/issues/2993/events | https://github.com/huggingface/datasets/issues/2993 | 1,012,702,665 | I_kwDODunzps48XJ3J | 2,993 | Can't download `trivia_qa/unfiltered` | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"wooo that was fast! thank you @lhoestq !\r\nit is able to process now, though it's ignoring all files and ending up with 0 examples now haha :/\r\n\r\nFor subset \"unfiltered\":\r\n```python\r\n>>> load_dataset(\"trivia_qa\", \"unfiltered\")\r\nDownloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/910043a609bb2bdf62b4874f68e0c24fb648cf81e40a358f4bd54c919d72c9ab...\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 1354.53it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 2/2 [00:00<00:00, 40.60it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py\", line 1198, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 647, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 748, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=2906575347, num_examples=10832, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='validation', num_bytes=3038966234, num_examples=11313, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='validation', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}]\r\n```\r\nFor subset \"rc\":\r\n```python\r\n>>> load_dataset(\"trivia_qa\", \"rc\")\r\nDownloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/rc/1.1.0/910043a609bb2bdf62b4874f68e0c24fb648cf81e40a358f4bd54c919d72c9ab...\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 3806.08it/s]\r\n100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:00<00:00, 51.57it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py\", line 1198, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 647, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 748, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise 
NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=1577814583, num_examples=17210, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='train', num_bytes=12750976012, num_examples=138384, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='validation', num_bytes=1688535379, num_examples=18669, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='validation', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}]\r\n```\r\n\r\nCould you look into that when you get a chance?\r\nI wonder if it's not something they changed on the file to download? i couldn't find any information",
"@VictorSanh have you tried passing `download_mode=\"force_redownload\"`?\r\n```python\r\nds = load_dataset(\"trivia_qa\", \"unfiltered\", download_mode=\"force_redownload\")\r\n```",
"I aggressively rmed caches, especially rming the `datasets/downloads/extracted/c3d265fa20d99a147a76e4f5e...` solved the issue.\r\nthank you both!\r\n"
] | 1,633,042,818,000 | 1,633,115,243,000 | 1,633,115,242,000 | MEMBER | null | ## Describe the bug
For some reason, I can't download `trivia_qa/unfiltered`. A file seems to be missing... I am able to see it fine through the viewer, though...
## Steps to reproduce the bug
```python
>>> from datasets import load_dataset
>>> load_dataset("trivia_qa", "unfiltered")
Downloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6...
Traceback (most recent call last):
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 251, in _add_context
with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/gpfsscratch/rech/six/commun/datasets/downloads/extracted/9fcb7eddc6afd46fd074af3c5128931dfe4b548f933c925a23847faf4c1995ad/evidence/wikipedia/Peanuts.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py", line 852, in load_dataset
use_auth_token=use_auth_token,
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 616, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split
disable=bool(logging.get_verbosity() == logging.NOTSET),
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 303, in _generate_examples
example = parse_example(article)
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 274, in parse_example
_add_context(article.get("EntityPages", []), "WikiContext", wiki_dir),
File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 253, in _add_context
except (IOError, datasets.Value("errors").NotFoundError):
File "<string>", line 5, in __init__
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 265, in __post_init__
self.pa_type = string_to_arrow(self.dtype)
File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 134, in string_to_arrow
f"Neither {datasets_dtype} nor {datasets_dtype + '_'} seems to be a pyarrow data type. "
ValueError: Neither errors nor errors_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions
```
## Expected results
I am able to load another subset (`rc`), but unable to load this one.
I am not sure why the try/except doesn't catch it...
https://github.com/huggingface/datasets/blob/9675a5a1e7b99a86f9c250f6ea5fa5d1e6d5cc7d/datasets/trivia_qa/trivia_qa.py#L253
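For what it's worth, the handler builds `datasets.Value("errors").NotFoundError` at except time, which itself raises before anything can be caught. A minimal sketch of a safer handler (our assumption, not the fix actually shipped in the repo):
```python
import os

# Hypothetical rewrite of the file read inside _add_context (names follow the script above).
def _read_context_file(file_dir, fname):
    try:
        with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
            return f.read()
    except OSError:  # covers IOError/FileNotFoundError without constructing a bogus exception type
        return ""  # fall back to an empty context, as the script appears to intend
```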
## Environment info
- `datasets` version: 1.11.0
- Platform: Linux-4.18.0-147.51.2.el8_1.x86_64-x86_64-with-redhat-8.1-Ootpa
- Python version: 3.7.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2993/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2992/comments | https://api.github.com/repos/huggingface/datasets/issues/2992/events | https://github.com/huggingface/datasets/pull/2992 | 1,012,325,594 | PR_kwDODunzps4sg4ZP | 2,992 | Fix f1 metric with None average | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,015,917,000 | 1,633,097,859,000 | 1,633,097,858,000 | MEMBER | null | Fix #2979. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2992/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2992/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2992",
"html_url": "https://github.com/huggingface/datasets/pull/2992",
"diff_url": "https://github.com/huggingface/datasets/pull/2992.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2992.patch",
"merged_at": 1633097858000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2990/comments | https://api.github.com/repos/huggingface/datasets/issues/2990/events | https://github.com/huggingface/datasets/pull/2990 | 1,012,097,418 | PR_kwDODunzps4sgLt5 | 2,990 | Make Dataset.map accept list of np.array | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,633,003,734,000 | 1,633,096,666,000 | 1,633,096,666,000 | MEMBER | null | Fix #2987. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2990/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2990",
"html_url": "https://github.com/huggingface/datasets/pull/2990",
"diff_url": "https://github.com/huggingface/datasets/pull/2990.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2990.patch",
"merged_at": 1633096665000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2989/comments | https://api.github.com/repos/huggingface/datasets/issues/2989/events | https://github.com/huggingface/datasets/pull/2989 | 1,011,220,375 | PR_kwDODunzps4sdlt1 | 2,989 | Add CommonLanguage | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,936,090,000 | 1,633,109,799,000 | 1,633,107,603,000 | CONTRIBUTOR | null | This PR adds the Common Language dataset (https://zenodo.org/record/5036977)
The dataset is intended for language-identification speech classifiers and is already used by models on the Hub:
* https://huggingface.co/speechbrain/lang-id-commonlanguage_ecapa
* https://huggingface.co/anton-l/wav2vec2-base-langid
cc @patrickvonplaten | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2989/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2989/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2989",
"html_url": "https://github.com/huggingface/datasets/pull/2989",
"diff_url": "https://github.com/huggingface/datasets/pull/2989.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2989.patch",
"merged_at": 1633107603000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2987/comments | https://api.github.com/repos/huggingface/datasets/issues/2987/events | https://github.com/huggingface/datasets/issues/2987 | 1,011,026,141 | I_kwDODunzps48Qwjd | 2,987 | ArrowInvalid: Can only convert 1-dimensional array values | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @NielsRogge, thanks for reporting!\r\n\r\nIn `datasets`, we were handling N-dimensional arrays only when passed as an instance of `np.array`, not when passed as a list of `np.array`s.\r\n\r\nI'm fixing it."
] | 1,632,925,132,000 | 1,633,096,665,000 | 1,633,096,665,000 | NONE | null | ## Describe the bug
For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset:
```python
from PIL import Image

# `processor` comes from the notebook context (e.g. a LayoutLMv2 or ViT processor instance).
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
return encoded_inputs
```
Full trace:
```
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-8-0fc3efc6f0c2> in <module>()
27
28 train_dataset = datasets['train'].map(preprocess_data, batched=True, remove_columns=datasets['train'].column_names,
---> 29 features=features)
30 test_dataset = datasets['test'].map(preprocess_data, batched=True, remove_columns=datasets['test'].column_names,
31 features=features)
13 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1701 new_fingerprint=new_fingerprint,
1702 disable_tqdm=disable_tqdm,
-> 1703 desc=desc,
1704 )
1705 else:
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
183 }
184 # apply actual function
--> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
187 # re-apply format to the output
/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
396 # Call actual function
397
--> 398 out = func(self, *args, **kwargs)
399
400 # Update fingerprint of in-place transforms + update in-place history of transforms
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2063 writer.write_table(batch)
2064 else:
-> 2065 writer.write_batch(batch)
2066 if update_data and writer is not None:
2067 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
409 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
410 typed_sequence_examples[col] = typed_sequence
--> 411 pa_table = pa.Table.from_pydict(typed_sequence_examples)
412 self.write_table(pa_table, writer_batch_size)
413
/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
106 storage = numpy_to_pyarrow_listarray(self.data, type=type.value_type)
107 else:
--> 108 storage = pa.array(self.data, type.storage_dtype)
109 out = pa.ExtensionArray.from_storage(type, storage)
110 elif isinstance(self.data, np.ndarray):
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Can only convert 1-dimensional array values
```
It can be fixed by adding the following line:
```diff
def preprocess_data(examples):
images = [Image.open(path).convert("RGB") for path in examples['image_path']]
words = examples['words']
boxes = examples['bboxes']
word_labels = examples['ner_tags']
encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels,
padding="max_length", truncation=True)
+ encoded_inputs["image"] = np.array(encoded_inputs["image"])
return encoded_inputs
```
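To illustrate the two input forms (shapes below are made up for the example): `map` choked on a list of N-dimensional arrays but accepted a single stacked array, which is what the added line produces:
```python
import numpy as np

# Hypothetical batch of two RGB images.
images_as_list = [np.zeros((3, 224, 224), dtype=np.uint8) for _ in range(2)]  # failed inside map()
images_stacked = np.array(images_as_list)  # one array of shape (2, 3, 224, 224): accepted
```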
However, it would be great if this could be fixed within Datasets itself. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2987/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2986/comments | https://api.github.com/repos/huggingface/datasets/issues/2986/events | https://github.com/huggingface/datasets/pull/2986 | 1,010,792,783 | PR_kwDODunzps4scSHR | 2,986 | Refac module factory + avoid etag requests for hub datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> One thing is that I still don't know at what extent we want to keep backward compatibility for prepare_module. For now I just kept it (except I removed two parameters) just in case, but it's not used anywhere anymore.\r\n\r\nFYI, various other projects currently use it, thus clearly a major version would be required:\r\n\r\nhttps://github.com/search?q=org%3Ahuggingface+prepare_module&type=code",
"Yea so I kept `prepare_module` and changed it to use all the factories I added, so all the use cases in the link you shared are still working. The only two parameters I removed are minor IMO and were a bit hacky anyway (return_resolved_file_path and return_associated_base_path). I think they were only used internally in `datasets` but let me know if you're aware of a use case I didn't think of.",
"I think I'm done with the tests :) I'll do the comments/docs and then we just wait for https://github.com/huggingface/huggingface_hub/pull/373 to get merged",
"When there's a new release of `huggingface_hub` (probably on monday), it will fix the CI.\r\n\r\nThe PR is ready for review. Let me know if I need to clarify some parts",
"One additional change I did: the tests won't affect the number of downloads on the website anymore. And users can choose to not update the number of downloads with `HF_UPDATE_DOWNLOAD_COUNTS=0`",
"CI failures are simply due to RAM issues with circleci workers.\r\nAnd on windows there is an issue with installing `ruamel.yaml` from the bump of `huggingface_hub` (fixed on master)"
] | 1,632,912,120,000 | 1,633,950,353,000 | 1,633,950,352,000 | MEMBER | null | ## Refactor the module factory
When trying to extend the `data_files` logic to avoid doing unnecessary ETag requests, I noticed that the module preparation mechanism needed a refactor:
- the function was 600 lines long
- it was not readable
- it contained many different cases that made it complex to maintain
- it was hard to properly test it
- it was hard to extend without breaking anything
The module preparation mechanism is in charge of taking the name of a dataset or a metric given by the user (ex: "squad", "accuracy", "lhoestq/test", "path/to/my/script.py", "path/to/my/data/directory", "json", "csv") and returning a module (possibly downloaded from the Hub) that contains the dataset builder or the metric class to use.
### Implementation details
I decided to separate all these use cases into different dataset/metric module factories.
First, the metric module factories:
- **CanonicalMetricModuleFactory**: "accuracy", "rouge", ...
- **LocalMetricModuleFactory**: "path/to/my/metric.py"
Then, the dataset module factories:
- **CanonicalDatasetModuleFactory**: "squad", "glue", ...
- **CommunityDatasetModuleFactoryWithScript**: "lhoestq/test"
- **CommunityDatasetModuleFactoryWithoutScript**: "lhoestq/demo1"
- **PackagedDatasetModuleFactory**: "json", "csv", ...
- **LocalDatasetModuleFactoryWithScript**: "path/to/my/script.py"
- **LocalDatasetModuleFactoryWithoutScript**: "path/to/my/data/directory"
And finally, additional factories when users have no internet:
- **CachedDatasetModuleFactory**
- **CachedMetricModuleFactory**
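To make the dispatch concrete, here is a rough mapping of user inputs to the factories above (illustrative only; the actual selection logic lives in `datasets/load.py`):
```python
# Which factory handles which kind of name/path (examples, not an exhaustive table).
factory_for_input = {
    "squad": "CanonicalDatasetModuleFactory",
    "lhoestq/test": "CommunityDatasetModuleFactoryWithScript",
    "csv": "PackagedDatasetModuleFactory",
    "path/to/my/script.py": "LocalDatasetModuleFactoryWithScript",
    "path/to/my/data/directory": "LocalDatasetModuleFactoryWithoutScript",
}
```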
### Breaking changes
One thing is that I still don't know to what extent we want to keep backward compatibility for `prepare_module`. For now I kept it just in case (except that I removed two parameters), but it's not used anywhere anymore.
## Avoid etag requests for hub datasets
To do this I added a class `DataFilesDict` that can be hashed to define the cache directory of the dataset.
It contains the usual data files formatted as `{"train": ["train.txt"]}` for example.
But each list of files is a `DataFilesList` that also has an `origin_metadata` attribute containing metadata about the origin of each file:
- for URLs: it stores the ETags of the files
- for local files: it stores the last modification date
- for files from a Hugging Face repository on the Hub: it stores the pattern (`*`, `*.csv`, "train.txt", etc.) and the commit sha of the repository (so there are no ETag requests!)
This way, if any file changes, the hash of the `DataFilesDict` changes too!
You can instantiate a `DataFilesDict` by using patterns for local/remote files or files in a HF repository:
- for local/remote files: `DataFilesDict.from_local_or_remote(patterns)`
- for files in a HF repository: `DataFilesDict.from_hf_repo(patterns, dataset_info)`
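A minimal usage sketch (the patterns are illustrative, and the import path is assumed from this PR's layout):
```python
from datasets.data_files import DataFilesDict  # module path assumed from this PR

patterns = {"train": ["train.csv"], "test": ["test.csv"]}
data_files = DataFilesDict.from_local_or_remote(patterns)
# Hashing data_files now reflects ETags / modification dates / commit shas,
# so the dataset cache directory changes whenever any source file changes.
```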
Fix #2859
## TODO
Fix the latest test:
- [x] fix the call to dataset_info in offline mode (related to https://github.com/huggingface/huggingface_hub/issues/372)
Add some more tests:
- [x] test all the factories
- [x] test the new data files logic
Other:
- [x] docstrings
- [x] comments | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2986/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2986/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2986",
"html_url": "https://github.com/huggingface/datasets/pull/2986",
"diff_url": "https://github.com/huggingface/datasets/pull/2986.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2986.patch",
"merged_at": 1633950351000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2985/comments | https://api.github.com/repos/huggingface/datasets/issues/2985/events | https://github.com/huggingface/datasets/pull/2985 | 1,010,500,433 | PR_kwDODunzps4sbbbo | 2,985 | add new dataset kan_hope | {
"login": "adeepH",
"id": 46108405,
"node_id": "MDQ6VXNlcjQ2MTA4NDA1",
"avatar_url": "https://avatars.githubusercontent.com/u/46108405?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adeepH",
"html_url": "https://github.com/adeepH",
"followers_url": "https://api.github.com/users/adeepH/followers",
"following_url": "https://api.github.com/users/adeepH/following{/other_user}",
"gists_url": "https://api.github.com/users/adeepH/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adeepH/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adeepH/subscriptions",
"organizations_url": "https://api.github.com/users/adeepH/orgs",
"repos_url": "https://api.github.com/users/adeepH/repos",
"events_url": "https://api.github.com/users/adeepH/events{/privacy}",
"received_events_url": "https://api.github.com/users/adeepH/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,892,828,000 | 1,633,107,319,000 | 1,633,107,319,000 | CONTRIBUTOR | null | ## Adding a Dataset
- **Name:** *KanHope*
- **Description:** *A code-mixed English-Kannada dataset for Hope speech detection*
- **Task:** *Binary Text Classification*
- **Paper:** *https://arxiv.org/abs/2108.04616*
- **Data:** *https://github.com/adeepH/kan_hope/tree/main/dataset*
- **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India* | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2985/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2985/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2985",
"html_url": "https://github.com/huggingface/datasets/pull/2985",
"diff_url": "https://github.com/huggingface/datasets/pull/2985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2985.patch",
"merged_at": 1633107319000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2984/comments | https://api.github.com/repos/huggingface/datasets/issues/2984/events | https://github.com/huggingface/datasets/issues/2984 | 1,010,484,326 | I_kwDODunzps48OsRm | 2,984 | Exceeded maximum rows when reading large files | {
"login": "zijwang",
"id": 25057983,
"node_id": "MDQ6VXNlcjI1MDU3OTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/25057983?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zijwang",
"html_url": "https://github.com/zijwang",
"followers_url": "https://api.github.com/users/zijwang/followers",
"following_url": "https://api.github.com/users/zijwang/following{/other_user}",
"gists_url": "https://api.github.com/users/zijwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zijwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zijwang/subscriptions",
"organizations_url": "https://api.github.com/users/zijwang/orgs",
"repos_url": "https://api.github.com/users/zijwang/repos",
"events_url": "https://api.github.com/users/zijwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/zijwang/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @zijwang, thanks for reporting this issue.\r\n\r\nYou did not mention which `datasets` version you are using, but looking at the code in the stack trace, it seems you are using an old version.\r\n\r\nCould you please update `datasets` (`pip install -U datasets`) and check if the problem persists?"
] | 1,632,890,962,000 | 1,634,018,742,000 | 1,634,018,742,000 | NONE | null | ## Describe the bug
When using `load_dataset` with JSON files, if the files are too large, there will be an "Exceeded maximum rows" error.
## Steps to reproduce the bug
```python
from datasets import load_dataset

# `data_files` is the reporter's list of JSON file paths (not shown); a single file has ~3M rows.
dataset = load_dataset('json', data_files=data_files)
```
## Expected results
No error
## Actual results
```
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
134 with open(file, encoding="utf-8") as f:
--> 135 dataset = json.load(f)
136 except json.JSONDecodeError:
~/anaconda3/envs/python/lib/python3.9/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
292 """
--> 293 return loads(fp.read(),
294 cls=cls, object_hook=object_hook,
~/anaconda3/envs/python/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
~/anaconda3/envs/python/lib/python3.9/json/decoder.py in decode(self, s, _w)
339 if end != len(s):
--> 340 raise JSONDecodeError("Extra data", s, end)
341 return obj
JSONDecodeError: Extra data: line 2 column 1 (char 20321)
During handling of the above exception, another exception occurred:
ArrowInvalid Traceback (most recent call last)
<ipython-input-20-ab3718a6482f> in <module>
----> 1 dataset = load_dataset('json', data_files=data_files)
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs)
841
842 # Download and prepare data
--> 843 builder_instance.download_and_prepare(
844 download_config=download_config,
845 download_mode=download_mode,
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
606 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
607 if not downloaded_from_gcs:
--> 608 self._download_and_prepare(
609 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
610 )
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
684 try:
685 # Prepare split will record examples associated to the split
--> 686 self._prepare_split(split_generator, **prepare_split_kwargs)
687 except OSError as e:
688 raise OSError(
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _prepare_split(self, split_generator)
1153 generator = self._generate_tables(**split_generator.gen_kwargs)
1154 with ArrowWriter(features=self.info.features, path=fpath) as writer:
-> 1155 for key, table in utils.tqdm(
1156 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET)
1157 ):
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
135 dataset = json.load(f)
136 except json.JSONDecodeError:
--> 137 raise e
138 raise ValueError(
139 f"Not able to read records in the JSON file at {file}. "
~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
114 while True:
115 try:
--> 116 pa_table = paj.read_json(
117 BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size)
118 )
~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json()
~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Exceeded maximum rows
```
## Environment info
- `datasets` version:
- Platform: Linux
- Python version: 3.9
- PyArrow version: 4.0.1
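As a side note from us (not from the thread): if upgrading is not an option and the data is in JSON Lines format, reading it in chunks with pandas and building the dataset from the result may work around the block-size limit — a hedged sketch with a hypothetical file name:
```python
import pandas as pd
from datasets import Dataset

# pandas reads the JSON Lines file in chunks of 100k rows, avoiding one giant parse.
chunks = pd.read_json("data.jsonl", lines=True, chunksize=100_000)
dataset = Dataset.from_pandas(pd.concat(chunks, ignore_index=True))
```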
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2984/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2984/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2983/comments | https://api.github.com/repos/huggingface/datasets/issues/2983/events | https://github.com/huggingface/datasets/pull/2983 | 1,010,263,058 | PR_kwDODunzps4saw_v | 2,983 | added SwissJudgmentPrediction dataset | {
"login": "JoelNiklaus",
"id": 3775944,
"node_id": "MDQ6VXNlcjM3NzU5NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3775944?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JoelNiklaus",
"html_url": "https://github.com/JoelNiklaus",
"followers_url": "https://api.github.com/users/JoelNiklaus/followers",
"following_url": "https://api.github.com/users/JoelNiklaus/following{/other_user}",
"gists_url": "https://api.github.com/users/JoelNiklaus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JoelNiklaus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JoelNiklaus/subscriptions",
"organizations_url": "https://api.github.com/users/JoelNiklaus/orgs",
"repos_url": "https://api.github.com/users/JoelNiklaus/repos",
"events_url": "https://api.github.com/users/JoelNiklaus/events{/privacy}",
"received_events_url": "https://api.github.com/users/JoelNiklaus/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,867,476,000 | 1,633,104,185,000 | 1,633,104,185,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2983/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2983",
"html_url": "https://github.com/huggingface/datasets/pull/2983",
"diff_url": "https://github.com/huggingface/datasets/pull/2983.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2983.patch",
"merged_at": 1633104185000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2982/comments | https://api.github.com/repos/huggingface/datasets/issues/2982/events | https://github.com/huggingface/datasets/pull/2982 | 1,010,118,418 | PR_kwDODunzps4saVLh | 2,982 | Add the Math Aptitude Test of Heuristics dataset. | {
"login": "hacobe",
"id": 91226467,
"node_id": "MDQ6VXNlcjkxMjI2NDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/91226467?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hacobe",
"html_url": "https://github.com/hacobe",
"followers_url": "https://api.github.com/users/hacobe/followers",
"following_url": "https://api.github.com/users/hacobe/following{/other_user}",
"gists_url": "https://api.github.com/users/hacobe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hacobe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hacobe/subscriptions",
"organizations_url": "https://api.github.com/users/hacobe/orgs",
"repos_url": "https://api.github.com/users/hacobe/repos",
"events_url": "https://api.github.com/users/hacobe/events{/privacy}",
"received_events_url": "https://api.github.com/users/hacobe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,856,717,000 | 1,633,117,883,000 | 1,633,090,860,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2982/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2982/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2982",
"html_url": "https://github.com/huggingface/datasets/pull/2982",
"diff_url": "https://github.com/huggingface/datasets/pull/2982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2982.patch",
"merged_at": 1633090860000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2979 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2979/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2979/comments | https://api.github.com/repos/huggingface/datasets/issues/2979/events | https://github.com/huggingface/datasets/issues/2979 | 1,009,634,147 | I_kwDODunzps48Lctj | 2,979 | ValueError when computing f1 metric with average None | {
"login": "asofiaoliveira",
"id": 74454835,
"node_id": "MDQ6VXNlcjc0NDU0ODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asofiaoliveira",
"html_url": "https://github.com/asofiaoliveira",
"followers_url": "https://api.github.com/users/asofiaoliveira/followers",
"following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}",
"gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions",
"organizations_url": "https://api.github.com/users/asofiaoliveira/orgs",
"repos_url": "https://api.github.com/users/asofiaoliveira/repos",
"events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}",
"received_events_url": "https://api.github.com/users/asofiaoliveira/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @asofiaoliveira, thanks for reporting.\r\n\r\nI'm fixing it."
] | 1,632,828,893,000 | 1,633,097,858,000 | 1,633,097,858,000 | NONE | null | ## Describe the bug
When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py:
```python
return {
"f1": f1_score(
references,
predictions,
labels=labels,
pos_label=pos_label,
average=average,
sample_weight=sample_weight,
).item(),
}
```
Since the result is an array with more than one item, the `.item()` call throws the error. I didn't submit a PR because the `.item()` might be needed for the other averages; I'm not very familiar with the library.
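For illustration, a possible fix would be to call `.item()` only in the scalar (averaged) case. This is an untested sketch; `compute_f1` is just a stand-in wrapper for the metric's compute step, not existing library code:
```python
from sklearn.metrics import f1_score

def compute_f1(predictions, references, labels=None, pos_label=1,
               average="binary", sample_weight=None):
    score = f1_score(
        references, predictions, labels=labels, pos_label=pos_label,
        average=average, sample_weight=sample_weight,
    )
    # With average=None, sklearn returns one score per class (an ndarray);
    # .item() is only valid in the scalar, averaged case.
    return {"f1": score.item() if average is not None else score.tolist()}
```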
## Steps to reproduce the bug
```python
from datasets import load_metric
metric = load_metric("f1")
metric.add_batch(predictions=[2,34,1,34,1,2,3], references=[23,52,1,3,523,5,8])
metric.compute(average=None)
```
## Expected results
```
array([0.66666667, 0.        , 0.        , 0.        , 0.        ,
       0.        , 0.        , 0.        , 0.        ])
```
## Actual results
ValueError: can only convert an array of size 1 to a Python scalar
## Environment info
- `datasets` version: 1.12.1
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.5
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2979/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2977/comments | https://api.github.com/repos/huggingface/datasets/issues/2977/events | https://github.com/huggingface/datasets/issues/2977 | 1,009,378,692 | I_kwDODunzps48KeWE | 2,977 | Impossible to load compressed csv | {
"login": "Valahaar",
"id": 19476123,
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Valahaar",
"html_url": "https://github.com/Valahaar",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions",
"organizations_url": "https://api.github.com/users/Valahaar/orgs",
"repos_url": "https://api.github.com/users/Valahaar/repos",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"received_events_url": "https://api.github.com/users/Valahaar/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @Valahaar, thanks for reporting and for your investigation about the source cause.\r\n\r\nYou are right and that commit prevents `pandas` from inferring the compression. On the other hand, @lhoestq did that change to support loading that dataset in streaming mode. \r\n\r\nI'm fixing it."
] | 1,632,813,534,000 | 1,633,103,596,000 | 1,633,103,595,000 | CONTRIBUTOR | null | ## Describe the bug
It is no longer possible to load a dataset from a compressed CSV file.
## Steps to reproduce the bug
```python
load_dataset('csv', data_files=['/path/to/csv.bz2'])
```
## Problem and possible solution
This used to work, but the commit that broke it is [this one](https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782).
`pandas` usually gets the compression information from the filename itself (which was previously passed directly). Now that it receives a file descriptor instead, it might be good to auto-infer the compression, or to let the user pass a `compression` kwarg to `load_dataset` (or maybe warn the user if the file ends with a commonly known compression extension?).
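For illustration, the inference could look something like the sketch below; `infer_compression` and the extension map are hypothetical, not existing `datasets` code:
```python
import os

# Extension -> pandas compression scheme, following pandas' own conventions.
_COMPRESSION_BY_EXT = {".bz2": "bz2", ".gz": "gzip", ".xz": "xz", ".zip": "zip"}

def infer_compression(filepath):
    """Return a pandas-style compression string for `filepath`, or None."""
    ext = os.path.splitext(str(filepath))[1].lower()
    return _COMPRESSION_BY_EXT.get(ext)

# e.g.: pd.read_csv(file_obj, compression=infer_compression("/path/to/csv.bz2"))
```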
## Environment info
- `datasets` version: 1.10.0 (and over)
- Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2977/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2977/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2976 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2976/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2976/comments | https://api.github.com/repos/huggingface/datasets/issues/2976/events | https://github.com/huggingface/datasets/issues/2976 | 1,008,647,889 | I_kwDODunzps48Hr7R | 2,976 | Can't load dataset | {
"login": "mskovalova",
"id": 77006774,
"node_id": "MDQ6VXNlcjc3MDA2Nzc0",
"avatar_url": "https://avatars.githubusercontent.com/u/77006774?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mskovalova",
"html_url": "https://github.com/mskovalova",
"followers_url": "https://api.github.com/users/mskovalova/followers",
"following_url": "https://api.github.com/users/mskovalova/following{/other_user}",
"gists_url": "https://api.github.com/users/mskovalova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mskovalova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mskovalova/subscriptions",
"organizations_url": "https://api.github.com/users/mskovalova/orgs",
"repos_url": "https://api.github.com/users/mskovalova/repos",
"events_url": "https://api.github.com/users/mskovalova/events{/privacy}",
"received_events_url": "https://api.github.com/users/mskovalova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @mskovalova, \r\n\r\nSome datasets have multiple configurations. Therefore, in order to load them, you have to specify both the *dataset name* and the *configuration name*.\r\n\r\nIn the error message you got, you have a usage example:\r\n- To load the 'wikitext-103-raw-v1' configuration of the 'wikitext' dataset, you should use: \r\n ```python\r\n load_dataset('wikitext', 'wikitext-103-raw-v1')\r\n ```\r\n\r\nIn your case, if you would like to load the 'wikitext-2-v1' configuration of the 'wikitext' dataset, please use:\r\n```python\r\nraw_datasets = load_dataset(\"wikitext\", \"wikitext-2-v1\")\r\n```"
] | 1,632,778,694,000 | 1,632,811,981,000 | 1,632,811,981,000 | NONE | null | I'm trying to load a wikitext dataset
```
from datasets import load_dataset
raw_datasets = load_dataset("wikitext")
```
which raises:
```
ValueError: Config name is missing.
Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1']
Example of usage:
`load_dataset('wikitext', 'wikitext-103-raw-v1')`.
```
If I try
```
from datasets import load_dataset
raw_datasets = load_dataset("wikitext-2-v1")
```
I get:
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/wikitext-2-v1/wikitext-2-v1.py
```
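For reference, the config name is passed as a second argument to `load_dataset` (as the error message's usage example shows), rather than being appended to the dataset name:
```python
from datasets import load_dataset

raw_datasets = load_dataset("wikitext", "wikitext-2-v1")
```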
#### Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic (colab)
- Python version: 3.7.12
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2976/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2976/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2975 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2975/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2975/comments | https://api.github.com/repos/huggingface/datasets/issues/2975/events | https://github.com/huggingface/datasets/pull/2975 | 1,008,444,654 | PR_kwDODunzps4sVAOt | 2,975 | ignore dummy folder and dataset_infos.json | {
"login": "Ishan-Kumar2",
"id": 46553104,
"node_id": "MDQ6VXNlcjQ2NTUzMTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/46553104?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ishan-Kumar2",
"html_url": "https://github.com/Ishan-Kumar2",
"followers_url": "https://api.github.com/users/Ishan-Kumar2/followers",
"following_url": "https://api.github.com/users/Ishan-Kumar2/following{/other_user}",
"gists_url": "https://api.github.com/users/Ishan-Kumar2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ishan-Kumar2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ishan-Kumar2/subscriptions",
"organizations_url": "https://api.github.com/users/Ishan-Kumar2/orgs",
"repos_url": "https://api.github.com/users/Ishan-Kumar2/repos",
"events_url": "https://api.github.com/users/Ishan-Kumar2/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ishan-Kumar2/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,766,143,000 | 1,632,908,738,000 | 1,632,906,338,000 | CONTRIBUTOR | null | Fixes #2877
Added `dataset_infos.json` to the ignored-files list and also added a check to ignore files whose parent directory is `dummy`.
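Roughly, the check behaves like the sketch below (illustrative only; `should_ignore` is a made-up name, not the exact code in the diff):
```python
from pathlib import Path

IGNORED_FILENAMES = {"dataset_infos.json"}

def should_ignore(filepath):
    """Skip metadata files and anything located under a dummy-data directory."""
    path = Path(filepath)
    return path.name in IGNORED_FILENAMES or "dummy" in path.parts
```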
Let me know if it is correct. Thanks :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2975/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2975/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2975",
"html_url": "https://github.com/huggingface/datasets/pull/2975",
"diff_url": "https://github.com/huggingface/datasets/pull/2975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2975.patch",
"merged_at": 1632906338000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2974/comments | https://api.github.com/repos/huggingface/datasets/issues/2974/events | https://github.com/huggingface/datasets/pull/2974 | 1,008,247,787 | PR_kwDODunzps4sUZCX | 2,974 | Actually disable dummy labels by default | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,754,220,000 | 1,632,906,282,000 | 1,632,906,281,000 | CONTRIBUTOR | null | So I might have just changed the docstring instead of the actual default argument value and not realized.
@lhoestq I'm sorry >.> | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2974/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2974",
"html_url": "https://github.com/huggingface/datasets/pull/2974",
"diff_url": "https://github.com/huggingface/datasets/pull/2974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2974.patch",
"merged_at": 1632906281000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2973/comments | https://api.github.com/repos/huggingface/datasets/issues/2973/events | https://github.com/huggingface/datasets/pull/2973 | 1,007,894,592 | PR_kwDODunzps4sTRvk | 2,973 | Fix JSON metadata of masakhaner dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,733,748,000 | 1,632,747,599,000 | 1,632,747,599,000 | MEMBER | null | Fix #2971. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2973/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2973/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2973",
"html_url": "https://github.com/huggingface/datasets/pull/2973",
"diff_url": "https://github.com/huggingface/datasets/pull/2973.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2973.patch",
"merged_at": 1632747598000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2972 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2972/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2972/comments | https://api.github.com/repos/huggingface/datasets/issues/2972/events | https://github.com/huggingface/datasets/issues/2972 | 1,007,808,714 | I_kwDODunzps48EfDK | 2,972 | OSError: Not enough disk space. | {
"login": "qqaatw",
"id": 24835382,
"node_id": "MDQ6VXNlcjI0ODM1Mzgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qqaatw",
"html_url": "https://github.com/qqaatw",
"followers_url": "https://api.github.com/users/qqaatw/followers",
"following_url": "https://api.github.com/users/qqaatw/following{/other_user}",
"gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions",
"organizations_url": "https://api.github.com/users/qqaatw/orgs",
"repos_url": "https://api.github.com/users/qqaatw/repos",
"events_url": "https://api.github.com/users/qqaatw/events{/privacy}",
"received_events_url": "https://api.github.com/users/qqaatw/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Maybe we can change the disk space calculating API from `shutil.disk_usage` to `os.statvfs` in UNIX-like system, which can provide correct results.\r\n```\r\nstatvfs = os.statvfs('path')\r\navail_space_bytes = statvfs.f_frsize * statvfs.f_bavail\r\n```",
"Hi @qqaatw, thanks for reporting.\r\n\r\nCould you please try:\r\n```python\r\ndataset = load_dataset(\"natural_questions\", cache_dir=os.path.abspath(args.dataset_cache_dir))\r\n```",
"@albertvillanova it works! Thanks for your suggestion. Is that a bug of `DownloadConfig`?",
"`DownloadConfig` only sets the location to download the files. On the other hand, `cache_dir` sets the location for both downloading and caching the data. You can find more information here: https://huggingface.co/docs/datasets/loading_datasets.html#cache-directory"
] | 1,632,728,482,000 | 1,632,811,527,000 | 1,632,811,395,000 | CONTRIBUTOR | null | ## Describe the bug
I'm trying to download the `natural_questions` dataset from the Internet, and I've specified a `cache_dir` that is located on a mounted disk with plenty of space. However, the disk space check still reports that the root `/` disk does not have enough space.
The file system structure is shown below. The root `/` has `115G` of disk space available, and `sda1` is mounted at `/mnt` with `1.2T` available:
```
/
/mnt/sda1/path/to/args.dataset_cache_dir
```
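To make the mismatch concrete, one can compare the free space reported at `/` with the free space at the mounted cache directory (a diagnostic sketch; the second path stands in for the real `args.dataset_cache_dir`):
```python
import shutil

# Free space at "/" (what the check appears to look at) vs. the mounted
# cache directory (where the data would actually be written).
for path in ("/", "/mnt/sda1/path/to/dataset_cache_dir"):
    usage = shutil.disk_usage(path)
    print(f"{path}: {usage.free / 2**30:.1f} GiB free")
```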
## Steps to reproduce the bug
```python
dataset_config = DownloadConfig(
cache_dir=os.path.abspath(args.dataset_cache_dir),
resume_download=True,
)
dataset = load_dataset("natural_questions", download_config=dataset_config)
```
## Expected results
Can download the dataset without an error.
## Actual results
The following error is raised:
```
OSError: Not enough disk space. Needed: 134.92 GiB (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size)
```
## Environment info
- `datasets` version: 1.9.0
- Platform: Ubuntu 18.04
- Python version: 3.8.10
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2972/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2971/comments | https://api.github.com/repos/huggingface/datasets/issues/2971/events | https://github.com/huggingface/datasets/issues/2971 | 1,007,696,522 | I_kwDODunzps48EDqK | 2,971 | masakhaner dataset load problem | {
"login": "ontocord",
"id": 8900094,
"node_id": "MDQ6VXNlcjg5MDAwOTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ontocord",
"html_url": "https://github.com/ontocord",
"followers_url": "https://api.github.com/users/ontocord/followers",
"following_url": "https://api.github.com/users/ontocord/following{/other_user}",
"gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ontocord/subscriptions",
"organizations_url": "https://api.github.com/users/ontocord/orgs",
"repos_url": "https://api.github.com/users/ontocord/repos",
"events_url": "https://api.github.com/users/ontocord/events{/privacy}",
"received_events_url": "https://api.github.com/users/ontocord/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @ontocord. We are fixing the wrong metadata."
] | 1,632,718,747,000 | 1,632,747,599,000 | 1,632,747,599,000 | CONTRIBUTOR | null | ## Describe the bug
Masakhaner dataset is not loading
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("masakhaner",'amh')
```
## Expected results
Expected the return of a dataset
## Actual results
```
NonMatchingSplitsSizesError Traceback (most recent call last)
<ipython-input-3-a6abc1161d4c> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("masakhaner",'amh')
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py
in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=639927, num_examples=1751, dataset_name='masakhaner'), 'recorded': SplitInfo(name='train', num_bytes=639911, num_examples=1750, dataset_name='masakhaner')}, {'expected': SplitInfo(name='validation', num_bytes=92768, num_examples=251, dataset_name='masakhaner'), 'recorded': SplitInfo(name='validation', num_bytes=92753, num_examples=250, dataset_name='masakhaner')}, {'expected': SplitInfo(name='test', num_bytes=184286, num_examples=501, dataset_name='masakhaner'), 'recorded': SplitInfo(name='test', num_bytes=184271, num_examples=500, dataset_name='masakhaner')}]
```
## Environment info
Google Colab
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2971/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2970 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2970/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2970/comments | https://api.github.com/repos/huggingface/datasets/issues/2970/events | https://github.com/huggingface/datasets/issues/2970 | 1,007,340,089 | I_kwDODunzps48Cso5 | 2,970 | Magnet’s | {
"login": "rcacho172",
"id": 90449239,
"node_id": "MDQ6VXNlcjkwNDQ5MjM5",
"avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rcacho172",
"html_url": "https://github.com/rcacho172",
"followers_url": "https://api.github.com/users/rcacho172/followers",
"following_url": "https://api.github.com/users/rcacho172/following{/other_user}",
"gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions",
"organizations_url": "https://api.github.com/users/rcacho172/orgs",
"repos_url": "https://api.github.com/users/rcacho172/repos",
"events_url": "https://api.github.com/users/rcacho172/events{/privacy}",
"received_events_url": "https://api.github.com/users/rcacho172/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,632,649,829,000 | 1,632,652,739,000 | 1,632,652,739,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2970/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2969 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2969/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2969/comments | https://api.github.com/repos/huggingface/datasets/issues/2969/events | https://github.com/huggingface/datasets/issues/2969 | 1,007,217,867 | I_kwDODunzps48COzL | 2,969 | medical-dialog error | {
"login": "smeyerhot",
"id": 43877130,
"node_id": "MDQ6VXNlcjQzODc3MTMw",
"avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/smeyerhot",
"html_url": "https://github.com/smeyerhot",
"followers_url": "https://api.github.com/users/smeyerhot/followers",
"following_url": "https://api.github.com/users/smeyerhot/following{/other_user}",
"gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions",
"organizations_url": "https://api.github.com/users/smeyerhot/orgs",
"repos_url": "https://api.github.com/users/smeyerhot/repos",
"events_url": "https://api.github.com/users/smeyerhot/events{/privacy}",
"received_events_url": "https://api.github.com/users/smeyerhot/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @smeyerhot, thanks for reporting.\r\n\r\nYou are right: there is an issue with the dataset metadata. I'm fixing it.\r\n\r\nIn the meantime, you can circumvent the issue by passing `ignore_verifications=True`:\r\n```python\r\nraw_datasets = load_dataset(\"medical_dialog\", \"en\", split=\"train\", download_mode=\"force_redownload\", data_dir=\"./Medical-Dialogue-Dataset-English\", ignore_verifications=True)\r\n```"
] | 1,632,611,324,000 | 1,633,938,402,000 | 1,633,938,402,000 | NONE | null | ## Describe the bug
When I attempt to download the huggingface dataset `medical_dialog`, it errors out midway through.
## Steps to reproduce the bug
```python
raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English")
```
## Expected results
No error
## Actual results
```
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits)
72 ]
73 if len(bad_splits) > 0:
---> 74 raise NonMatchingSplitsSizesError(str(bad_splits))
75 logger.info("All the splits matched successfully.")
76
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}]
```
## Environment info
- `datasets` version: 1.21.1
- Platform: colab
- Python version: colab 3.7
- PyArrow version: N/A
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2969/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2969/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2968/comments | https://api.github.com/repos/huggingface/datasets/issues/2968/events | https://github.com/huggingface/datasets/issues/2968 | 1,007,209,488 | I_kwDODunzps48CMwQ | 2,968 | `DatasetDict` cannot be exported to parquet if the splits have different features | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This is because you have to specify which split corresponds to what file:\r\n```python\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```\r\n\r\nOtherwise it tries to concatenate the two splits, and it fails because they don't have the same features.\r\n\r\nIt works with save_to_disk/load_from_disk because it also stores json files that contain the information about which files goes into which split",
"Wonderful, thanks for the help!",
"I may be mistaken but I think the following doesn't work either:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"lhoestq/custom_squad\")\r\n\r\n\r\ndef identical_answers(e):\r\n e['identical_answers'] = len(set(e['answers']['text'])) == 1\r\n return e\r\n\r\n\r\nds['validation'] = ds['validation'].map(identical_answers)\r\nds['train'].to_parquet(\"./ds/train/split.parquet\")\r\nds['validation'].to_parquet(\"./ds/validation/split.parquet\")\r\n\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```",
"It works on my side as soon as the directories named `ds/train` and `ds/validation` exist (otherwise it returns a FileNotFoundError). What error are you getting ?",
"Also we may introduce a default mapping for the data files:\r\n```python\r\n{\r\n \"train\": [\"*train*\"],\r\n \"test\": [\"*test*\"],\r\n \"validation\": [\"*dev*\", \"valid\"],\r\n}\r\n```\r\nthis way if you name your files according to the splits you won't have to specify the data_files parameter. What do you think ?\r\n\r\nI moved this discussion to #3027 ",
"I'm getting the following error:\r\n\r\n```\r\nDownloading and preparing dataset custom_squad/plain_text to /home/lysandre/.cache/huggingface/datasets/lhoestq___custom_squad)/plain_text/1.0.0/397916d1ae99584877e0fb4f5b8b6f01e66fcbbeff4d178afb30c933a8d0d93a...\r\n100%|ββββββββββ| 2/2 [00:00<00:00, 7760.04it/s]\r\n100%|ββββββββββ| 2/2 [00:00<00:00, 2020.38it/s]\r\n 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py\", line 198, in runfile\r\n pydev_imports.execfile(filename, global_vars, local_vars) # execute the script\r\n File \"/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py\", line 12, in <module>\r\n ds = load_dataset(\"lhoestq/custom_squad\")\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py\", line 1207, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 823, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 207, in map_nested\r\n mapped = [\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 208, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True))\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 143, in _single_map_nested\r\n return function(data_struct)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 854, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 924, in _as_dataset\r\n dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 217, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 238, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 173, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 308, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 327, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py\", line 458, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py\", line 45, in _memory_mapped_arrow_table_from_file\r\n pa_table = opened_stream.read_all()\r\n File \"pyarrow/ipc.pxi\", line 563, in pyarrow.lib.RecordBatchReader.read_all\r\n File \"pyarrow/error.pxi\", line 114, in pyarrow.lib.check_status\r\nOSError: Header-type of 
flatbuffer-encoded Message is not RecordBatch.\r\n```\r\n\r\nTried on current master, after updating latest dependencies and obtained the same result",
"The proposal in #3027 sounds good to me!",
"I just tried again on colab by installing `datasets` from source with pyarrow 3.0.0 and didn't get any error.\r\n\r\nYou error seems to happen when doing\r\n```python\r\nds = load_dataset(\"lhoestq/custom_squad\")\r\n```\r\n\r\nMore specifically it fails when trying to read the arrow file that just got generated. I haven't issues like this before. Can you make sure you have a recent version of `pyarrow` ? Maybe it was an old version that wrote the arrow file and some header was missing.",
"Thank you for your pointer! This seems to have been linked to Python 3.9.7: it works flawlessly with Python 3.8.6. This can be closed, thanks a lot for your help."
] | 1,632,608,319,000 | 1,633,646,862,000 | 1,633,646,846,000 | MEMBER | null | ## Describe the bug
I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly.
For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folders representing individual splits. This works too, as long as the splits have identical features. If a split has different features to neighboring splits, then loading the dataset will fail: a single schema is used to load both splits, resulting in a failure to load the second parquet file.
## Steps to reproduce the bug
The following works as expected:
```python
from datasets import load_dataset
ds = load_dataset("lhoestq/custom_squad")
ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")
brand_new_dataset = load_dataset("ds")
```
Modifying a single split to add a new feature ends up in a crash:
```python
from datasets import load_dataset
ds = load_dataset("lhoestq/custom_squad")
def identical_answers(e):
e['identical_answers'] = len(set(e['answers']['text'])) == 1
return e
ds['validation'] = ds['validation'].map(identical_answers)
ds['train'].to_parquet("./ds/train/split.parquet")
ds['validation'].to_parquet("./ds/validation/split.parquet")
brand_new_dataset = load_dataset("ds")
```
```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 26, in <module>
brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1151, in load_dataset
builder_instance.download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 642, in download_and_prepare
self._download_and_prepare(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 732, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 1194, in _prepare_split
writer.write_table(table)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in write_table
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in <listcomp>
pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema)
File "pyarrow/table.pxi", line 1257, in pyarrow.lib.Table.__getitem__
File "pyarrow/table.pxi", line 1833, in pyarrow.lib.Table.column
File "pyarrow/table.pxi", line 1808, in pyarrow.lib.Table._ensure_integer_index
KeyError: 'Field "identical_answers" does not exist in table schema'
```
It does work, however, to use the `save_to_disk` and `load_from_disk` methods:
```py
from datasets import load_from_disk
ds = load_dataset("lhoestq/custom_squad")
def identical_answers(e):
e['identical_answers'] = len(set(e['answers']['text'])) == 1
return e
ds['validation'] = ds['validation'].map(identical_answers)
ds.save_to_disk("local_path")
brand_new_dataset = load_from_disk("local_path")
```
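Another workaround sketch, besides the per-split `data_files` mapping suggested in the comments: load each file with the packaged `parquet` builder so that no single schema is enforced across splits:
```python
from datasets import DatasetDict, load_dataset

# A single data_files entry lands in the builder's "train" split,
# hence split="train" on both calls.
brand_new_dataset_2 = DatasetDict({
    "train": load_dataset("parquet", data_files="./ds/train/split.parquet", split="train"),
    "validation": load_dataset("parquet", data_files="./ds/validation/split.parquet", split="train"),
})
```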
## Expected results
The saving works correctly, but the loading fails. I would expect either an error when saving or an error-free instantiation of the dataset from the parquet files.
If it's helpful, I've traced a possible patch to the `write_table` method here:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L424-L425
The writer is built only if the parquet writer is `None`, but I expect we would want to build a new writer as the table schema has changed. Furthermore, it relies on having the property `update_features` set to `True` in order to update the features:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L254-L255
but the `ArrowWriter` is instantiated without that option in the `_prepare_split` method of the `ArrowBasedBuilder`:
https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/builder.py#L1190
Updating these two parts to recreate a schema on each split results in an error that is, unfortunately, out of my expertise:
```
File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 27, in <module>
brand_new_dataset = load_dataset("ds")
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1163, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 819, in as_dataset
datasets = utils.map_nested(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 207, in map_nested
mapped = [
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 208, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 143, in _single_map_nested
return function(data_struct)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 850, in _build_single_dataset
ds = self._as_dataset(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 920, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 217, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 238, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 173, in _read_files
pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 308, in _get_table_from_filename
table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 327, in read_table
return table_cls.from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 458, in from_file
table = _memory_mapped_arrow_table_from_file(filename)
File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 45, in _memory_mapped_arrow_table_from_file
pa_table = opened_stream.read_all()
File "pyarrow/ipc.pxi", line 563, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status
OSError: Header-type of flatbuffer-encoded Message is not RecordBatch.
```
## Environment info
- `datasets` version: 1.12.2.dev0
- Platform: Linux-5.14.7-arch1-1-x86_64-with-glibc2.33
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2968/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2968/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2967/comments | https://api.github.com/repos/huggingface/datasets/issues/2967/events | https://github.com/huggingface/datasets/issues/2967 | 1,007,194,837 | I_kwDODunzps48CJLV | 2,967 | Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets | {
"login": "WadeYin9712",
"id": 42200725,
"node_id": "MDQ6VXNlcjQyMjAwNzI1",
"avatar_url": "https://avatars.githubusercontent.com/u/42200725?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/WadeYin9712",
"html_url": "https://github.com/WadeYin9712",
"followers_url": "https://api.github.com/users/WadeYin9712/followers",
"following_url": "https://api.github.com/users/WadeYin9712/following{/other_user}",
"gists_url": "https://api.github.com/users/WadeYin9712/gists{/gist_id}",
"starred_url": "https://api.github.com/users/WadeYin9712/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WadeYin9712/subscriptions",
"organizations_url": "https://api.github.com/users/WadeYin9712/orgs",
"repos_url": "https://api.github.com/users/WadeYin9712/repos",
"events_url": "https://api.github.com/users/WadeYin9712/events{/privacy}",
"received_events_url": "https://api.github.com/users/WadeYin9712/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [] | 1,632,603,495,000 | 1,633,293,262,000 | 1,633,293,262,000 | NONE | null | **Is your feature request related to a problem? Please describe.**
Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets?
**Describe the solution you'd like**
N/A
**Describe alternatives you've considered**
N/A
**Additional context**
This is Da Yin at UCLA. Recently, we have published an EMNLP 2021 paper about geo-diverse visual commonsense reasoning (https://arxiv.org/abs/2109.06860). We propose a new dataset called GD-VCR, a vision-and-language dataset to evaluate how well V&L models perform on scenarios involving geo-location-specific commonsense. We hope to have our V&L dataset incorporated into Huggingface to further promote our project, but I haven't seen many V&L datasets in the current package. Is it possible to add V&L datasets, and if so, how should we prepare them for loading? Thank you very much!
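(For context: a new dataset is usually added through a loading script built on `datasets.GeneratorBasedBuilder`; below is a minimal sketch, with the class name, feature fields, and URL purely illustrative.)
```python
import datasets

class GDVCR(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "image_path": datasets.Value("string"),  # illustrative fields
                    "question": datasets.Value("string"),
                    "answer": datasets.Value("string"),
                }
            )
        )

    def _split_generators(self, dl_manager):
        # placeholder URL; the real data location would go here
        path = dl_manager.download_and_extract("https://example.com/gd-vcr.zip")
        return [datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"path": path})]

    def _generate_examples(self, path):
        # yield (key, example) pairs parsed from the downloaded files
        yield 0, {"image_path": "", "question": "", "answer": ""}
```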
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2967/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2966/comments | https://api.github.com/repos/huggingface/datasets/issues/2966/events | https://github.com/huggingface/datasets/pull/2966 | 1,007,142,233 | PR_kwDODunzps4sRRMs | 2,966 | Upload greek-legal-code dataset | {
"login": "christospi",
"id": 9130406,
"node_id": "MDQ6VXNlcjkxMzA0MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9130406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/christospi",
"html_url": "https://github.com/christospi",
"followers_url": "https://api.github.com/users/christospi/followers",
"following_url": "https://api.github.com/users/christospi/following{/other_user}",
"gists_url": "https://api.github.com/users/christospi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/christospi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/christospi/subscriptions",
"organizations_url": "https://api.github.com/users/christospi/orgs",
"repos_url": "https://api.github.com/users/christospi/repos",
"events_url": "https://api.github.com/users/christospi/events{/privacy}",
"received_events_url": "https://api.github.com/users/christospi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@albertvillanova @lhoestq thank you very much for reviewing! :hugs: \r\n\r\nI 've pushed some updates/changes as requested."
] | 1,632,588,735,000 | 1,634,132,250,000 | 1,634,132,250,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2966/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2966/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2966",
"html_url": "https://github.com/huggingface/datasets/pull/2966",
"diff_url": "https://github.com/huggingface/datasets/pull/2966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2966.patch",
"merged_at": 1634132250000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2964/comments | https://api.github.com/repos/huggingface/datasets/issues/2964/events | https://github.com/huggingface/datasets/issues/2964 | 1,006,605,904 | I_kwDODunzps47_5ZQ | 2,964 | Error when calculating Matthews Correlation Coefficient loaded with `load_metric` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"After some more tests I've realized that this \"issue\" is due to the `numpy.float64` to `float` conversion, but when defining a function named `compute_metrics` as it follows:\r\n\r\n```python\r\ndef compute_metrics(eval_preds):\r\n metric = load_metric(\"matthews_correlation\")\r\n logits, labels = eval_preds\r\n predictions = np.argmax(logits, axis=1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n```\r\n\r\nIt fails when the evaluation metrics are computed in the `Trainer` with the same error code `AttributeError: 'float' object has no attribute 'item'` as the output is not a `numpy.float64`... Maybe I'm doing something wrong, not sure!",
"Ok after some more experiments I've realized that it's an issue from my side, at first I thought it was due to `fp16=True` in `TrainingArguments`, but in the end that may not be the issue, so I'll close this for now and check later, since the mistake is on my side :weary: Sorry for the inconvenience!"
] | 1,632,498,921,000 | 1,632,557,167,000 | 1,632,557,167,000 | NONE | null | ## Describe the bug
After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `π€datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required).
## Steps to reproduce the bug
```python
import torch
predictions = torch.ones((10,))
references = torch.zeros((10,))
from datasets import load_metric
METRIC = load_metric("matthews_correlation")
result = METRIC.compute(predictions=predictions, references=references)
```
## Expected results
We should expect a Python `dict` as follows:
```
{
"matthews_correlation": float()
}
```
as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix implies removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means that the `.item()` call will fail.
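A standalone sketch of the proposed fix, mirroring the `_compute` logic of the linked metric script (illustrative, not the merged change):
```python
from sklearn.metrics import matthews_corrcoef

def compute_matthews(predictions, references, sample_weight=None):
    # matthews_corrcoef already returns a plain Python float,
    # so calling .item() on the result is what raises the AttributeError
    return {
        "matthews_correlation": float(
            matthews_corrcoef(references, predictions, sample_weight=sample_weight)
        ),
    }
```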
## Actual results
```
Traceback (most recent call last):
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main
app()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__
return self.main(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main
rv = self.invoke(ctx)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke
return __callback(*args, **kwargs)
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper
return callback(**use_params) # type: ignore
File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train
metrics = trainer.evaluate()
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate
output = eval_loop(
File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics
res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids)
File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute
output = self._compute(predictions=predictions, references=references, **kwargs)
File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute
"matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(),
AttributeError: 'float' object has no attribute 'item'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2964/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2963/comments | https://api.github.com/repos/huggingface/datasets/issues/2963/events | https://github.com/huggingface/datasets/issues/2963 | 1,006,588,605 | I_kwDODunzps47_1K9 | 2,963 | raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects. | {
"login": "keloemma",
"id": 40454218,
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/keloemma",
"html_url": "https://github.com/keloemma",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"repos_url": "https://api.github.com/users/keloemma/repos",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,632,497,711,000 | 1,632,497,904,000 | 1,632,497,904,000 | NONE | null | ## Describe the bug
A clear and concise description of what the bug is.
I am trying to use `Dataset` to load my file in order to use a BERT embeddings model, but when I finish loading with `Dataset` and want to pass the data to the tokenizer using the `map` function, I get the following error: raise TypeError(
TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
I was able to load my file using `Dataset` before, but since this morning I keep getting this error.
## Steps to reproduce the bug
```python
# Xtrain, ytrain, filename, len_labels = read_file_2(fic)
# Xtrain, lge_size = get_flaubert_layer(Xtrain, path_to_model_lge)
data_preprocessed = make_new_traindata(Xtrain)
my_dict = {"verbatim": data_preprocessed[1], "label": ytrain} # lemme avec conjonction
dataset = Dataset.from_dict(my_dict)
```
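For reference, the function passed to `Dataset.map` must return a `dict` (or `None`); a minimal working sketch with illustrative column names:
```python
from datasets import Dataset

dataset = Dataset.from_dict({"verbatim": ["some text", "more text"], "label": [0, 1]})

def add_length(example):
    # returning a dict updates/creates columns; returning a list raises the TypeError above
    return {"n_chars": len(example["verbatim"])}

dataset = dataset.map(add_length)
```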
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2963/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2961 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2961/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2961/comments | https://api.github.com/repos/huggingface/datasets/issues/2961/events | https://github.com/huggingface/datasets/pull/2961 | 1,006,453,781 | PR_kwDODunzps4sPTXV | 2,961 | Fix CI doc build | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,489,208,000 | 1,632,489,487,000 | 1,632,489,487,000 | MEMBER | null | Pin `fsspec`.
Versions before the issue: 'fsspec-2021.8.1', 's3fs-2021.8.1'
Versions generating the issue: 'fsspec-2021.9.0', 's3fs-0.5.1'
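A pin of roughly this shape in `setup.py` would rule out the broken combination (the exact bounds below are an assumption, not necessarily the merged diff):
```python
# sketch: hypothetical version bounds in setup.py's requirements
install_requires = [
    "fsspec>=2021.05.0,<2021.09.0",  # keep below the release that broke the CI doc build
]
```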
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2961/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2961",
"html_url": "https://github.com/huggingface/datasets/pull/2961",
"diff_url": "https://github.com/huggingface/datasets/pull/2961.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2961.patch",
"merged_at": 1632489487000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2960/comments | https://api.github.com/repos/huggingface/datasets/issues/2960/events | https://github.com/huggingface/datasets/pull/2960 | 1,006,222,850 | PR_kwDODunzps4sOl0Y | 2,960 | Support pandas 1.3 new `read_csv` parameters | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,472,644,000 | 1,632,482,551,000 | 1,632,482,550,000 | CONTRIBUTOR | null | Support two new arguments introduced in pandas v1.3.0:
- `encoding_errors`
- `on_bad_lines`
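For example, with a recent pandas these can be forwarded through the CSV loader (a usage sketch; the file name and values are illustrative):
```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files="data.csv",
    encoding_errors="replace",  # pandas >= 1.3
    on_bad_lines="skip",        # pandas >= 1.3, replaces error_bad_lines/warn_bad_lines
)
```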
`read_csv` reference: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_csv.html | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2960/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2960",
"html_url": "https://github.com/huggingface/datasets/pull/2960",
"diff_url": "https://github.com/huggingface/datasets/pull/2960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2960.patch",
"merged_at": 1632482550000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2958/comments | https://api.github.com/repos/huggingface/datasets/issues/2958/events | https://github.com/huggingface/datasets/pull/2958 | 1,005,144,601 | PR_kwDODunzps4sLTaB | 2,958 | Add security policy to the project | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,385,255,000 | 1,634,829,404,000 | 1,634,829,403,000 | MEMBER | null | Add security policy to the project, as recommended by GitHub: https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository
Close #2953. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2958/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2958/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2958",
"html_url": "https://github.com/huggingface/datasets/pull/2958",
"diff_url": "https://github.com/huggingface/datasets/pull/2958.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2958.patch",
"merged_at": 1634829403000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2955 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2955/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2955/comments | https://api.github.com/repos/huggingface/datasets/issues/2955/events | https://github.com/huggingface/datasets/pull/2955 | 1,003,999,469 | PR_kwDODunzps4sHuRu | 2,955 | Update legacy Python image for CI tests in Linux | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There is an exception when running `pip install .[tests]`:\r\n```\r\nProcessing /home/circleci/datasets\r\nCollecting numpy>=1.17 (from datasets==1.12.2.dev0)\r\n Downloading https://files.pythonhosted.org/packages/45/b2/6c7545bb7a38754d63048c7696804a0d947328125d81bf12beaa692c3ae3/numpy-1.19.5-cp36-cp36m-manylinux1_x86_64.whl (13.4MB)\r\n 100% |ββββββββββββββββββββββββββββββββ| 13.4MB 3.9MB/s eta 0:00:011\r\n\r\n...\r\n\r\nCollecting faiss-cpu (from datasets==1.12.2.dev0)\r\n Downloading https://files.pythonhosted.org/packages/87/91/bf8ea0d42733cbb04f98d3bf27808e4919ceb5ec71102e21119398a97237/faiss-cpu-1.7.1.post2.tar.gz (41kB)\r\n 100% |ββββββββββββββββββββββββββββββββ| 51kB 30.9MB/s ta 0:00:01\r\n Complete output from command python setup.py egg_info:\r\n Traceback (most recent call last):\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 154, in save_modules\r\n yield saved\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 195, in setup_context\r\n yield\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 250, in run_setup\r\n _execfile(setup_script, ns)\r\n File \"/home/circleci/.pyenv/versions/3.6.14/lib/python3.6/site-packages/setuptools/sandbox.py\", line 45, in _execfile\r\n exec(code, globals, locals)\r\n File \"/tmp/easy_install-1pop4blm/numpy-1.21.2/setup.py\", line 34, in <module>\r\n method can be invoked.\r\n RuntimeError: Python version >= 3.7 required.\r\n```\r\n\r\nApparently, `numpy-1.21.2` tries to be installed in the temporary directory `/tmp/easy_install-1pop4blm` instead of the downloaded `numpy-1.19.5` (requirement of `datasets`).\r\n\r\nThis is caused because `pip` downloads the `.tar.gz` (instead of the `.whl`) and tries to build it in a tmp dir."
] | 1,632,299,127,000 | 1,632,479,765,000 | 1,632,479,765,000 | MEMBER | null | Instead of legacy, use next-generation convenience images, built from the ground up with CI, efficiency, and determinism in mind. Here are some of the highlights:
- Faster spin-up time - In Docker terminology, these next-gen images will generally have fewer and smaller layers. Using these new images will lead to faster image downloads when a build starts, and a higher likelihood that the image is already cached on the host.
- Improved reliability and stability - The existing legacy convenience images are rebuilt practically every day with potential changes from upstream that we cannot always test fast enough. This leads to frequent breaking changes, which is not the best environment for stable, deterministic builds. Next-gen images will only be rebuilt for security and critical-bugs, leading to more stable and deterministic images.
More info: https://circleci.com/docs/2.0/circleci-images | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2955/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2955/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2955",
"html_url": "https://github.com/huggingface/datasets/pull/2955",
"diff_url": "https://github.com/huggingface/datasets/pull/2955.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2955.patch",
"merged_at": 1632479765000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2954/comments | https://api.github.com/repos/huggingface/datasets/issues/2954/events | https://github.com/huggingface/datasets/pull/2954 | 1,003,904,803 | PR_kwDODunzps4sHa8O | 2,954 | Run tests in parallel | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"There is a speed up in Windows machines:\r\n- From `13m 52s` to `11m 10s`\r\n\r\nIn Linux machines, some workers crash with error message:\r\n```\r\nOSError: [Errno 12] Cannot allocate memory\r\n```",
"There is also a speed up in Linux machines:\r\n- From `7m 30s` to `5m 32s`"
] | 1,632,294,044,000 | 1,632,812,151,000 | 1,632,812,151,000 | MEMBER | null | Run CI tests in parallel to speed up the test suite.
Speed up results:
- Linux: from `7m 30s` to `5m 32s`
- Windows: from `13m 52s` to `11m 10s`
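Presumably this relies on `pytest-xdist` (an assumption about the mechanism; the exact invocation below is illustrative):
```
pip install pytest-xdist
python -m pytest -n 2 ./tests/
```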
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2954/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2954/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2954",
"html_url": "https://github.com/huggingface/datasets/pull/2954",
"diff_url": "https://github.com/huggingface/datasets/pull/2954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2954.patch",
"merged_at": 1632812151000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2953 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2953/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2953/comments | https://api.github.com/repos/huggingface/datasets/issues/2953/events | https://github.com/huggingface/datasets/issues/2953 | 1,002,766,517 | I_kwDODunzps47xQC1 | 2,953 | Trying to get in touch regarding a security issue | {
"login": "JamieSlome",
"id": 55323451,
"node_id": "MDQ6VXNlcjU1MzIzNDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/55323451?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JamieSlome",
"html_url": "https://github.com/JamieSlome",
"followers_url": "https://api.github.com/users/JamieSlome/followers",
"following_url": "https://api.github.com/users/JamieSlome/following{/other_user}",
"gists_url": "https://api.github.com/users/JamieSlome/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JamieSlome/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JamieSlome/subscriptions",
"organizations_url": "https://api.github.com/users/JamieSlome/orgs",
"repos_url": "https://api.github.com/users/JamieSlome/repos",
"events_url": "https://api.github.com/users/JamieSlome/events{/privacy}",
"received_events_url": "https://api.github.com/users/JamieSlome/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @JamieSlome,\r\n\r\nThanks for reaching out. Yes, you are right: I'm opening a PR to add the `SECURITY.md` file and a contact method.\r\n\r\nIn the meantime, please feel free to report the security issue to: feedback@huggingface.co"
] | 1,632,239,893,000 | 1,634,829,403,000 | 1,634,829,403,000 | NONE | null | Hey there!
I'd like to report a security issue but cannot find contact instructions on your repository.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2953/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2953/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2952/comments | https://api.github.com/repos/huggingface/datasets/issues/2952/events | https://github.com/huggingface/datasets/pull/2952 | 1,002,704,096 | PR_kwDODunzps4sDU8S | 2,952 | Fix missing conda deps | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,237,781,000 | 1,632,285,599,000 | 1,632,238,244,000 | MEMBER | null | `aiohttp` was added as a dependency in #2662 but was missing for the conda build, which causes the 1.12.0 and 1.12.1 conda packages to fail.
Fix #2932. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2952/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2952/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2952",
"html_url": "https://github.com/huggingface/datasets/pull/2952",
"diff_url": "https://github.com/huggingface/datasets/pull/2952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2952.patch",
"merged_at": 1632238244000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2951/comments | https://api.github.com/repos/huggingface/datasets/issues/2951/events | https://github.com/huggingface/datasets/pull/2951 | 1,001,267,888 | PR_kwDODunzps4r-lGs | 2,951 | Dummy labels no longer on by default in `to_tf_dataset` | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR.",
"Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features"
] | 1,632,162,419,000 | 1,632,232,857,000 | 1,632,219,272,000 | CONTRIBUTOR | null | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2951/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2951/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2951",
"html_url": "https://github.com/huggingface/datasets/pull/2951",
"diff_url": "https://github.com/huggingface/datasets/pull/2951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2951.patch",
"merged_at": 1632219272000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2950 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2950/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2950/comments | https://api.github.com/repos/huggingface/datasets/issues/2950/events | https://github.com/huggingface/datasets/pull/2950 | 1,001,085,353 | PR_kwDODunzps4r-AKu | 2,950 | Fix fn kwargs in filter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,150,626,000 | 1,632,154,979,000 | 1,632,151,681,000 | MEMBER | null | #2836 broke the `fn_kwargs` parameter of `filter`, as mentioned in https://github.com/huggingface/datasets/issues/2927
I fixed that and added a test to make sure it doesn't happen again (for either `map` or `filter`).
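For reference, the fixed behavior in a minimal sketch:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc"]})
# fn_kwargs is forwarded to the predicate again, just like for map
ds = ds.filter(lambda example, min_len: len(example["text"]) >= min_len, fn_kwargs={"min_len": 2})
```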
Fix #2927 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2950/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2950/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2950",
"html_url": "https://github.com/huggingface/datasets/pull/2950",
"diff_url": "https://github.com/huggingface/datasets/pull/2950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2950.patch",
"merged_at": 1632151681000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2949/comments | https://api.github.com/repos/huggingface/datasets/issues/2949/events | https://github.com/huggingface/datasets/pull/2949 | 1,001,026,680 | PR_kwDODunzps4r90Pt | 2,949 | Introduce web and wiki config in triviaqa dataset | {
"login": "shirte",
"id": 1706443,
"node_id": "MDQ6VXNlcjE3MDY0NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1706443?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shirte",
"html_url": "https://github.com/shirte",
"followers_url": "https://api.github.com/users/shirte/followers",
"following_url": "https://api.github.com/users/shirte/following{/other_user}",
"gists_url": "https://api.github.com/users/shirte/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shirte/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shirte/subscriptions",
"organizations_url": "https://api.github.com/users/shirte/orgs",
"repos_url": "https://api.github.com/users/shirte/repos",
"events_url": "https://api.github.com/users/shirte/events{/privacy}",
"received_events_url": "https://api.github.com/users/shirte/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I just made the dummy data smaller :)\r\nOnce github refreshes the change I think we can merge !",
"Thank you so much for reviewing and accepting my pull request!! :)\r\n\r\nI created these rather large dummy data sets to cover all different cases for the row structure. E.g. in the web configuration, it's possible that a row has evidence from wikipedia (\"EntityPages\") and the web (\"SearchResults\"). But it also might happen that either EntityPages or SearchResults is empty. Probably, I will add this thought to the dataset description in the future.",
"Ok I see ! Yes feel free to mention it in the dataset card, this can be useful.\r\n\r\nFor the dummy data though we can keep the small ones, as the tests are mainly about testing the parsing from the dataset script rather than the actual content of the dataset."
] | 1,632,147,443,000 | 1,633,440,052,000 | 1,633,102,769,000 | CONTRIBUTOR | null | The TriviaQA paper suggests that the two subsets (Wikipedia and Web)
should be treated differently. There are also different leaderboards
for the two sets on CodaLab. For that reason, introduce additional builder configs in the trivia_qa dataset; a usage sketch follows.
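(The config names here are illustrative; see the dataset card for the exact ones.)
```python
from datasets import load_dataset

web = load_dataset("trivia_qa", "rc.web")         # hypothetical config name
wiki = load_dataset("trivia_qa", "rc.wikipedia")  # hypothetical config name
```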
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2949/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2949/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2949",
"html_url": "https://github.com/huggingface/datasets/pull/2949",
"diff_url": "https://github.com/huggingface/datasets/pull/2949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2949.patch",
"merged_at": 1633102769000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2948 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2948/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2948/comments | https://api.github.com/repos/huggingface/datasets/issues/2948/events | https://github.com/huggingface/datasets/pull/2948 | 1,000,844,077 | PR_kwDODunzps4r9PdV | 2,948 | Fix minor URL format in scitldr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,136,292,000 | 1,632,143,908,000 | 1,632,143,908,000 | MEMBER | null | While investigating issue #2918, I found these minor format issues in the URLs (when run on a Windows machine). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2948/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2948/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2948",
"html_url": "https://github.com/huggingface/datasets/pull/2948",
"diff_url": "https://github.com/huggingface/datasets/pull/2948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2948.patch",
"merged_at": 1632143908000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2947/comments | https://api.github.com/repos/huggingface/datasets/issues/2947/events | https://github.com/huggingface/datasets/pull/2947 | 1,000,798,338 | PR_kwDODunzps4r9GIP | 2,947 | Don't use old, incompatible cache for the new `filter` | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,133,139,000 | 1,632,155,109,000 | 1,632,145,382,000 | MEMBER | null | #2836 changed `Dataset.filter` and the resulting data that are stored in the cache are different and incompatible with the ones of the previous `filter` implementation.
However, the caching mechanism wasn't able to differentiate between the old and the new implementation of `filter` (only the method name was taken into account).
This is an issue because anyone who updates `datasets` and re-runs some code that uses `filter` would see an error, because the cache would try to load an incompatible `filter` result.
To fix this I added the notion of versioning for dataset transforms in the caching mechanism, and bumped the version of the `filter` implementation to 2.0.0.
This way the new `filter` outputs are now considered different from the old ones from the caching point of view.
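Conceptually, the cache fingerprint of a transform now also depends on a version string, so bumping the version invalidates old cache files; a simplified sketch of the idea (not the actual implementation):
```python
import hashlib

def transform_fingerprint(previous_fingerprint: str, transform_name: str, version: str) -> str:
    # hypothetical helper: including `version` makes filter-2.0.0 results
    # distinct from caches written by the old filter implementation
    payload = f"{previous_fingerprint}:{transform_name}:{version}".encode()
    return hashlib.md5(payload).hexdigest()
```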
This should fix #2943
cc @anton-l | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2947/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2947/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2947",
"html_url": "https://github.com/huggingface/datasets/pull/2947",
"diff_url": "https://github.com/huggingface/datasets/pull/2947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2947.patch",
"merged_at": 1632145381000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2946 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2946/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2946/comments | https://api.github.com/repos/huggingface/datasets/issues/2946/events | https://github.com/huggingface/datasets/pull/2946 | 1,000,754,824 | PR_kwDODunzps4r89f8 | 2,946 | Update meteor score from nltk update | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,632,130,126,000 | 1,632,130,559,000 | 1,632,130,559,000 | MEMBER | null | It looks like there were issues in NLTK in the way the METEOR score was computed.
A fix was added in NLTK at https://github.com/nltk/nltk/pull/2763, and therefore the scoring function no longer returns the same values.
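For reference, the metric is used like this (the exact score depends on the installed NLTK version because of the fix above):
```python
from datasets import load_metric

meteor = load_metric("meteor")
results = meteor.compute(
    predictions=["It is a guide to action which ensures that the military always obeys the commands of the party"],
    references=["It is a guide to action that ensures that the military will forever heed Party commands"],
)
print(round(results["meteor"], 4))
```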
I updated the score of the example in the docs accordingly. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2946/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2946/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2946",
"html_url": "https://github.com/huggingface/datasets/pull/2946",
"diff_url": "https://github.com/huggingface/datasets/pull/2946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2946.patch",
"merged_at": 1632130559000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2945/comments | https://api.github.com/repos/huggingface/datasets/issues/2945/events | https://github.com/huggingface/datasets/issues/2945 | 1,000,624,883 | I_kwDODunzps47pFLz | 2,945 | Protect master branch | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Cool, I think we can do both :)",
"@lhoestq now the 2 are implemented.\r\n\r\nPlease note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to the remote master branch; and eventually reverted without messing up the repo history)."
] | 1,632,120,421,000 | 1,632,139,287,000 | 1,632,139,216,000 | MEMBER | null | After accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.:
- 00cc036fea7c7745cfe722360036ed306796a3f2
- 13ae8c98602bbad8197de3b9b425f4c78f582af1
- ...
I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future:
- [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch
- Currently, simple merge commits are already disabled
- I propose to disable rebase merging as well
- ~~Protect the master branch from direct pushes (to avoid accidentally pushing of merge commits)~~
- ~~This protection would reject direct pushes to master branch~~
- ~~If so, for each release (when we need to commit directly to the master branch), we should previously disable the protection and re-enable it again after the release~~
- [x] Protect the master branch only from direct pushing of **merge commits**
- GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch).
- No need to disable/re-enable this protection on each release
The purpose of this Issue is to open a discussion about this problem and to agree on a solution. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2945/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2945/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2944/comments | https://api.github.com/repos/huggingface/datasets/issues/2944/events | https://github.com/huggingface/datasets/issues/2944 | 1,000,544,370 | I_kwDODunzps47oxhy | 2,944 | Add `remove_columns` to `IterableDataset ` | {
"login": "cccntu",
"id": 31893406,
"node_id": "MDQ6VXNlcjMxODkzNDA2",
"avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cccntu",
"html_url": "https://github.com/cccntu",
"followers_url": "https://api.github.com/users/cccntu/followers",
"following_url": "https://api.github.com/users/cccntu/following{/other_user}",
"gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cccntu/subscriptions",
"organizations_url": "https://api.github.com/users/cccntu/orgs",
"repos_url": "https://api.github.com/users/cccntu/repos",
"events_url": "https://api.github.com/users/cccntu/events{/privacy}",
"received_events_url": "https://api.github.com/users/cccntu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi ! Good idea :)\r\nIf you are interested in contributing, feel free to give it a try and open a Pull Request. Also let me know if I can help you with this or if you have questions"
] | 1,632,110,460,000 | 1,633,707,113,000 | 1,633,707,113,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
```python
from datasets import load_dataset
dataset = load_dataset("c4", "realnewslike", streaming=True, split="train")
dataset = dataset.remove_columns("url")
```
```
AttributeError: 'IterableDataset' object has no attribute 'remove_columns'
```
**Describe the solution you'd like**
It would be nice to have `.remove_columns()` to match the `Dataset` API.
**Describe alternatives you've considered**
This can be done with a single call to `.map()` (see the sketch below).
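A minimal sketch of the workaround, assuming that in streaming mode `map` replaces each example with the returned dict (older versions of the library behaved this way; if `map` merges outputs into the input instead, this would not drop the key):
```python
from datasets import load_dataset

dataset = load_dataset("c4", "realnewslike", streaming=True, split="train")

def drop_url(example):
    # keep every field except "url"; the returned dict becomes the new example
    return {k: v for k, v in example.items() if k != "url"}

dataset = dataset.map(drop_url)
print(next(iter(dataset)).keys())  # "url" should be gone
```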
I can try to help add this. 🤗 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2944/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2944/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2943/comments | https://api.github.com/repos/huggingface/datasets/issues/2943/events | https://github.com/huggingface/datasets/issues/2943 | 1,000,355,115 | I_kwDODunzps47oDUr | 2,943 | Backwards compatibility broken for cached datasets that use `.filter()` | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anton-l/subscriptions",
"organizations_url": "https://api.github.com/users/anton-l/orgs",
"repos_url": "https://api.github.com/users/anton-l/repos",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"received_events_url": "https://api.github.com/users/anton-l/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.\r\nTo avoid other users from having this issue we could make the caching differentiate the two, what do you think ?",
"If it's easy enough to implement, then yes please π But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.",
"Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR",
"I just merged a fix, let me know if you're still having this kind of issues :)\r\n\r\nWe'll do a release soon to make this fix available",
"Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !",
"Fixed by #2947."
] | 1,632,068,197,000 | 1,632,155,143,000 | 1,632,155,142,000 | CONTRIBUTOR | null | ## Describe the bug
After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with
`ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}`
Related feature: https://github.com/huggingface/datasets/pull/2836
:question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :)
## Workaround
Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`.
## Steps to reproduce the bug
1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists.
2. `pip install datasets==1.11.0` and run the following snippet:
```python
from datasets import load_dataset
ids = ["1272-141231-0000"]
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.filter(lambda x: x["id"] in ids)
```
3. `pip install datasets==1.12.1` and re-run the code again
## Expected results
Same result as with the previous `datasets` version.
## Actual results
```bash
Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1)
Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow
Traceback (most recent call last):
File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module>
ds = ds.filter(lambda x: x["id"] in ids)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter
indices = self.map(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map
return self._map_single(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper
out = func(self, *args, **kwargs)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single
return Dataset.from_file(cache_file_name, info=info, split=self.split)
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file
return cls(
File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__
self.info.features = self.info.features.reorder_fields_as(inferred_features)
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as
return Features(recursive_reorder(self, other))
File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder
raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position)
ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}
Process finished with exit code 1
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2943/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2943/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2942/comments | https://api.github.com/repos/huggingface/datasets/issues/2942/events | https://github.com/huggingface/datasets/pull/2942 | 1,000,309,765 | PR_kwDODunzps4r7tY6 | 2,942 | Add SEDE dataset | {
"login": "Hazoom",
"id": 13545154,
"node_id": "MDQ6VXNlcjEzNTQ1MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/13545154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hazoom",
"html_url": "https://github.com/Hazoom",
"followers_url": "https://api.github.com/users/Hazoom/followers",
"following_url": "https://api.github.com/users/Hazoom/following{/other_user}",
"gists_url": "https://api.github.com/users/Hazoom/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Hazoom/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Hazoom/subscriptions",
"organizations_url": "https://api.github.com/users/Hazoom/orgs",
"repos_url": "https://api.github.com/users/Hazoom/repos",
"events_url": "https://api.github.com/users/Hazoom/events{/privacy}",
"received_events_url": "https://api.github.com/users/Hazoom/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps.",
"Hi @Hazoom,\r\n\r\nYou were right: the non-passing test had nothing to do with this PR.\r\n\r\nUnfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n- your commits repeated two times\r\n- and commits which are not yours from the master branch\r\n\r\nIf you would like to clean your pull request, please make:\r\n```\r\ngit reset --hard 587b93a\r\ngit fetch upstream master\r\ngit merge upstream/master\r\ngit push --force origin sede\r\n```",
"> Hi @Hazoom,\r\n> \r\n> You were right: the non-passing test had nothing to do with this PR.\r\n> \r\n> Unfortunately, you did a git rebase (instead of a git merge), which is not recommended once you have already opened a pull request because you mess up your pull request history. You can see that your pull request now contains:\r\n> \r\n> * your commits repeated two times\r\n> * and commits which are not yours from the master branch\r\n> \r\n> If you would like to clean your pull request, please make:\r\n> \r\n> ```\r\n> git reset --hard 587b93a\r\n> git fetch upstream master\r\n> git merge upstream/master\r\n> git push --force origin sede\r\n> ```\r\n\r\nThanks @albertvillanova ",
"> Nice! Just one final request before approving your pull request:\r\n> \r\n> As you have updated the \"QuerySetId\" field data type, the size of the dataset is smaller now. You should regenerate the metadata. Please run:\r\n> \r\n> ```\r\n> rm datasets/sede/dataset_infos.json\r\n> datasets-cli test datasets/sede --save_infos --all_configs\r\n> ```\r\n\r\n@albertvillanova Good catch, just fixed it."
] | 1,632,057,084,000 | 1,632,479,995,000 | 1,632,479,994,000 | CONTRIBUTOR | null | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions on how to add a dataset and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2942/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2942/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2942",
"html_url": "https://github.com/huggingface/datasets/pull/2942",
"diff_url": "https://github.com/huggingface/datasets/pull/2942.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2942.patch",
"merged_at": 1632479994000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2940 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2940/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2940/comments | https://api.github.com/repos/huggingface/datasets/issues/2940/events | https://github.com/huggingface/datasets/pull/2940 | 999,680,796 | PR_kwDODunzps4r6EUF | 2,940 | add swedish_medical_ner dataset | {
"login": "bwang482",
"id": 6764450,
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bwang482",
"html_url": "https://github.com/bwang482",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"repos_url": "https://api.github.com/users/bwang482/repos",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,908,985,000 | 1,633,436,014,000 | 1,633,436,013,000 | CONTRIBUTOR | null | Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2940/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2940",
"html_url": "https://github.com/huggingface/datasets/pull/2940",
"diff_url": "https://github.com/huggingface/datasets/pull/2940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2940.patch",
"merged_at": 1633436013000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2939/comments | https://api.github.com/repos/huggingface/datasets/issues/2939/events | https://github.com/huggingface/datasets/pull/2939 | 999,639,630 | PR_kwDODunzps4r58Gu | 2,939 | MENYO-20k repo has moved, updating URL | {
"login": "cdleong",
"id": 4109253,
"node_id": "MDQ6VXNlcjQxMDkyNTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cdleong",
"html_url": "https://github.com/cdleong",
"followers_url": "https://api.github.com/users/cdleong/followers",
"following_url": "https://api.github.com/users/cdleong/following{/other_user}",
"gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cdleong/subscriptions",
"organizations_url": "https://api.github.com/users/cdleong/orgs",
"repos_url": "https://api.github.com/users/cdleong/repos",
"events_url": "https://api.github.com/users/cdleong/events{/privacy}",
"received_events_url": "https://api.github.com/users/cdleong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,905,314,000 | 1,632,238,297,000 | 1,632,238,296,000 | CONTRIBUTOR | null | Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match.
https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2939/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2939",
"html_url": "https://github.com/huggingface/datasets/pull/2939",
"diff_url": "https://github.com/huggingface/datasets/pull/2939.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2939.patch",
"merged_at": 1632238296000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2938/comments | https://api.github.com/repos/huggingface/datasets/issues/2938/events | https://github.com/huggingface/datasets/pull/2938 | 999,552,263 | PR_kwDODunzps4r5qwa | 2,938 | Take namespace into account in caching | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benchmark/scripts/serialize.py. That way, all the datasets are one-level deep directories",
"IIRC we enforce that no repo id or username can contain `___` (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\ncc @Pierrci ",
"> IIRC we enforce that no repo id or username can contain ___ (exactly 3 underscores) specifically for this reason, so you can use that string (that we use in other projects)\r\n\r\nout of curiosity: where is it enforced?",
"> where is it enforced?\r\n\r\nNowhere yet but we should :) feel free to track in internal tracker and/or implement, as this will be useful in the future",
"Thanks for the trick, I'm doing the change :)\r\nWe can use\r\n`~/.cache/huggingface/datasets/username___dataset_name` for the data\r\n`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files",
"Merging, though it will have to be integrated again the refactor at #2986"
] | 1,631,897,853,000 | 1,632,920,492,000 | 1,632,920,491,000 | MEMBER | null | Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset taking into account only the dataset name and ignoring the username. Because of this, if a user later loads "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing.
I changed the dataset cache and module cache mechanism to include the username in the name of the cache directory that is used:
<s>
`~/.cache/huggingface/datasets/username/dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username/dataset_name` for the python files
</s>
EDIT: actually using three underscores:
`~/.cache/huggingface/datasets/username___dataset_name` for the data
`~/.cache/huggingface/modules/datasets_modules/datasets/username___dataset_name` for the python files
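A minimal sketch of the flattening (the helper name is mine, not the actual function in the PR):
```python
def flattened_cache_name(repo_id: str) -> str:
    # "username/dataset_name" -> "username___dataset_name"
    # "___" is safe as a separator because usernames and dataset names
    # are not allowed to contain three consecutive underscores
    return repo_id.replace("/", "___")

assert flattened_cache_name("username/dataset_name") == "username___dataset_name"
assert flattened_cache_name("dataset_name") == "dataset_name"  # canonical datasets unchanged
```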
This PR should fix the issue https://github.com/huggingface/datasets/issues/2842
cc @stas00 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2938/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2938/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2938",
"html_url": "https://github.com/huggingface/datasets/pull/2938",
"diff_url": "https://github.com/huggingface/datasets/pull/2938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2938.patch",
"merged_at": 1632920491000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2936/comments | https://api.github.com/repos/huggingface/datasets/issues/2936/events | https://github.com/huggingface/datasets/pull/2936 | 999,521,647 | PR_kwDODunzps4r5knb | 2,936 | Check that array is not Float as nan != nan | {
"login": "Iwontbecreative",
"id": 494951,
"node_id": "MDQ6VXNlcjQ5NDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iwontbecreative",
"html_url": "https://github.com/Iwontbecreative",
"followers_url": "https://api.github.com/users/Iwontbecreative/followers",
"following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}",
"gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions",
"organizations_url": "https://api.github.com/users/Iwontbecreative/orgs",
"repos_url": "https://api.github.com/users/Iwontbecreative/repos",
"events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iwontbecreative/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,895,401,000 | 1,632,217,145,000 | 1,632,217,144,000 | CONTRIBUTOR | null | The exception is meant to catch issues with StructArrays/ListArrays, but it also catches FloatArrays that contain NaN values, because `nan != nan`.
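A short illustration of why an equality check misfires on floats (plain Python, independent of PyArrow):
```python
import math

nan = float("nan")
print(nan != nan)        # True -- IEEE 754 NaN never compares equal, even to itself
print(math.isnan(nan))   # True -- the reliable way to test for NaN
```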
We now skip (`pass` on) FloatArrays, as we should not raise an exception for them. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2936/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2936",
"html_url": "https://github.com/huggingface/datasets/pull/2936",
"diff_url": "https://github.com/huggingface/datasets/pull/2936.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2936.patch",
"merged_at": 1632217144000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2935/comments | https://api.github.com/repos/huggingface/datasets/issues/2935/events | https://github.com/huggingface/datasets/pull/2935 | 999,518,469 | PR_kwDODunzps4r5j8B | 2,935 | Add Jigsaw unintended Bias | {
"login": "Iwontbecreative",
"id": 494951,
"node_id": "MDQ6VXNlcjQ5NDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/494951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Iwontbecreative",
"html_url": "https://github.com/Iwontbecreative",
"followers_url": "https://api.github.com/users/Iwontbecreative/followers",
"following_url": "https://api.github.com/users/Iwontbecreative/following{/other_user}",
"gists_url": "https://api.github.com/users/Iwontbecreative/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Iwontbecreative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Iwontbecreative/subscriptions",
"organizations_url": "https://api.github.com/users/Iwontbecreative/orgs",
"repos_url": "https://api.github.com/users/Iwontbecreative/repos",
"events_url": "https://api.github.com/users/Iwontbecreative/events{/privacy}",
"received_events_url": "https://api.github.com/users/Iwontbecreative/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix",
"@lhoestq implemented your changes, I think this might be ready for another look.",
"Thanks @lhoestq, implemented the changes, let me know if anything else pops up."
] | 1,631,895,151,000 | 1,632,480,112,000 | 1,632,480,112,000 | CONTRIBUTOR | null | Hi,
Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly, as it is needed for BigScience-related stuff.
This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2935/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2935",
"html_url": "https://github.com/huggingface/datasets/pull/2935",
"diff_url": "https://github.com/huggingface/datasets/pull/2935.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2935.patch",
"merged_at": 1632480112000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2934/comments | https://api.github.com/repos/huggingface/datasets/issues/2934/events | https://github.com/huggingface/datasets/issues/2934 | 999,477,413 | I_kwDODunzps47ktCl | 2,934 | to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon!",
"Thanks a lot for investigating !"
] | 1,631,892,413,000 | 1,634,115,803,000 | 1,634,115,803,000 | MEMBER | null | To reproduce:
```python
import datasets as ds
import weakref
import gc
d = ds.load_dataset("mnist", split="train")
ref = weakref.ref(d._data.table)
tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label")
del tfd, d
gc.collect()
assert ref() is None, "Error: there is at least one reference left"
```
This causes issues because the table holds a reference to an open arrow file that should be closed. So on Windows it's not possible to delete or move the arrow file afterwards.
Moreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this.
cc @Rocketknight1 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2934/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2933 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2933/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2933/comments | https://api.github.com/repos/huggingface/datasets/issues/2933/events | https://github.com/huggingface/datasets/pull/2933 | 999,392,566 | PR_kwDODunzps4r5MHs | 2,933 | Replace script_version with revision | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm also fine with the removal in 1.15"
] | 1,631,887,479,000 | 1,632,131,530,000 | 1,632,131,530,000 | MEMBER | null | As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without loading script (i.e., datasets only with raw data files).
This PR replaces the parameter name `script_version` with `revision`.
This way, we are also aligned with:
- Transformers: `AutoTokenizer.from_pretrained(..., revision=...)`
- Hub: `HfApi.dataset_info(..., revision=...)`, `HfApi.upload_file(..., revision=...)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2933/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2933/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2933",
"html_url": "https://github.com/huggingface/datasets/pull/2933",
"diff_url": "https://github.com/huggingface/datasets/pull/2933.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2933.patch",
"merged_at": 1632131530000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2932/comments | https://api.github.com/repos/huggingface/datasets/issues/2932/events | https://github.com/huggingface/datasets/issues/2932 | 999,317,750 | I_kwDODunzps47kGD2 | 2,932 | Conda build fails | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Why 1.9 ?\r\n\r\nhttps://anaconda.org/HuggingFace/datasets currently says 1.11",
"Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 "
] | 1,631,882,962,000 | 1,632,238,270,000 | 1,632,238,270,000 | MEMBER | null | ## Describe the bug
Current `datasets` version in conda is 1.9 instead of 1.12.
The build of the conda package fails.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2932/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2931/comments | https://api.github.com/repos/huggingface/datasets/issues/2931/events | https://github.com/huggingface/datasets/pull/2931 | 998,326,359 | PR_kwDODunzps4r1-JH | 2,931 | Fix bug in to_tf_dataset | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!"
] | 1,631,804,883,000 | 1,631,811,698,000 | 1,631,811,697,000 | CONTRIBUTOR | null | Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2931/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2931",
"html_url": "https://github.com/huggingface/datasets/pull/2931",
"diff_url": "https://github.com/huggingface/datasets/pull/2931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2931.patch",
"merged_at": 1631811697000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2930/comments | https://api.github.com/repos/huggingface/datasets/issues/2930/events | https://github.com/huggingface/datasets/issues/2930 | 998,154,311 | I_kwDODunzps47fqBH | 2,930 | Mutable columns argument breaks set_format | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "Rocketknight1",
"id": 12866554,
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rocketknight1",
"html_url": "https://github.com/Rocketknight1",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"type": "User",
"site_admin": false
},
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Pushed a fix to my branch #2731 "
] | 1,631,795,242,000 | 1,631,800,253,000 | 1,631,800,253,000 | CONTRIBUTOR | null | ## Describe the bug
If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("glue", "cola")
column_list = ["idx", "label"]
dataset.set_format("python", columns=column_list)
column_list[1] = "foo" # Change the list after we call `set_format`
dataset['train'][:4].keys()
```
## Expected results
```python
dict_keys(['idx', 'label'])
```
## Actual results
```python
dict_keys(['idx'])
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2930/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2929/comments | https://api.github.com/repos/huggingface/datasets/issues/2929/events | https://github.com/huggingface/datasets/pull/2929 | 997,960,024 | PR_kwDODunzps4r015C | 2,929 | Add regression test for null Sequence | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,782,713,000 | 1,631,867,039,000 | 1,631,867,039,000 | MEMBER | null | Relates to #2892 and #2900. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2929/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2929",
"html_url": "https://github.com/huggingface/datasets/pull/2929",
"diff_url": "https://github.com/huggingface/datasets/pull/2929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2929.patch",
"merged_at": 1631867039000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2928/comments | https://api.github.com/repos/huggingface/datasets/issues/2928/events | https://github.com/huggingface/datasets/pull/2928 | 997,941,506 | PR_kwDODunzps4r0yUb | 2,928 | Update BibTeX entry | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,781,560,000 | 1,631,795,734,000 | 1,631,795,734,000 | MEMBER | null | Update BibTeX entry. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2928/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2928/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2928",
"html_url": "https://github.com/huggingface/datasets/pull/2928",
"diff_url": "https://github.com/huggingface/datasets/pull/2928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2928.patch",
"merged_at": 1631795734000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2927/comments | https://api.github.com/repos/huggingface/datasets/issues/2927/events | https://github.com/huggingface/datasets/issues/2927 | 997,654,680 | I_kwDODunzps47dwCY | 2,927 | Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument | {
"login": "timothyjlaurent",
"id": 2000204,
"node_id": "MDQ6VXNlcjIwMDAyMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timothyjlaurent",
"html_url": "https://github.com/timothyjlaurent",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",
"gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}",
"starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions",
"organizations_url": "https://api.github.com/users/timothyjlaurent/orgs",
"repos_url": "https://api.github.com/users/timothyjlaurent/repos",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"received_events_url": "https://api.github.com/users/timothyjlaurent/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, I'm looking into it :)",
"Fixed by #2950."
] | 1,631,754,842,000 | 1,632,155,002,000 | 1,632,155,001,000 | NONE | null | ## Describe the bug
Upgrading to 1.12 caused the `dataset.filter` call to fail with
> get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels
## Steps to reproduce the bug
```python
def filter_good_rows(
    ex: Dict,
    valid_rel_labels: Set[str],
    valid_ner_labels: Set[str],
    tokenizer: PreTrainedTokenizerFast,
) -> bool:
    """Get the good rows"""
    encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer)
    ex["encoding"] = encoding
    for relation in ex["relations"]:
        if not is_valid_relation(relation, valid_rel_labels):
            return False
    for span in ex["spans"]:
        if not is_valid_span(span, valid_ner_labels, encoding):
            return False
    return True


def get_dataset():
    loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py")
    ds = load_dataset(
        loader_path,
        name="prodigy-dataset",
        data_files=sorted(file_paths),
        cache_dir=cache_dir,
    )["train"]
    valid_ner_labels = set(vocab.ner_category)
    valid_relations = set(vocab.relation_types.keys())
    ds = ds.filter(
        filter_good_rows,
        fn_kwargs=dict(
            valid_rel_labels=valid_relations,
            valid_ner_labels=valid_ner_labels,
            tokenizer=vocab.tokenizer,
        ),
        keep_in_memory=True,
        num_proc=num_proc,
    )
```
`ds` is a `DatasetDict` produced by a jsonl dataset.
This runs fine on 1.11 but fails on 1.12.
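For reference, a minimal self-contained repro (a sketch distilled from the pipeline above; the column name, predicate, and `min_len` keyword are illustrative stand-ins, not from the original project):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc"]})

def keep_long(example, min_len):
    """Toy predicate standing in for filter_good_rows."""
    return len(example["text"]) >= min_len

# Fine on datasets 1.11; on 1.12 this raises
# TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'min_len'
filtered = ds.filter(keep_long, fn_kwargs={"min_len": 2})
```
The failure appears to occur whenever `fn_kwargs` is passed to `filter`, regardless of `num_proc` or `keep_in_memory`.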
**Stack trace:** see *Actual results* below.
## Expected results
I expect `filter` in `datasets` 1.12 to filter the dataset without raising an error, as it does in 1.11.
## Actual results
```
tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl
ds = ds.filter(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter
indices = self.map(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map
return self._map_single(
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper
out = func(self, *args, **kwargs)
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single
batch = apply_function_on_filtered_inputs(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...}
indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0
def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0):
"""Utility to apply the function on a selection of columns."""
nonlocal update_data
fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
if offset == 0:
effective_indices = indices
else:
effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
processed_inputs = (
> function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
)
E TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels'
../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Mac
- Python version: 3.8.9
- PyArrow version: pyarrow==5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2927/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2925/comments | https://api.github.com/repos/huggingface/datasets/issues/2925/events | https://github.com/huggingface/datasets/pull/2925 | 997,407,034 | PR_kwDODunzps4rzJ9s | 2,925 | Add tutorial for no-code dataset upload | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [
"Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\ndataset = load_dataset(\"stevhliu/demo\", data_files=data_files)\r\nprint(dataset[\"train\"][0])\r\n```",
"Perfect, feel free to mark this PR ready for review :)\r\n\r\ncc @albertvillanova do you have any comment ? You can check the tutorial here:\r\nhttps://47389-250213286-gh.circle-artifacts.com/0/docs/_build/html/no_code_upload.html\r\n\r\nMaybe we can just add a list of supported file types:\r\n- csv\r\n- json\r\n- json lines\r\n- text\r\n- parquet",
"I just added a mention of the login for private datasets. Don't hesitate to edit or comment.\r\n\r\nOtherwise I think it's all good, feel free to merge it @stevhliu if you don't have other changes to make :)"
] | 1,631,732,082,000 | 1,632,765,115,000 | 1,632,765,115,000 | CONTRIBUTOR | null | This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dataset without accessing the terminal or knowing Git. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2925/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2925/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2925",
"html_url": "https://github.com/huggingface/datasets/pull/2925",
"diff_url": "https://github.com/huggingface/datasets/pull/2925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2925.patch",
"merged_at": 1632765115000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2924/comments | https://api.github.com/repos/huggingface/datasets/issues/2924/events | https://github.com/huggingface/datasets/issues/2924 | 997,378,113 | I_kwDODunzps47cshB | 2,924 | "File name too long" error for file locks | {
"login": "gar1t",
"id": 184949,
"node_id": "MDQ6VXNlcjE4NDk0OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/184949?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gar1t",
"html_url": "https://github.com/gar1t",
"followers_url": "https://api.github.com/users/gar1t/followers",
"following_url": "https://api.github.com/users/gar1t/following{/other_user}",
"gists_url": "https://api.github.com/users/gar1t/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gar1t/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gar1t/subscriptions",
"organizations_url": "https://api.github.com/users/gar1t/orgs",
"repos_url": "https://api.github.com/users/gar1t/repos",
"events_url": "https://api.github.com/users/gar1t/events{/privacy}",
"received_events_url": "https://api.github.com/users/gar1t/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note that the lock files we use always have smaller filenames than 255)\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135",
"Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info.",
"Snap, encountered when trying to run [this example from PyTorch Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html):\r\n\r\n```py\r\nimport torch\r\n\r\nimport flash\r\nfrom flash.audio import SpeechRecognition, SpeechRecognitionData\r\nfrom flash.core.data.utils import download_data\r\n\r\n# 1. Create the DataModule\r\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/timit_data.zip\", \"./data\")\r\n\r\ndatamodule = SpeechRecognitionData.from_json(\r\n input_fields=\"file\",\r\n target_fields=\"text\",\r\n train_file=\"data/timit/train.json\",\r\n test_file=\"data/timit/test.json\",\r\n)\r\n```\r\n\r\nGave this traceback:\r\n\r\n```py\r\nTraceback (most recent call last):\r\n File \"lf_ft.py\", line 10, in <module>\r\n datamodule = SpeechRecognitionData.from_json(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 1005, in from_json\r\n return cls.from_data_source(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 571, in from_data_source\r\n train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 307, in to_datasets\r\n train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 344, in generate_dataset\r\n data = load_data(data, mock_dataset)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py\", line 103, in load_data\r\n dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1599, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1457, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py\", line 285, in __init__\r\n with FileLock(lock_path):\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 272, in acquire\r\n self._acquire()\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 403, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nOSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'\r\n```\r\n\r\nMy home directory is encrypted, therefore the maximum length is 143 ([source 1](https://github.com/ray-project/ray/issues/1463#issuecomment-425674521), [source 2](https://stackoverflow.com/a/6571568/2668831))\r\n\r\nFrom what I've read I think the error is in reference to the file name (just the final part of the path) which is 145 chars long:\r\n\r\n```py\r\n>>> 
len(\"_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock\")\r\n145\r\n```\r\n\r\nI also have a file in this directory (i.e. whose length is not a problem):\r\n\r\n```py\r\n>>> len(\"_home_louis_.cache_huggingface_datasets_librispeech_asr_clean_2.1.0_468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1.lock\")\r\n137\r\n```",
"Perhaps this could be exposed as a config setting so you could change it manually?\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135\r\n\r\nRather than hard-code 255, default it to 255, and allow it to be changed, the same way is done for `datasets.config.IN_MEMORY_MAX_SIZE`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L171-L173\r\n\r\nIn fact there already appears to be an existing variable to do so:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L187\r\n\r\nIt's used here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/efe89edd36e4ffa562fc3eebaf07a5fec26e6dac/src/datasets/builder.py#L163-L165\r\n\r\nPerhaps it could be set based on a test (trying to create a 255 char length named lock file and seeing if it fails)",
"Just fixed it, sending a PR :smile:",
"Hi @lmmx @gar1t ,\r\n\r\nit would be helpful if you could run the following code and copy-paste the output here:\r\n```python\r\nimport datasets\r\nimport os\r\nos.statvfs(datasets.config.HF_DATASETS_CACHE)\r\n```",
"`os.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=240046344, f_bfree=96427610, f_bavail=84216487, f_files=61038592, f_ffree=58216027, f_favail=58216027, f_flag=4102, f_namemax=143)`",
"Hi @lmmx,\r\n\r\nThanks for providing the result of the command. I've opened a PR, and it would be great if you could verify that the fix works on your system. To install the version of the datasets with the fix, please run the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-2924\r\n```\r\n\r\nBtw, I saw your PR, and I appreciate your effort. However, my approach is a bit simpler for the end-user, so that's why I decided to fix the issue myself.",
"No problem Mario I didn't know that was where that value was recorded so I learnt something :smiley: I just wanted to get a local version working, of course you should implement whatever fix is best for HF. Yes can confirm this fixes it too. Thanks!"
] | 1,631,729,810,000 | 1,635,500,544,000 | 1,635,500,544,000 | NONE | null | ## Describe the bug
Getting the following error when calling `load_dataset("gar1t/test")`:
```
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Steps to reproduce the bug
Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4):
```python
from datasets import load_dataset
load_dataset("gar1t/test")
```
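A quick diagnostic sketch (based on the discussion in the comments above; `f_namemax` is the maximum filename length the filesystem reports) to check the limit of the filesystem backing the cache:
```python
import os
import datasets

# POSIX-only: ext4 typically reports 255, while eCryptfs-encrypted
# home directories often report 143
print(os.statvfs(datasets.config.HF_DATASETS_CACHE).f_namemax)
```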
## Expected results
Expect the function to return without an error.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset
builder_instance.download_and_prepare(
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare
self._save_info()
File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info
with FileLock(lock_path):
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__
self.acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire
self._acquire()
File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire
fd = os.open(self._lock_file, open_mode)
OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock'
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 5.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2924/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2924/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2922/comments | https://api.github.com/repos/huggingface/datasets/issues/2922/events | https://github.com/huggingface/datasets/pull/2922 | 997,332,662 | PR_kwDODunzps4ry6-s | 2,922 | Fix conversion of multidim arrays in list to arrow | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,726,496,000 | 1,631,726,572,000 | 1,631,726,505,000 | MEMBER | null | Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python list before instantiating arrow arrays to workaround this limitation.
However in #2361 we started to keep numpy arrays in order to keep their dtypes.
It works when we pass any multi-dim numpy array (the conversion to arrow has been added on our side), but not for lists of multi-dim numpy arrays.
In this PR I added two strategies:
- one that takes a list of multi-dim numpy arrays and returns an arrow array in an optimized way (the more common case)
- one that takes a list of possibly very nested data (lists, dicts, tuples) containing multi-dim arrays. This one is less optimized since it converts all the multi-dim numpy arrays into lists of 1-d arrays for compatibility with arrow. This strategy is simpler than just trying to create the arrow array from a possibly very nested data structure, but in the future we can improve it if needed.
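As a rough illustration of the Arrow constraint both strategies work around (a sketch for context, not the code added in this PR):
```python
import numpy as np
import pyarrow as pa

arr = np.zeros((2, 2))

# pa.array(arr) raises ArrowInvalid: Can only convert 1-dimensional array values.
# The same data passed as a list of 1-d rows converts fine:
pa.array(list(arr))  # -> list<double> array
```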
Fix https://github.com/huggingface/datasets/issues/2921 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2922/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2922",
"html_url": "https://github.com/huggingface/datasets/pull/2922",
"diff_url": "https://github.com/huggingface/datasets/pull/2922.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2922.patch",
"merged_at": 1631726505000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2921/comments | https://api.github.com/repos/huggingface/datasets/issues/2921/events | https://github.com/huggingface/datasets/issues/2921 | 997,325,424 | I_kwDODunzps47cfpw | 2,921 | Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values" | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,725,931,000 | 1,631,726,505,000 | 1,631,726,505,000 | MEMBER | null | This error has been introduced in https://github.com/huggingface/datasets/pull/2361
To reproduce:
```python
import numpy as np
from datasets import Dataset
d = Dataset.from_dict({"a": [np.zeros((2, 2))]})
```
raises
```python
Traceback (most recent call last):
File "playground/ttest.py", line 5, in <module>
d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch")
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 223, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__
out = pa.array(self.data, type=type)
File "pyarrow/array.pxi", line 306, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2921/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2920/comments | https://api.github.com/repos/huggingface/datasets/issues/2920/events | https://github.com/huggingface/datasets/pull/2920 | 997,323,014 | PR_kwDODunzps4ry4_u | 2,920 | Fix unwanted tqdm bar when accessing examples | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,631,725,751,000 | 1,631,726,304,000 | 1,631,726,304,000 | MEMBER | null | A change in #2814 added bad progress bars in `map_nested`. Now they're disabled by default
Fix #2919 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2920/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2920",
"html_url": "https://github.com/huggingface/datasets/pull/2920",
"diff_url": "https://github.com/huggingface/datasets/pull/2920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2920.patch",
"merged_at": 1631726303000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2919/comments | https://api.github.com/repos/huggingface/datasets/issues/2919/events | https://github.com/huggingface/datasets/issues/2919 | 997,127,487 | I_kwDODunzps47bvU_ | 2,919 | Unwanted progress bars when accessing examples | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"doing a patch release now :)"
] | 1,631,714,710,000 | 1,631,726,509,000 | 1,631,726,303,000 | MEMBER | null | When accessing examples from a dataset formatted for PyTorch, some progress bars appear:
```python
In [1]: import datasets as ds
In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch")
In [3]: d[0]
100%|████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s]
Out[3]: {'a': tensor(0)}
```
This is because the PyTorch formatter calls `map_nested`, which uses progress bars.
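A simplified toy of the mechanism (not the actual `map_nested` source): the formatter maps a conversion function over the nested fields of the example, and the iteration is wrapped in `tqdm`, so a bar appears on every access.
```python
from tqdm.auto import tqdm

def map_nested_toy(fn, data: dict) -> dict:
    # the tqdm wrapper here is what surfaces on each __getitem__
    return {key: fn(value) for key, value in tqdm(data.items())}
```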
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2919/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2919/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2918/comments | https://api.github.com/repos/huggingface/datasets/issues/2918/events | https://github.com/huggingface/datasets/issues/2918 | 997,063,347 | I_kwDODunzps47bfqz | 2,918 | `Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 3287858981,
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming",
"name": "streaming",
"color": "fef2c0",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...",
"Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```",
"Thanks for investigating @albertvillanova ! π€ "
] | 1,631,711,167,000 | 1,638,346,500,000 | 1,638,346,500,000 | CONTRIBUTOR | null | ## Describe the bug
Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`:
```python
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
cc @lhoestq
## Steps to reproduce the bug
```python
from datasets import load_dataset
iter_dset = iter(
load_dataset("scitldr", name="FullText", split="test", streaming=True)
)
next(iter_dset)
```
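The same content loads fine when fetched in a single request, which suggests the ranged requests used for streaming are what trigger the error — a workaround sketch (not an official fix):
```python
import json
import requests

url = "https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl"
# one plain GET instead of byte-range requests
rows = [json.loads(line) for line in requests.get(url).text.splitlines() if line.strip()]
print(len(rows))
```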
## Expected results
Returns the first sample of the dataset
## Actual results
Calling `__next__` crashes with the following Traceback:
```python
----> 1 next(dset_iter)
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
339
340 def __iter__(self):
--> 341 for key, example in self._iter():
342 if self.features:
343 # we encode the example for ClassLabel feature types for example
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self)
336 else:
337 ex_iterable = self._ex_iterable
--> 338 yield from ex_iterable
339
340 def __iter__(self):
~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self)
76
77 def __iter__(self):
---> 78 for key, example in self.generate_examples_fn(**self.kwargs):
79 yield key, example
80
~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split)
162
163 with open(filepath, encoding="utf-8") as f:
--> 164 for id_, row in enumerate(f):
165 data = json.loads(row)
166 if self.config.name == "AIC":
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length)
496 else:
497 length = min(self.size - self.loc, length)
--> 498 return super().read(length)
499
500 async def async_fetch_all(self):
~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length)
1481 # don't even bother calling fetch
1482 return b""
-> 1483 out = self.cache._fetch(self.loc, self.loc + length)
1484 self.loc += len(out)
1485 return out
~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end)
378 elif start < self.start:
379 if self.end - end > self.blocksize:
--> 380 self.cache = self.fetcher(start, bend)
381 self.start = start
382 else:
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs)
86 def wrapper(*args, **kwargs):
87 self = obj or args[0]
---> 88 return sync(self.loop, func, *args, **kwargs)
89
90 return wrapper
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs)
67 raise FSTimeoutError
68 if isinstance(result[0], BaseException):
---> 69 raise result[0]
70 return result[0]
71
~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout)
23 coro = asyncio.wait_for(coro, timeout=timeout)
24 try:
---> 25 result[0] = await coro
26 except Exception as ex:
27 result[0] = ex
~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end)
538 if r.status == 206:
539 # partial content, as expected
--> 540 out = await r.read()
541 elif "Content-Length" in r.headers:
542 cl = int(r.headers["Content-Length"])
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self)
1030 if self._body is None:
1031 try:
-> 1032 self._body = await self.content.read()
1033 for trace in self._traces:
1034 await trace.send_response_chunk_received(
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n)
342 async def read(self, n: int = -1) -> bytes:
343 if self._exception is not None:
--> 344 raise self._exception
345
346 # migration problem; with DataQueue you have to catch
ClientPayloadError: 400, message='Can not decode content-encoding: gzip'
```
## Environment info
- `datasets` version: 1.12.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.5
- PyArrow version: 2.0.0
- aiohttp version: 3.7.4.post0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2918/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2917/comments | https://api.github.com/repos/huggingface/datasets/issues/2917/events | https://github.com/huggingface/datasets/issues/2917 | 997,041,658 | I_kwDODunzps47baX6 | 2,917 | windows download abnormal | {
"login": "wei1826676931",
"id": 52347799,
"node_id": "MDQ6VXNlcjUyMzQ3Nzk5",
"avatar_url": "https://avatars.githubusercontent.com/u/52347799?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wei1826676931",
"html_url": "https://github.com/wei1826676931",
"followers_url": "https://api.github.com/users/wei1826676931/followers",
"following_url": "https://api.github.com/users/wei1826676931/following{/other_user}",
"gists_url": "https://api.github.com/users/wei1826676931/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wei1826676931/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wei1826676931/subscriptions",
"organizations_url": "https://api.github.com/users/wei1826676931/orgs",
"repos_url": "https://api.github.com/users/wei1826676931/repos",
"events_url": "https://api.github.com/users/wei1826676931/events{/privacy}",
"received_events_url": "https://api.github.com/users/wei1826676931/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used",
"It is indeed an agency problem, thank you very, very much",
"Let me know if you have other questions :)\r\n\r\nClosing this issue now"
] | 1,631,709,935,000 | 1,631,812,668,000 | 1,631,812,668,000 | NONE | null | ## Describe the bug
The script clearly exists (it is accessible from the browser), but downloading it fails on Windows. I then tried again on Linux, where it downloads normally. Why?
## Steps to reproduce the bug
Python 3.7 + Windows:

![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png)

```python
# Sample code to reproduce the bug
```
## Expected results
It can be downloaded normally.
## Actual results
It can't.
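For anyone hitting the same thing: per the comments above, the cause was a proxy configured in the browser but not for Python. A sketch of exporting it for the process (the proxy address is a placeholder; `requests`, which `datasets` uses for downloads, honors these variables):
```python
import os

# placeholder address: use your actual proxy host and port
os.environ["HTTP_PROXY"] = "http://127.0.0.1:7890"
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:7890"

from datasets import load_dataset
dataset = load_dataset("squad")  # any script download now goes through the proxy
```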
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Windows
- Python version: 3.7
- PyArrow version:
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2917/timeline | null | null | null | false |