url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4443/comments | https://api.github.com/repos/huggingface/datasets/issues/4443/events | https://github.com/huggingface/datasets/issues/4443 | 1,259,606,334 | I_kwDODunzps5LFBE- | 4,443 | Dataset Viewer issue for openclimatefix/nimrod-uk-1km | {
"login": "ZYMXIXI",
"id": 32382826,
"node_id": "MDQ6VXNlcjMyMzgyODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32382826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZYMXIXI",
"html_url": "https://github.com/ZYMXIXI",
"followers_url": "https://api.github.com/users/ZYMXIXI/followers",
"following_url": "https://api.github.com/users/ZYMXIXI/following{/other_user}",
"gists_url": "https://api.github.com/users/ZYMXIXI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZYMXIXI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZYMXIXI/subscriptions",
"organizations_url": "https://api.github.com/users/ZYMXIXI/orgs",
"repos_url": "https://api.github.com/users/ZYMXIXI/repos",
"events_url": "https://api.github.com/users/ZYMXIXI/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZYMXIXI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,654,244,236,000 | 1,654,244,237,000 | null | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4443/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4442/comments | https://api.github.com/repos/huggingface/datasets/issues/4442/events | https://github.com/huggingface/datasets/issues/4442 | 1,258,589,276 | I_kwDODunzps5LBIxc | 4,442 | Dataset Viewer issue for amazon_polarity | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks, looking at it"
] | 1,654,197,518,000 | 1,654,241,950,000 | null | MEMBER | null | ### Link
https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test
### Description
For some reason the train split is OK but the test split is not for this dataset:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py'
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4442/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4441/comments | https://api.github.com/repos/huggingface/datasets/issues/4441/events | https://github.com/huggingface/datasets/issues/4441 | 1,258,568,656 | I_kwDODunzps5LBDvQ | 4,441 | Dataset Viewer issue for aeslc | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,654,196,232,000 | 1,654,196,232,000 | null | MEMBER | null | ### Link
https://huggingface.co/datasets/aeslc
### Description
The dataset viewer can't find `dataset_infos.json` in its cache:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json'
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4441/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4440/comments | https://api.github.com/repos/huggingface/datasets/issues/4440/events | https://github.com/huggingface/datasets/pull/4440 | 1,258,494,469 | PR_kwDODunzps44_io_ | 4,440 | Update docs around audio and vision | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4440). All of your documentation changes will be reflected on that endpoint."
] | 1,654,191,723,000 | 1,654,192,190,000 | null | MEMBER | null | As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs.
Other changes include:
- Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's.
- Updated the native TF code for creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` call was redundant, and removing it fixed the error (please double-check me here!); a sketch follows below this list.
- Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect.
- Reverted to the code tabs for content that doesn't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text.
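For illustration, here is a minimal sketch (not the exact snippet from this PR, and the dataset name is just an example) of building a `tf.data.Dataset` from a dataset without a `.to_tensor()` call:
```python
import tensorflow as tf
from datasets import load_dataset

# Example dataset; any dataset whose columns are plain lists/arrays works.
ds = load_dataset("rotten_tomatoes", split="train")

# Build the tf.data.Dataset directly from the columns; no .to_tensor() needed.
tf_ds = tf.data.Dataset.from_tensor_slices({"text": ds["text"], "label": ds["label"]})

for batch in tf_ds.batch(8).take(1):
    print(batch["text"].shape, batch["label"].shape)
```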
Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4440/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4440",
"html_url": "https://github.com/huggingface/datasets/pull/4440",
"diff_url": "https://github.com/huggingface/datasets/pull/4440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4440.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4439/comments | https://api.github.com/repos/huggingface/datasets/issues/4439/events | https://github.com/huggingface/datasets/issues/4439 | 1,258,434,111 | I_kwDODunzps5LAi4_ | 4,439 | TIMIT won't load after manual download: Errors about files that don't exist | {
"login": "drscotthawley",
"id": 13925685,
"node_id": "MDQ6VXNlcjEzOTI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13925685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drscotthawley",
"html_url": "https://github.com/drscotthawley",
"followers_url": "https://api.github.com/users/drscotthawley/followers",
"following_url": "https://api.github.com/users/drscotthawley/following{/other_user}",
"gists_url": "https://api.github.com/users/drscotthawley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drscotthawley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drscotthawley/subscriptions",
"organizations_url": "https://api.github.com/users/drscotthawley/orgs",
"repos_url": "https://api.github.com/users/drscotthawley/repos",
"events_url": "https://api.github.com/users/drscotthawley/events{/privacy}",
"received_events_url": "https://api.github.com/users/drscotthawley/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436",
"Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and/or just wait for the next release.\r\n",
"I'm closing this issue then. Please, feel free to reopen it again if the problem persists."
] | 1,654,187,756,000 | 1,654,245,857,000 | 1,654,245,856,000 | NONE | null | ## Describe the bug
I get a message from HuggingFace that the dataset must be downloaded manually. From the URL provided in the message, I got to the UPenn page for manual download. (UPenn apparently wants $250 for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it looks for files that don't exist anywhere in the dataset: file names with lower-case letters like "**test*" (all the filenames in both my copies are uppercase), and only certain file extensions that exclude the .DOC files provided in TIMIT:
## Steps to reproduce the bug
```python
data = load_dataset('timit_asr', 'clean')['train']
```
## Expected results
The dataset should load with no errors.
## Actual results
This error message:
```
File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT dataset with "TEST" in them have ".DOC" extensions? ...I wonder how anyone was able to get this to work in the first place.
The files in the dataset look like the following:
```
│ PHONCODE.DOC
│ PROMPTS.TXT
│ SPKRINFO.TXT
│ SPKRSENT.TXT
│ TESTSET.DOC
```
...so why are these being excluded by the dataset loader?
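(A possible stopgap while this is unfixed: a sketch that lower-cases the file and directory names of the manual copy so they match the loader's glob patterns. The root path is illustrative, and this does not address the excluded .DOC extension.)
```python
import os

root = "/home/ubuntu/datasets/timit"  # path to the manually downloaded copy

# Walk bottom-up so files and subdirectories are renamed before their parents.
for dirpath, dirnames, filenames in os.walk(root, topdown=False):
    for name in filenames + dirnames:
        src = os.path.join(dirpath, name)
        dst = os.path.join(dirpath, name.lower())
        if src != dst:
            os.rename(src, dst)
```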
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4439/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4438/comments | https://api.github.com/repos/huggingface/datasets/issues/4438/events | https://github.com/huggingface/datasets/pull/4438 | 1,258,255,394 | PR_kwDODunzps44-vhC | 4,438 | Fix docstring of inspect_dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,179,670,000 | 1,654,188,055,000 | 1,654,187,547,000 | MEMBER | null | As pointed out by @sgugger:
- huggingface/doc-builder/issues/235 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4438/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4438",
"html_url": "https://github.com/huggingface/datasets/pull/4438",
"diff_url": "https://github.com/huggingface/datasets/pull/4438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4438.patch",
"merged_at": 1654187547000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4437/comments | https://api.github.com/repos/huggingface/datasets/issues/4437/events | https://github.com/huggingface/datasets/pull/4437 | 1,258,249,582 | PR_kwDODunzps44-uRW | 4,437 | Add missing columns to `blended_skill_talk` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4437). All of your documentation changes will be reflected on that endpoint."
] | 1,654,179,386,000 | 1,654,179,695,000 | null | CONTRIBUTOR | null | Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py).
Fix #4426 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4437/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4437",
"html_url": "https://github.com/huggingface/datasets/pull/4437",
"diff_url": "https://github.com/huggingface/datasets/pull/4437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4437.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4436/comments | https://api.github.com/repos/huggingface/datasets/issues/4436/events | https://github.com/huggingface/datasets/pull/4436 | 1,257,758,834 | PR_kwDODunzps449FsU | 4,436 | Fix directory names for LDC data in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,152,304,000 | 1,654,162,376,000 | 1,654,161,867,000 | MEMBER | null | Related to:
- #4422 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4436/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4436",
"html_url": "https://github.com/huggingface/datasets/pull/4436",
"diff_url": "https://github.com/huggingface/datasets/pull/4436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4436.patch",
"merged_at": 1654161867000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4435/comments | https://api.github.com/repos/huggingface/datasets/issues/4435/events | https://github.com/huggingface/datasets/issues/4435 | 1,257,496,552 | I_kwDODunzps5K89_o | 4,435 | Load a local cached dataset that has been modified | {
"login": "mihail911",
"id": 2789441,
"node_id": "MDQ6VXNlcjI3ODk0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihail911",
"html_url": "https://github.com/mihail911",
"followers_url": "https://api.github.com/users/mihail911/followers",
"following_url": "https://api.github.com/users/mihail911/following{/other_user}",
"gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihail911/subscriptions",
"organizations_url": "https://api.github.com/users/mihail911/orgs",
"repos_url": "https://api.github.com/users/mihail911/repos",
"events_url": "https://api.github.com/users/mihail911/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihail911/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.",
"Awesome, hvala Mario! This works. "
] | 1,654,134,709,000 | 1,654,214,366,000 | 1,654,214,358,000 | NONE | null | ## Describe the bug
I have loaded a dataset as follows:
```
d = load_dataset("emotion", split="validation")
```
Afterwards I make some modifications to the dataset via a `map` call:
```
d.map(some_update_func, cache_file_name=modified_dataset)
```
This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns:
```
modified_dataset
dataset_info.json
emotion-test.arrow
emotion-train.arrow
emotion-validation.arrow
```
as expected. However, when I try to load up the modified cached dataset via a call to
```
modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset")
```
it simply redownloads a new version of the dataset and dumps it to a new cache rather than loading the original modified dataset:
```
Using custom data configuration validation-cdbf51685638421b
Downloading and preparing dataset emotion/validation to ...
```
How am I supposed to load the original modified local cache copy of the dataset?
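(Following the suggestion in the comments, a minimal sketch, assuming `modified_dataset` is the full path of the Arrow file written by `map`:)
```python
from datasets import Dataset

# Load the Arrow cache file produced by `map(..., cache_file_name=...)` directly,
# bypassing load_dataset's download-and-prepare step.
modified = Dataset.from_file("/path/to/cache/modified_dataset")
```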
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4435/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4434/comments | https://api.github.com/repos/huggingface/datasets/issues/4434/events | https://github.com/huggingface/datasets/pull/4434 | 1,256,207,321 | PR_kwDODunzps443mAr | 4,434 | Fix dummy dataset generation script for handling nested types of _URLs | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,654,095,195,000 | 1,654,095,195,000 | null | CONTRIBUTOR | null | It seems that when a user specifies a nested `_URLS` structure in their dataset script, an error will be raised when generating the dummy dataset.
I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types.
Linked to issue #4428
PS: I am not sure whether my code fixes this issue in a proper way. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4434/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4434",
"html_url": "https://github.com/huggingface/datasets/pull/4434",
"diff_url": "https://github.com/huggingface/datasets/pull/4434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4434.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4433/comments | https://api.github.com/repos/huggingface/datasets/issues/4433/events | https://github.com/huggingface/datasets/pull/4433 | 1,255,830,758 | PR_kwDODunzps442P5L | 4,433 | Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4433). All of your documentation changes will be reflected on that endpoint."
] | 1,654,085,396,000 | 1,654,085,719,000 | null | CONTRIBUTOR | null | Fix #4348 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4433/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4433",
"html_url": "https://github.com/huggingface/datasets/pull/4433",
"diff_url": "https://github.com/huggingface/datasets/pull/4433.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4433.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4432/comments | https://api.github.com/repos/huggingface/datasets/issues/4432/events | https://github.com/huggingface/datasets/pull/4432 | 1,255,523,720 | PR_kwDODunzps441JmK | 4,432 | Fix builder docstring | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,076,730,000 | 1,654,191,827,000 | 1,654,191,315,000 | MEMBER | null | Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4432/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4432",
"html_url": "https://github.com/huggingface/datasets/pull/4432",
"diff_url": "https://github.com/huggingface/datasets/pull/4432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4432.patch",
"merged_at": 1654191315000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"These test errors are related to issue #4428 \r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4431). All of your documentation changes will be reflected on that endpoint.",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue."
] | 1,654,046,440,000 | 1,654,097,300,000 | null | CONTRIBUTOR | null | It seems that all tests are passed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4430/comments | https://api.github.com/repos/huggingface/datasets/issues/4430/events | https://github.com/huggingface/datasets/issues/4430 | 1,254,412,591 | I_kwDODunzps5KxNEv | 4,430 | Add ability to load newer, cleaner version of Multi-News | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.",
"@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a zip but as three files (train/dev/test). How is this usually handled in HF Datasets, should `_URL` be a dict with keys `train`, `val`, `test` perhaps?"
] | 1,654,030,844,000 | 1,654,192,931,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq).
Unfortunately I don't think you can just replace this old URL with the new one, otherwise this could lead to issues with reproducibility.
**Describe the solution you'd like**
Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues.
**Describe alternatives you've considered**
Replace the current URL, which points to the original version of the dataset, with the URL to the fixed version.
**Additional context**
Would be happy to make a PR for this, could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
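For reference, here is a rough sketch of the two mechanisms mentioned in the comments: pinning a loading-script revision at load time, and declaring per-split files as a dict inside the script. The revision value and URLs below are placeholders.
```python
from datasets import load_dataset

# Pin the loading script to a specific Git revision (tag, branch, or commit sha).
ds = load_dataset("multi_news", revision="main")

# Inside a loading script, per-split sources are commonly declared as a dict;
# dl_manager.download_and_extract(_URLS) then returns a dict with the same keys.
_URLS = {
    "train": "https://example.com/train.src",       # placeholder URLs
    "validation": "https://example.com/val.src",
    "test": "https://example.com/test.src",
}
```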
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4430/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4429/comments | https://api.github.com/repos/huggingface/datasets/issues/4429/events | https://github.com/huggingface/datasets/pull/4429 | 1,254,184,358 | PR_kwDODunzps44whxN | 4,429 | Update builder docstring for deprecated/added arguments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4429). All of your documentation changes will be reflected on that endpoint."
] | 1,654,018,645,000 | 1,654,092,911,000 | null | MEMBER | null | This PR updates the builder docstring with deprecated/added directives for arguments name/config_name.
Follow up of:
- #4414
- huggingface/doc-builder#233
First merge:
- #4432 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4429/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4429",
"html_url": "https://github.com/huggingface/datasets/pull/4429",
"diff_url": "https://github.com/huggingface/datasets/pull/4429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4429.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4428/comments | https://api.github.com/repos/huggingface/datasets/issues/4428/events | https://github.com/huggingface/datasets/issues/4428 | 1,254,092,818 | I_kwDODunzps5Kv_AS | 4,428 | Errors when building dummy data if you use nested _URLS | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,654,013,457,000 | 1,654,013,457,000 | null | CONTRIBUTOR | null | ## Describe the bug
When making dummy data with the `datasets-cli dummy_data` tool, an error will be raised if you use nested `_URLS` in your dataset script:
```
Traceback (most recent call last):
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module>
main()
File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
service.run()
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run
self._autogenerate_dummy_data(
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data
dataset_builder._split_generators(dl_manager)
File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators
data_dir = dl_manager.download_and_extract(urls)
File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract
dummy_output = self.mock_download_manager.download(url_or_urls)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download
return self.download_and_extract(data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract
return self.create_dummy_data_dict(dummy_file, data_url)
File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict
if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
TypeError: unhashable type: 'list'
```
## Steps to reproduce the bug
You can use my dataset script implemented here:
https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py
```python
datasets_cli dummy_data datasets/personal_dialog --auto_generate
```
You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54
to
```
"train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz"
```
before running the above script to avoid downloading the large training data.
## Expected results
The dummy data should be generated
## Actual results
An error is raised.
It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165
we only check whether the first item of `dummy_data_dict.values()` is a `str`. However, `dummy_data_dict.values()` may contain values of mixed types, e.g. `[str, list, list]`.
A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to
```python
if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
```
But I don't know if this kind of change may bring any side effects, since I am not sure about the detailed logic here.
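For illustration, a minimal `_URLS` shape of the kind that triggers this error (the keys and URLs are made up):
```python
_URLS = {
    "train": [  # a nested list of shards
        "https://example.com/train_part_1.jsonl.gz",
        "https://example.com/train_part_2.jsonl.gz",
    ],
    "validation": "https://example.com/dev.jsonl.gz",
}

# The resulting dummy_data_dict mixes str and list values, so
# set(dummy_data_dict.values()) raises TypeError: unhashable type: 'list'.
```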
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4428/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4427/comments | https://api.github.com/repos/huggingface/datasets/issues/4427/events | https://github.com/huggingface/datasets/pull/4427 | 1,253,959,313 | PR_kwDODunzps44vyGg | 4,427 | Add HF.co for PRs/Issues for specific datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,007,481,000 | 1,654,087,062,000 | 1,654,086,542,000 | MEMBER | null | As in https://github.com/huggingface/transformers/pull/17485, issues and PR for datasets under a namespace have to be on the HF Hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4427/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4427",
"html_url": "https://github.com/huggingface/datasets/pull/4427",
"diff_url": "https://github.com/huggingface/datasets/pull/4427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4427.patch",
"merged_at": 1654086542000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4426/comments | https://api.github.com/repos/huggingface/datasets/issues/4426/events | https://github.com/huggingface/datasets/issues/4426 | 1,253,887,311 | I_kwDODunzps5KvM1P | 4,426 | Add loading variable number of columns for different splits | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "
] | 1,654,004,416,000 | 1,654,180,135,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
The original dataset `blended_skill_talk` has different sets of columns for the different splits: the test/validation splits have an additional data column, `label_candidates`, that the train split doesn't have.
When loading such data, an exception occurs at `table.py:cast_table_to_schema` because of the mismatched columns.
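Until this is supported natively, a minimal workaround sketch could be to load the splits separately and pad the train split with the missing column (the empty-list default used here is an assumption):
```python
from datasets import load_dataset

# Load each split on its own so the schemas are not merged across splits:
train = load_dataset("blended_skill_talk", split="train")
valid = load_dataset("blended_skill_talk", split="validation")

# Pad the train split with an empty column so all splits share one schema:
if "label_candidates" not in train.column_names:
    train = train.add_column("label_candidates", [[] for _ in range(len(train))])
```
| {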
"url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4426/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4425/comments | https://api.github.com/repos/huggingface/datasets/issues/4425/events | https://github.com/huggingface/datasets/pull/4425 | 1,253,641,604 | PR_kwDODunzps44uuDq | 4,425 | Make extensions case-insensitive in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,991,804,000 | 1,654,092,930,000 | 1,654,092,411,000 | MEMBER | null | Related to #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4425/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4425",
"html_url": "https://github.com/huggingface/datasets/pull/4425",
"diff_url": "https://github.com/huggingface/datasets/pull/4425.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4425.patch",
"merged_at": 1654092411000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4424/comments | https://api.github.com/repos/huggingface/datasets/issues/4424/events | https://github.com/huggingface/datasets/pull/4424 | 1,253,542,488 | PR_kwDODunzps44uZBD | 4,424 | Fix DuplicatedKeysError in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,986,865,000 | 1,654,005,050,000 | 1,654,004,551,000 | MEMBER | null | Fix #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4424/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4424",
"html_url": "https://github.com/huggingface/datasets/pull/4424",
"diff_url": "https://github.com/huggingface/datasets/pull/4424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4424.patch",
"merged_at": 1654004551000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4423/comments | https://api.github.com/repos/huggingface/datasets/issues/4423/events | https://github.com/huggingface/datasets/pull/4423 | 1,253,326,023 | PR_kwDODunzps44trdP | 4,423 | Add new dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,653,972,307,000 | 1,653,972,307,000 | null | CONTRIBUTOR | null | Hi, I am adding a new dataset MMChat.
It seems that all tests have passed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4423/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4423",
"html_url": "https://github.com/huggingface/datasets/pull/4423",
"diff_url": "https://github.com/huggingface/datasets/pull/4423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4423.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4422/comments | https://api.github.com/repos/huggingface/datasets/issues/4422/events | https://github.com/huggingface/datasets/issues/4422 | 1,253,146,511 | I_kwDODunzps5KsX-P | 4,422 | Cannot load timit_asr data set | {
"login": "bhaddow",
"id": 992795,
"node_id": "MDQ6VXNlcjk5Mjc5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/992795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhaddow",
"html_url": "https://github.com/bhaddow",
"followers_url": "https://api.github.com/users/bhaddow/followers",
"following_url": "https://api.github.com/users/bhaddow/following{/other_user}",
"gists_url": "https://api.github.com/users/bhaddow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhaddow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhaddow/subscriptions",
"organizations_url": "https://api.github.com/users/bhaddow/orgs",
"repos_url": "https://api.github.com/users/bhaddow/repos",
"events_url": "https://api.github.com/users/bhaddow/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhaddow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.",
"Thanks for the quick fix!",
"@bhaddow we have also made a fix so that you don't have to convert to uppercase the file extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ",
"Hi @albertvillanova -It loads fine on a copy of the data from deepai - although I have to remove the copies of the .WAV files (with extension .WAV,wav). On a copy of the data that was obtained from the LDC, the glob still fails to find the files. The LDC copy looks like it was copied from CD, in 2004, so the structure may be different to a current download.",
"Ah, if I change the train/ and test/ directories to TRAIN/ and TEST/ then it works!",
"Thanks for your investigation and report, @bhaddow. I'm adding another fix for the TRAIN/train and TEST/test directory names."
] | 1,653,948,022,000 | 1,654,151,645,000 | 1,654,004,551,000 | NONE | null | ## Describe the bug
I am trying to load the timit_asr dataset. I have tried with a copy from the LDC and a copy from deepai. In both cases, loading fails with a "duplicate key" error. With the LDC version, I also have to convert all the file extensions to upper-case before I can load it at all.
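For reference, a rough sketch of the extension upper-casing step mentioned above (the dataset path is a placeholder):
```python
from pathlib import Path

root = Path("/path/to/dataset")
# Upper-case every file extension in place, e.g. SA1.wav -> SA1.WAV:
for path in root.rglob("*.*"):
    if path.is_file():
        path.rename(path.with_suffix(path.suffix.upper()))
```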
## Steps to reproduce the bug
```python
import datasets

timit = datasets.load_dataset("timit_asr", data_dir="/path/to/dataset")
```
## Expected results
The dataset should load without error. It worked for me before the LDC URL change.
## Actual results
```
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: SA1
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4422/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4421/comments | https://api.github.com/repos/huggingface/datasets/issues/4421/events | https://github.com/huggingface/datasets/pull/4421 | 1,253,059,467 | PR_kwDODunzps44szxR | 4,421 | Add extractor for bzip2-compressed files | {
"login": "asivokon",
"id": 2910707,
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asivokon",
"html_url": "https://github.com/asivokon",
"followers_url": "https://api.github.com/users/asivokon/followers",
"following_url": "https://api.github.com/users/asivokon/following{/other_user}",
"gists_url": "https://api.github.com/users/asivokon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asivokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asivokon/subscriptions",
"organizations_url": "https://api.github.com/users/asivokon/orgs",
"repos_url": "https://api.github.com/users/asivokon/repos",
"events_url": "https://api.github.com/users/asivokon/events{/privacy}",
"received_events_url": "https://api.github.com/users/asivokon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,653,938,380,000 | 1,653,938,380,000 | null | NONE | null | This change enables loading bzip2-compressed datasets, just like any other compressed dataset.
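A usage sketch of what this enables (the file name is assumed), mirroring how gzip-compressed data files are already handled:
```python
from datasets import load_dataset

# With the new extractor, .bz2 data files should be decompressed transparently:
ds = load_dataset("json", data_files="data/train.jsonl.bz2", split="train")
```
| {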
"url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4421/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4421",
"html_url": "https://github.com/huggingface/datasets/pull/4421",
"diff_url": "https://github.com/huggingface/datasets/pull/4421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4421.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4420/comments | https://api.github.com/repos/huggingface/datasets/issues/4420/events | https://github.com/huggingface/datasets/issues/4420 | 1,252,739,239 | I_kwDODunzps5Kq0in | 4,420 | Metric evaluation problems in multi-node, shared file system | {
"login": "gullabi",
"id": 40303490,
"node_id": "MDQ6VXNlcjQwMzAzNDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/40303490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gullabi",
"html_url": "https://github.com/gullabi",
"followers_url": "https://api.github.com/users/gullabi/followers",
"following_url": "https://api.github.com/users/gullabi/following{/other_user}",
"gists_url": "https://api.github.com/users/gullabi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gullabi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gullabi/subscriptions",
"organizations_url": "https://api.github.com/users/gullabi/orgs",
"repos_url": "https://api.github.com/users/gullabi/repos",
"events_url": "https://api.github.com/users/gullabi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gullabi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"If you call `metric.compute` in a distributed setup like yours, then `metric.compute` is called in each process. `metric.compute` first calls `metric.add_batch`, and it looks like your error appears at that stage.\r\n\r\nTo make sure that all the processes have started writing their predictions/references at the same time, each process waits for process 0 to lock `slurm-{world_size}-0.arrow.lock`. Process 0 locks this file when `metric.add_batch` is called, so here when `metric.compute` is called.\r\n\r\nTherefore your error can happen when process 0 takes too much time to call `metric.compute` compared to process 3 (>100 seconds by default). I haven't tried running your code but could it be the case ?\r\n\r\nI guess it could also happen if you run multiple times the same distributed job at the same time with the same `experiment_id` because they would collide.\r\n"
] | 1,653,917,045,000 | 1,653,922,893,000 | null | NONE | null | ## Describe the bug
Metric evaluation fails in a multi-node setup with a shared file system, because the master process cannot find the lock files from the other nodes. (This issue was originally mentioned in the transformers repo: https://github.com/huggingface/transformers/issues/17412)
## Steps to reproduce the bug
1. Clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py).
2. Set up the `venv` according to the requirements of the model file, plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`
3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71)
Specifically for the `datasets` side, in the distributed setup `load_metric` is called as:
```python
import os

from datasets import load_metric

# data_args comes from the training script's argument parsing
process_id = int(os.environ["RANK"])
num_process = int(os.environ["WORLD_SIZE"])
eval_metrics = {
    metric: load_metric(metric,
                        process_id=process_id,
                        num_process=num_process,
                        experiment_id="slurm")
    for metric in data_args.eval_metrics
}
```
## Expected results
The training should not fail due to an error in the `Metric.compute()` step.
## Actual results
For the test I am executing, the world size is 4, with 2 GPUs on each of 2 nodes. However, the process does not find the necessary lock files:
```
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module>
main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate
metric_key_prefix=metric_key_prefix,
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp>
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute
self.add_batch(**inputs)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch
self._init_writer()
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer
self._check_rendez_vous() # wait for master to be ready and to let everyone go
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous
) from None
ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist.
```
When I look at the cache directory, I can see all the lock files in principle:
```
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock
```
I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have been resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is a problem with how I am calling `load_metric`, or whether I need to make changes to the `.compute()` steps.
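If the root cause is the rendez-vous timing described in the comments above, one possible mitigation is to raise the synchronization timeout; this is only a sketch, and it assumes the `timeout` keyword argument is forwarded by `load_metric` to the underlying `Metric`:
```python
from datasets import load_metric

# Give slower ranks more than the default ~100 s to reach the rendez-vous
# (forwarding of `timeout` to Metric is an assumption here):
metric = load_metric(
    "wer",
    process_id=process_id,
    num_process=num_process,
    experiment_id="slurm",
    timeout=600,
)
```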
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
- Python version: 3.7.4
- PyArrow version: 7.0.0
- Pandas version: 1.3.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4420/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4419/comments | https://api.github.com/repos/huggingface/datasets/issues/4419/events | https://github.com/huggingface/datasets/issues/4419 | 1,252,652,896 | I_kwDODunzps5Kqfdg | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.",
"Hi @mariosasko, right! I'll update the issue title/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual"
] | 1,653,912,798,000 | 1,654,069,069,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
This is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` on tuples rather than `assertEqual`? `unittest` added that function in Python 3.1, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
An example of `assertEqual` on a tuple in the 🤗 `datasets` unit tests for an `ArrowDataset` can be found at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Gradually replace the `assertEqual` statements with `assertTupleEqual` wherever the assertion is done on a Python tuple, as we already do for Python lists with `assertListEqual` rather than `assertEqual`.
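For illustration, a minimal sketch of the proposed style (the shape value is made up):
```python
import unittest


class ShapeTest(unittest.TestCase):
    def test_shape(self):
        dset_shape = (30, 3)  # e.g. what ArrowDataset.shape returns
        # assertTupleEqual makes the intent explicit and fails early
        # if dset_shape is not actually a tuple:
        self.assertTupleEqual(dset_shape, (30, 3))


if __name__ == "__main__":
    unittest.main()
```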
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable; otherwise, if you consider this should stay as `assertEqual` rather than `assertTupleEqual`, feel free to close this issue! Thanks 🤗
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4419/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4418/comments | https://api.github.com/repos/huggingface/datasets/issues/4418/events | https://github.com/huggingface/datasets/pull/4418 | 1,252,506,268 | PR_kwDODunzps44q9pG | 4,418 | Add dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,905,440,000 | 1,653,922,698,000 | 1,653,922,698,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4418/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4418",
"html_url": "https://github.com/huggingface/datasets/pull/4418",
"diff_url": "https://github.com/huggingface/datasets/pull/4418.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4418.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4417/comments | https://api.github.com/repos/huggingface/datasets/issues/4417/events | https://github.com/huggingface/datasets/issues/4417 | 1,251,933,091 | I_kwDODunzps5Knvuj | 4,417 | how to convert a dict generator into a huggingface dataset. | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [
"@albertvillanova @lhoestq , could you please help me on this issue. ",
"Hi ! As mentioned on the [forum](https://discuss.huggingface.co/t/how-to-wrap-a-generator-with-hf-dataset/18464), the simplest for now would be to define a [dataset script](https://huggingface.co/docs/datasets/dataset_script) which can contain your generator. But we can also explore adding something like `ds = Dataset.from_iterable(seqio_dataset)`"
] | 1,653,841,707,000 | 1,653,919,239,000 | null | NONE | null | ### Link
_No response_
### Description
Hey there, I have used seqio to get a well-distributed mixture of samples from multiple datasets. However, the resulting output from seqio is a Python generator of dicts, which I cannot turn back into a Hugging Face dataset.
The generator contains all the samples needed for training the model, but I cannot convert it into a Hugging Face dataset.
The code looks like this:
```python
for ex in seqio_data:
    print(ex["text"])
```
I need to convert `seqio_data` (a generator) into a Hugging Face dataset.
The complete seqio code is given below:
```python
import functools

import seqio
import tensorflow as tf
import t5.data
from datasets import load_dataset
from t5.data import postprocessors
from t5.data import preprocessors
from t5.evaluation import metrics
from seqio import FunctionDataSource, utils

TaskRegistry = seqio.TaskRegistry


def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
    dataset = load_dataset(**dataset_params)
    if shuffle:
        if seed:
            dataset = dataset.shuffle(seed=seed)
        else:
            dataset = dataset.shuffle()
    while True:
        for item in dataset[str(split)]:
            yield item[column]


def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
    return tf.data.Dataset.from_generator(
        functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
        output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name)
    )


@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
    """Assign the value from the dataset to target_key in key_map"""
    return {**key_map, target_key: x}


dataset_name = 'oscar-corpus/OSCAR-2109'
subset = 'mr'
dataset_params = {"path": dataset_name, "language": subset, "use_auth_token": True}
dataset_shapes = None

TaskRegistry.add(
    "oscar_marathi_corpus",
    source=seqio.FunctionDataSource(
        dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
        splits=("train", "validation"),
        caching_permitted=False,
        num_input_examples=dataset_shapes,
    ),
    preprocessors=[
        functools.partial(
            target_to_key, key_map={
                "targets": None,
            }, target_key="targets")],
    output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},
    metric_fns=[]
)

dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset(
    sequence_length=None,
    split="train",
    shuffle=True,
    num_epochs=1,
    shard_info=seqio.ShardInfo(index=0, num_shards=10),
    use_cached=False,
    seed=42
)

for _, ex in zip(range(5), dataset):
    print(ex['targets'].numpy().decode())
```
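For reference, one rough workaround sketch is to materialize the stream into plain Python columns and build the dataset with `Dataset.from_dict`; the cap on the number of examples is needed because `gen_dataset` above loops forever:
```python
from datasets import Dataset

# Cap the infinite stream at a fixed number of examples:
texts = [
    ex["targets"].numpy().decode()
    for _, ex in zip(range(100_000), dataset)
]

hf_dataset = Dataset.from_dict({"text": texts})
```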
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4417/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4416/comments | https://api.github.com/repos/huggingface/datasets/issues/4416/events | https://github.com/huggingface/datasets/pull/4416 | 1,251,875,763 | PR_kwDODunzps44o7sF | 4,416 | Add LCCC dataset | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for your help @albertvillanova .\r\n\r\nI think I have fixed all the comments.\r\n\r\nPlease let me know if this PR need further modification ;)",
"@albertvillanova Thank you very much for your kind help.\r\nThese suggestions make the code looks more pythonic.\r\n\r\nI have commited these changes"
] | 1,653,827,239,000 | 1,654,161,759,000 | 1,654,161,226,000 | CONTRIBUTOR | null | Hi, I am trying to add a new dataset lccc.
All tests have passed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4416/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4416",
"html_url": "https://github.com/huggingface/datasets/pull/4416",
"diff_url": "https://github.com/huggingface/datasets/pull/4416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4416.patch",
"merged_at": 1654161226000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4415/comments | https://api.github.com/repos/huggingface/datasets/issues/4415/events | https://github.com/huggingface/datasets/pull/4415 | 1,251,002,981 | PR_kwDODunzps44mIJk | 4,415 | Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4415). All of your documentation changes will be reflected on that endpoint."
] | 1,653,671,022,000 | 1,654,001,099,000 | null | CONTRIBUTOR | null | Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error.
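The scenario this PR addresses, sketched below (the repo id is a placeholder, and passing `split` to `Dataset.push_to_hub` is assumed to match the current API):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes")
ds["train"].push_to_hub("user/my-dataset", split="train")
ds["test"].push_to_hub("user/my-dataset", split="test")

# With this change, dataset_infos.json on the Hub lists both splits, so
# load_dataset("user/my-dataset") passes the splits verification.
```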
TODO:
~~- [ ] handle token + `{Audio, Image}.embed_storage`~~
- [x] tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4415/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4415",
"html_url": "https://github.com/huggingface/datasets/pull/4415",
"diff_url": "https://github.com/huggingface/datasets/pull/4415.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4415.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4414/comments | https://api.github.com/repos/huggingface/datasets/issues/4414/events | https://github.com/huggingface/datasets/pull/4414 | 1,250,546,888 | PR_kwDODunzps44klhY | 4,414 | Rename DatasetBuilder config_name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,643,682,000 | 1,654,009,641,000 | 1,654,009,131,000 | MEMBER | null | This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that:
- it avoids confusion with the attribute `DatasetBuilder.name`, which is different (see the sketch after this list)
- it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name`
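To make the confusion concrete, a small sketch (the dataset and config names are chosen just for illustration):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("glue", "mrpc")
print(builder.name)         # "glue" -- the builder/dataset name
print(builder.config.name)  # "mrpc" -- the config, previously also passed as `name`
```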
Another, simpler possibility would be to rename it to just `config` instead.
Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public-facing parameter, but a private one related to the internals of our library.
It would have a major impact to rename it also in:
- load_dataset
- load_dataset_builder: although this could also be considered internal...
- in our CLI commands
Besides the naming of `name`, I also find the naming of `path` in `load_dataset` really confusing. IMHO, they should have a simpler and more precise meaning (currently, they are too vague). I would propose (maybe for the next major release):
```
load_dataset(dataset, config,...
```
instead of
```
load_dataset(path, name,...
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4414/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4414",
"html_url": "https://github.com/huggingface/datasets/pull/4414",
"diff_url": "https://github.com/huggingface/datasets/pull/4414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4414.patch",
"merged_at": 1654009131000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4413/comments | https://api.github.com/repos/huggingface/datasets/issues/4413/events | https://github.com/huggingface/datasets/issues/4413 | 1,250,259,822 | I_kwDODunzps5KhXNu | 4,413 | Dataset Viewer issue for ett | {
"login": "dgcnz",
"id": 24966039,
"node_id": "MDQ6VXNlcjI0OTY2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/24966039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgcnz",
"html_url": "https://github.com/dgcnz",
"followers_url": "https://api.github.com/users/dgcnz/followers",
"following_url": "https://api.github.com/users/dgcnz/following{/other_user}",
"gists_url": "https://api.github.com/users/dgcnz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgcnz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgcnz/subscriptions",
"organizations_url": "https://api.github.com/users/dgcnz/orgs",
"repos_url": "https://api.github.com/users/dgcnz/repos",
"events_url": "https://api.github.com/users/dgcnz/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgcnz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https://huggingface.co/datasets/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not responsive: Connection timed out\r\n\r\nCC: @severo ",
"I've just resent the refresh of the preview to the new endpoint."
] | 1,653,617,555,000 | 1,654,153,287,000 | null | NONE | null | ### Link
https://huggingface.co/datasets/ett
### Description
Timestamp is not JSON serializable.
```
Status code: 500
Exception: Status500Error
Message: Type is not JSON serializable: Timestamp
```
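For context, a minimal reproduction of the serialization failure outside the viewer (the timestamp value is made up):
```python
import json

import pandas as pd

ts = pd.Timestamp("2016-07-01 00:00:00")

# json.dumps({"date": ts}) raises:
# TypeError: Object of type Timestamp is not JSON serializable

# Converting to an ISO string first works:
print(json.dumps({"date": ts.isoformat()}))  # {"date": "2016-07-01T00:00:00"}
```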
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4413/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4412/comments | https://api.github.com/repos/huggingface/datasets/issues/4412/events | https://github.com/huggingface/datasets/pull/4412 | 1,249,490,179 | PR_kwDODunzps44hFvq | 4,412 | Skip hidden files/directories in data files resolution and `iter_files` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,567,028,000 | 1,654,089,175,000 | 1,654,088,656,000 | CONTRIBUTOR | null | Fix #4115 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4412/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4412",
"html_url": "https://github.com/huggingface/datasets/pull/4412",
"diff_url": "https://github.com/huggingface/datasets/pull/4412.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4412.patch",
"merged_at": 1654088656000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4411/comments | https://api.github.com/repos/huggingface/datasets/issues/4411/events | https://github.com/huggingface/datasets/pull/4411 | 1,249,462,390 | PR_kwDODunzps44g_yL | 4,411 | Update `_format_columns` in `remove_columns` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"🤗 This PR closes https://github.com/huggingface/datasets/issues/4398",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4411). All of your documentation changes will be reflected on that endpoint.",
"Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.",
"Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code 😩 ",
"Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:",
"Hi again @albertvillanova! Let me know if those tests are fine 🤗 ",
"Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. \r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push right to your PR branch: I would ask you next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests",
"Hi @albertvillanova, let me check those again! And regarding that checkbox I thought it was already checked so my bad there 😩 ",
"@albertvillanova again it seems that the tests were not automatically triggered, but I tested those locally and now they work, as previously those were failing as I created an assertion as `self.assertEqual` over an empty list that was compared as `None` while the value was `[]` so I updated it to be `self.assertListEqual` and changed the comparison value to `[]`.",
"@lhoestq any idea why the CI is not triggered?",
"@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n",
"You're right @albertvillanova I was indeed running the tests with `datasets==2.2.0` rather than with the branch version, I'll check it again! Sorry for the inconvenience...",
"> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regressions tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated or not right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore."
] | 1,653,565,206,000 | 1,654,173,226,000 | null | CONTRIBUTOR | null | As explained in #4398, when calling `dataset.add_faiss_index` after a certain sequence of `cast_column`, `map`, and `remove_columns` operations, it fails because it tries to look up columns that have already been removed.
After testing some possible fixes, setting the dataset format right after removing the columns seems to work fine, so I added a call to `.set_format` in the `remove_columns` function.
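For context, a minimal sketch of what the fix amounts to (illustrative only, not the exact upstream diff):
```python
from typing import List

def prune_format_columns(dataset, removed_columns: List[str]):
    # Illustrative sketch: after dropping columns, the cached list of
    # formatted columns must be pruned as well, otherwise later calls such
    # as `add_faiss_index` look up columns that no longer exist.
    if dataset._format_columns is not None:
        dataset._format_columns = [
            col for col in dataset._format_columns if col not in removed_columns
        ]
    return dataset
```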
Hope this helps! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4411/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4411",
"html_url": "https://github.com/huggingface/datasets/pull/4411",
"diff_url": "https://github.com/huggingface/datasets/pull/4411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4411.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4410/comments | https://api.github.com/repos/huggingface/datasets/issues/4410/events | https://github.com/huggingface/datasets/pull/4410 | 1,249,148,457 | PR_kwDODunzps44f_Td | 4,410 | Remove Google Drive URL in spider dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,545,855,000 | 1,653,547,722,000 | 1,653,547,212,000 | MEMBER | null | The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
Fix #4401. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4410/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4410",
"html_url": "https://github.com/huggingface/datasets/pull/4410",
"diff_url": "https://github.com/huggingface/datasets/pull/4410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4410.patch",
"merged_at": 1653547212000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4409/comments | https://api.github.com/repos/huggingface/datasets/issues/4409/events | https://github.com/huggingface/datasets/pull/4409 | 1,249,083,179 | PR_kwDODunzps44fxiH | 4,409 | Update: add using pcm bytes (#4323) | {
"login": "YooSungHyun",
"id": 34292279,
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YooSungHyun",
"html_url": "https://github.com/YooSungHyun",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files/bytes decoding.",
"Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)",
"But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.",
"How does it get the sampling rate of a PCM file then ? According to [SO](https://stackoverflow.com/a/57027667/17517845) it's not possible to infer it from the file alone",
"> Awesome thanks ! Could you also add tests in `tests/features/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests/features/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can i test test_audio.py? where is \"__main__\" func?\r\ndo you have some example or guideline?",
"> But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nthanks for comment!\r\n\r\nFirst of all, \"PCM file\" can not read alone to any audio library.\r\n\"PCM file\" has not any audio information header. (it just purely audio byte data. therefore, we don't have to encoding and decoding)\r\nbut, \"PCM file\" is audio extension, so we can use `datasets.Audio`\r\n\r\nif you want to read \"PCM file\" to audio file likely, it have to needs additional parameter. (channel, sampling_rate, else....)\r\nbut, in many situation, we only know sampling_rate for PCM\r\n\r\nand, if we want to use `datasets.Audio` for \"PCM file\", we must process encode_example.\r\ntherefore, i have to use sampling_rate for encoding for making wav-style byte. (we only know sampling_rate)\r\n\r\nIn my source code, I don't compare sampling rate(`datasets.Audio's self.sampling_rate` and `read pcm sampling_rate(value[\"sampling_rate\"])`) and checking mono\r\n@mariosasko ! do you want to process resampling and making mono? then i can modify my source\r\n"
] | 1,653,539,196,000 | 1,654,163,001,000 | null | NONE | null | First of all, please look at #4323.
Why can't I use {"path", "array", "sampling_rate"}?
Because `sf.write(format="wav")` and `sf.read(BytesIO)` change my PCM data values.
I think that's because WAV has a header while PCM does not.
Also, about variable naming: the PCM data is of type `bytes`, so I think the name "array" doesn't fit.
So I use the scipy and numpy libraries (which are Hugging Face dependencies).
Following @lhoestq's answer:
1. encode -> turn the sampling_rate and PCM bytes into WAV-style bytes (`scipy.io.wavfile.write` to bytes)
2. convert the bytes following the fairseq-style PCM audio read in [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py)
3. decode -> read with `wavfile.read`
This way my PCM bytes aren't corrupted into wrong float data, and other audio types (WAV) stay safe.
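For reference, here is a minimal sketch of the encode/decode round trip described above, assuming 16-bit little-endian mono PCM (the dtype and channel count are assumptions for illustration, not something prescribed by this PR):
```python
import io

import numpy as np
from scipy.io import wavfile

def pcm_bytes_to_wav_bytes(pcm_bytes: bytes, sampling_rate: int) -> bytes:
    # Interpret the raw PCM payload as 16-bit little-endian mono samples.
    samples = np.frombuffer(pcm_bytes, dtype="<i2")
    buffer = io.BytesIO()
    # Writing through scipy prepends a WAV header to the same samples.
    wavfile.write(buffer, sampling_rate, samples)
    return buffer.getvalue()

def wav_bytes_to_array(wav_bytes: bytes):
    # Decoding recovers the sampling rate from the header plus the samples.
    sampling_rate, array = wavfile.read(io.BytesIO(wav_bytes))
    return sampling_rate, array
```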
please check! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4409/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4409",
"html_url": "https://github.com/huggingface/datasets/pull/4409",
"diff_url": "https://github.com/huggingface/datasets/pull/4409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4409.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4408/comments | https://api.github.com/repos/huggingface/datasets/issues/4408/events | https://github.com/huggingface/datasets/pull/4408 | 1,248,687,574 | PR_kwDODunzps44ecNI | 4,408 | Update imagenet gate | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,510,739,000 | 1,653,511,511,000 | 1,653,511,007,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4408/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4408",
"html_url": "https://github.com/huggingface/datasets/pull/4408",
"diff_url": "https://github.com/huggingface/datasets/pull/4408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4408.patch",
"merged_at": 1653511007000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4407/comments | https://api.github.com/repos/huggingface/datasets/issues/4407/events | https://github.com/huggingface/datasets/issues/4407 | 1,248,671,778 | I_kwDODunzps5KbTgi | 4,407 | Dataset Viewer issue for conll2012_ontonotesv5 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo ",
"I've just sent the forcing of the refresh of the preview to the new endpoint."
] | 1,653,509,913,000 | 1,654,157,761,000 | null | NONE | null | ### Link
https://huggingface.co/datasets/conll2012_ontonotesv5
### Description
Dataset viewer outage.
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4407/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4406/comments | https://api.github.com/repos/huggingface/datasets/issues/4406/events | https://github.com/huggingface/datasets/pull/4406 | 1,248,626,622 | PR_kwDODunzps44ePLU | 4,406 | Improve language tag for PIAF dataset | {
"login": "lbourdois",
"id": 58078086,
"node_id": "MDQ6VXNlcjU4MDc4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbourdois",
"html_url": "https://github.com/lbourdois",
"followers_url": "https://api.github.com/users/lbourdois/followers",
"following_url": "https://api.github.com/users/lbourdois/following{/other_user}",
"gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions",
"organizations_url": "https://api.github.com/users/lbourdois/orgs",
"repos_url": "https://api.github.com/users/lbourdois/repos",
"events_url": "https://api.github.com/users/lbourdois/events{/privacy}",
"received_events_url": "https://api.github.com/users/lbourdois/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,507,715,000 | 1,653,663,083,000 | 1,653,663,083,000 | NONE | null | Hi,
As pointed out by @lhoestq in this discussion (https://huggingface.co/datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature, so you have to go through GitHub.
This modification should allow better referencing, since only the `xx` language tags are currently taken into account and not the `xx-xx` ones. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4406/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4406",
"html_url": "https://github.com/huggingface/datasets/pull/4406",
"diff_url": "https://github.com/huggingface/datasets/pull/4406.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4406.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4405/comments | https://api.github.com/repos/huggingface/datasets/issues/4405/events | https://github.com/huggingface/datasets/issues/4405 | 1,248,574,087 | I_kwDODunzps5Ka7qH | 4,405 | [TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform normally, I still wonder if there is a more elegant way?"
] | 1,653,505,003,000 | 1,653,506,271,000 | null | NONE | null | ## Describe the bug
I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features.
## Steps to reproduce the bug
```python
import os
from typing import (
List,
Dict,
)
from collections import (
defaultdict,
)
from dataclasses import (
dataclass,
)
from datasets import (
load_dataset,
)
@dataclass
class ConllConverter:
path: str
name: str
cache_dir: str
def __post_init__(
self,
):
self.dataset = load_dataset(
path=self.path,
name=self.name,
cache_dir=self.cache_dir,
)
def convert(
self,
):
class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature
# label_set = list(set([
# label.split("-")[1] if label != "O" else label for label in class_label.names
# ]))
def prepare_chunk(token, entity):
assert len(token) == len(entity)
# Sequence length
length = len(token)
# Variable used
entity_chunk = defaultdict(list)
idx = flag = 0
# While loop
while idx < length:
if entity[idx] == "O":
flag += 1
idx += 1
else:
iob_tp, lab_tp = entity[idx].split("-")
assert iob_tp == "B"
idx += 1
while idx < length and entity[idx].startswith("I-"):
idx += 1
entity_chunk[lab_tp].append(token[flag: idx])
flag = idx
entity_chunk = dict(entity_chunk)
# for label in label_set:
# if label != "O" and label not in entity_chunk.keys():
# entity_chunk[label] = None
return entity_chunk
def prepare_features(
batch: Dict[str, List],
) -> Dict[str, List]:
sentence = [
sent for doc_sent in batch["sentences"] for sent in doc_sent
]
feature = {
"sentence": list(),
}
for sent in sentence:
token = sent["words"]
entity = class_label.int2str(sent["named_entities"])
entity_chunk = prepare_chunk(token, entity)
sent_feat = {
"token": token,
"entity": entity,
"entity_chunk": entity_chunk,
}
feature["sentence"].append(sent_feat)
return feature
column_names = self.dataset.column_names["train"]
dataset = self.dataset.map(
function=prepare_features,
with_indices=False,
batched=True,
batch_size=3,
remove_columns=column_names,
num_proc=1,
)
dataset.save_to_disk(
dataset_dict_path=os.path.join("data", self.path, self.name)
)
if __name__ == "__main__":
converter = ConllConverter(
path="conll2012_ontonotesv5",
name="english_v4",
cache_dir="cache",
)
converter.convert()
```
## Expected results
I want to use the dataset to perform an NER task and to change the label list into a {Entity Type: list of spans} format.
## Actual results
<details>
<summary>Traceback</summary>
```python
Traceback (most recent call last):
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single
writer.write_batch(batch)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module>
converter.convert()
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert
dataset = self.dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map
transformed_shards[index] = async_result.get()
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
```
</details>
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Ubuntu 18.04
- Python version: 3.9.7
- PyArrow version: 7.0.0
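For what it's worth, the cast error stems from different batches producing `entity_chunk` structs with different key sets (e.g. a batch with no `PRODUCT` or `LANGUAGE` spans). A minimal sketch of one workaround, assuming a `label_set` list of all entity types is available (as in the commented-out lines of the script above):
```python
def prepare_chunk_with_all_labels(token, entity, label_set):
    # `prepare_chunk` is the helper defined in the reproduction script above.
    entity_chunk = prepare_chunk(token, entity)
    # Emit every label in every example so that all batches share a single
    # struct schema; the [[""]] placeholder mirrors the workaround from the comments.
    for label in label_set:
        if label != "O" and label not in entity_chunk:
            entity_chunk[label] = [[""]]
    return entity_chunk
```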
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4405/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4404/comments | https://api.github.com/repos/huggingface/datasets/issues/4404/events | https://github.com/huggingface/datasets/issues/4404 | 1,248,572,899 | I_kwDODunzps5Ka7Xj | 4,404 | Dataset should have a `.name` field | {
"login": "f4hy",
"id": 36440,
"node_id": "MDQ6VXNlcjM2NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f4hy",
"html_url": "https://github.com/f4hy",
"followers_url": "https://api.github.com/users/f4hy/followers",
"following_url": "https://api.github.com/users/f4hy/following{/other_user}",
"gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f4hy/subscriptions",
"organizations_url": "https://api.github.com/users/f4hy/orgs",
"repos_url": "https://api.github.com/users/f4hy/repos",
"events_url": "https://api.github.com/users/f4hy/events{/privacy}",
"received_events_url": "https://api.github.com/users/f4hy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the same unless it's manually updated after each op."
] | 1,653,504,968,000 | 1,653,570,601,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log results of things like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`
Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used.
**Describe the solution you'd like**
The DatasetInfo class should have a `name` field which is the name of the dataset. Then, if a given dataset evolves over time, its `version` can be updated, while different versions of the same dataset keep a unique `name`. The name could then be accessed as `dataset.name`.
**Describe alternatives you've considered**
For my own purposes I am considering making a `NamedDataset[Dataset]` wrapper where the subclass just has a `.name` field, as sketched below.
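A minimal sketch of that alternative (the wrapper below is an illustration of the idea, not an existing class in `datasets`):
```python
from dataclasses import dataclass

from datasets import Dataset

@dataclass
class NamedDataset:
    # Pairs a dataset with a human-readable identifier for logging.
    name: str
    dataset: Dataset

    def __getattr__(self, attr):
        # Delegate everything else to the wrapped dataset.
        return getattr(self.dataset, attr)
```
A pipeline could then log `f"Evaluating on {ds.name}"` while still reaching the usual `Dataset` methods through delegation.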
**Additional context**
My guess is that most use cases do not work with more than one dataset in a given pipeline, so a name is not really needed. This has surprised me, though, as one of the advantages of a standard dataset interface is to be able to build pipelines which can be passed a dataset and to separate the responsibilities of dataset loading from the train or eval pipeline.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4404/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4403/comments | https://api.github.com/repos/huggingface/datasets/issues/4403/events | https://github.com/huggingface/datasets/pull/4403 | 1,248,390,134 | PR_kwDODunzps44dcpl | 4,403 | Uncomment logging deactivation for ArrowBasedBuilder | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,497,175,000 | 1,653,986,016,000 | 1,653,985,502,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4403/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4403",
"html_url": "https://github.com/huggingface/datasets/pull/4403",
"diff_url": "https://github.com/huggingface/datasets/pull/4403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4403.patch",
"merged_at": 1653985502000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4402/comments | https://api.github.com/repos/huggingface/datasets/issues/4402/events | https://github.com/huggingface/datasets/pull/4402 | 1,248,078,067 | PR_kwDODunzps44cdR5 | 4,402 | Skip identical files in `push_to_hub` instead of overwriting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,484,371,000 | 1,653,491,796,000 | 1,653,491,283,000 | CONTRIBUTOR | null | Skip identical files instead of overwriting them, to save bandwidth and to circumvent (user-side/server-side) errors that can arise when working with large datasets due to long-lasting HTTP connections, by allowing repeated calls to `push_to_hub` to resume an upload.
To be able to check if an upload can be resumed, this PR modifies the shard naming scheme from:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet
```
to:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-<SHARD_FINGERPRINT>.parquet
```
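A minimal sketch of how such a shard name is assembled (the fingerprint value below is made up for illustration):
```python
split, index, num_shards = "train", 0, 2
fingerprint = "8c5f2a1b9d3e4f60"  # illustrative value, not a real fingerprint
shard_name = f"data/{split}-{index:05d}-of-{num_shards:05d}-{fingerprint}.parquet"
# -> "data/train-00000-of-00002-8c5f2a1b9d3e4f60.parquet"
# Since the fingerprint is part of the name, a shard that is already on the
# Hub under the same name can be skipped instead of re-uploaded.
```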
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4402/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4402/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4402",
"html_url": "https://github.com/huggingface/datasets/pull/4402",
"diff_url": "https://github.com/huggingface/datasets/pull/4402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4402.patch",
"merged_at": 1653491283000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4401/comments | https://api.github.com/repos/huggingface/datasets/issues/4401/events | https://github.com/huggingface/datasets/issues/4401 | 1,247,695,921 | I_kwDODunzps5KXlQx | 4,401 | "NonMatchingChecksumError" when importing 'spider' dataset | {
"login": "OmarAlaaeldein",
"id": 81417777,
"node_id": "MDQ6VXNlcjgxNDE3Nzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/81417777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OmarAlaaeldein",
"html_url": "https://github.com/OmarAlaaeldein",
"followers_url": "https://api.github.com/users/OmarAlaaeldein/followers",
"following_url": "https://api.github.com/users/OmarAlaaeldein/following{/other_user}",
"gists_url": "https://api.github.com/users/OmarAlaaeldein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OmarAlaaeldein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OmarAlaaeldein/subscriptions",
"organizations_url": "https://api.github.com/users/OmarAlaaeldein/orgs",
"repos_url": "https://api.github.com/users/OmarAlaaeldein/repos",
"events_url": "https://api.github.com/users/OmarAlaaeldein/events{/privacy}",
"received_events_url": "https://api.github.com/users/OmarAlaaeldein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4069435429,
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive",
"name": "hosted-on-google-drive",
"color": "8B51EF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @OmarAlaaeldein.\r\n\r\nDatasets hosted at Google Drive give problems quite often due to a change in their service:\r\n- #3786 \r\n\r\nRelated to:\r\n- #3906\r\n\r\nI'm having a look.",
"We have made a Pull Request to replace the Google Drive URL. This fix will be accessible in our next `datasets` library release.\r\n\r\nIn the meantime, once the PR merged into master, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 1,653,464,707,000 | 1,653,547,212,000 | 1,653,547,212,000 | NONE | null | ## Describe the bug
When importing the 'spider' dataset [https://huggingface.co/datasets/spider], an error occurs.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('spider')
```
## Expected results
Dataset object
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Environment info
- `datasets` version: 2.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4401/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4400/comments | https://api.github.com/repos/huggingface/datasets/issues/4400/events | https://github.com/huggingface/datasets/issues/4400 | 1,247,404,237 | I_kwDODunzps5KWeDN | 4,400 | load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py. | {
"login": "cailun01",
"id": 20658907,
"node_id": "MDQ6VXNlcjIwNjU4OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20658907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cailun01",
"html_url": "https://github.com/cailun01",
"followers_url": "https://api.github.com/users/cailun01/followers",
"following_url": "https://api.github.com/users/cailun01/following{/other_user}",
"gists_url": "https://api.github.com/users/cailun01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cailun01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cailun01/subscriptions",
"organizations_url": "https://api.github.com/users/cailun01/orgs",
"repos_url": "https://api.github.com/users/cailun01/repos",
"events_url": "https://api.github.com/users/cailun01/events{/privacy}",
"received_events_url": "https://api.github.com/users/cailun01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,653,448,244,000 | 1,653,449,196,000 | 1,653,449,196,000 | NONE | null | ## Describe the bug
Could not reach wikitext-2-raw-v1.py
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikitext-2-raw-v1")
```
## Expected results
Download `wikitext-2-raw-v1` dataset successfully.
## Actual results
```
File "load_datasets.py", line 13, in <module>
load_dataset("wikitext-2-raw-v1")
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset
**config_kwargs,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder
data_files=data_files,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory
raise e1 from None
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory
dynamic_modules_path=dynamic_modules_path,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module
local_path = self.download_loading_script(revision)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path
download_desc=download_config.download_desc,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),))
```
I tried to download wikitext-2-raw-v1.py with Chrome and got:
![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: CentOS 7
- Python version: 3.6
- PyArrow version: 3.0.0
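As an aside, `wikitext-2-raw-v1` is a configuration of the `wikitext` dataset rather than a standalone dataset, which is why the library tries to fetch a `wikitext-2-raw-v1.py` script; the conventional invocation passes the config name separately:
```python
from datasets import load_dataset

# "wikitext" is the dataset name; "wikitext-2-raw-v1" is one of its configurations.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
```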
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4400/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4399/comments | https://api.github.com/repos/huggingface/datasets/issues/4399/events | https://github.com/huggingface/datasets/issues/4399 | 1,246,948,299 | I_kwDODunzps5KUuvL | 4,399 | LocalDatasetModuleFactoryWithoutScript extracts invalid builder name | {
"login": "apohllo",
"id": 40543,
"node_id": "MDQ6VXNlcjQwNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apohllo",
"html_url": "https://github.com/apohllo",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"repos_url": "https://api.github.com/users/apohllo/repos",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Ok, so\r\n```\r\nos.path.basename(\"/home/user/\")\r\n```\r\ngives `''` while \r\n```\r\nos.path.basename(\"/home/user\")\r\n```\r\ngives `user`. \r\nThe code should check if the last char is a slash.\r\n",
"The fix is:\r\n```\r\n\"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)\r\n```"
] | 1,653,415,381,000 | 1,653,416,036,000 | null | NONE | null | ## Describe the bug
Trying to load a local dataset raises an error indicating that the config builder has to have a name.
No error should be reported, since the call is completely valid.
## Steps to reproduce the bug
```python
load_dataset("./data/some-dataset/", name="some-name")
```
## Expected results
The dataset should be loaded.
## Actual results
```
Traceback (most recent call last):
File "train_lquad.py", line 19, in <module>
load(tokenize_target_function, tokenize_target_function, {}, tokenizer)
File "train_lquad.py", line 14, in load
dataset = load_dataset("./data/lquad/", name="lquad")
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset
builder_instance = load_dataset_builder(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__
self.config, self.config_id = self._create_builder_config(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config
raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}")
ValueError: BuilderConfig must have a name, got
```
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
The error is probably on line 795 in load.py:
```
builder_kwargs = {
"hash": hash,
"data_files": data_files,
"name": os.path.basename(self.path),
"base_path": self.path,
**builder_kwargs,
}
```
`os.path.basename` for a directory returns an empty string, rather than the name of the directory.
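A minimal sketch of the failure mode, plus one robust alternative (`os.path.normpath` is a suggestion here, not necessarily the project's chosen fix):
```python
import os

path = "./data/some-dataset/"
print(os.path.basename(path))                    # '' (the trailing slash yields an empty name)
print(os.path.basename(os.path.normpath(path)))  # 'some-dataset'
```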
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4399/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4398/comments | https://api.github.com/repos/huggingface/datasets/issues/4398/events | https://github.com/huggingface/datasets/issues/4398 | 1,246,666,749 | I_kwDODunzps5KTp_9 | 4,398 | Calling `cast_column`/`remove_columns` and a sequence of `map` operations ends up making `faiss` fail with `ValueError` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"It works if we either remove the `ds = ds.cast_column(\"id\", Value(\"int32\"))` line from the code above, or if instead calling `ds.remove_columns()` we remove the columns inside each mapping as `ds.map(..., remove_columns=[...])` instead of right after the mapping.\r\n\r\nBoth of those solutions seem to fix the issue, so the root cause of it may be around that. Sorry I cannot provide you more insights, in case I get to fix it I'll submit a PR, in the meanwhile the code that I'm using as a workaround is the following:\r\n\r\n```python\r\nfrom transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\nimport torch\r\n\r\ntorch.set_grad_enabled(False)\r\nctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\nctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n\r\nfrom datasets import load_dataset, Value\r\n\r\nds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\nds = ds.cast_column(\"id\", Value(\"int32\"))\r\nds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n\r\ndef generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n\r\nds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\nds.add_faiss_index(column=\"embeddings\")\r\n```",
"FYI the main reason I want to use `dataset.remove_columns` rather than the function inside `dataset.map` is because according to the 🤗 Datasets documentation, it's faster.\r\n\r\n\"🤗 Datasets also has a [Dataset.remove_columns()](https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/main_classes#datasets.Dataset.remove_columns) method that is functionally identical, but faster, because it doesn’t copy the data of the remaining columns.\"\r\n\r\nMore information at https://huggingface.co/docs/datasets/process#map",
"Here I'm presenting all the scenarios so that you can further investigate the issue:\r\n\r\n- ✅ `cast_column` -> `map` with `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ❌ `cast_column` -> `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ❌ `cast_column` -> `map` with `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])}, remove_columns=[\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ✅ `cast_column` -> `map` -> `remove_columns` -> `map` with `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n 
torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.cast_column(\"id\", Value(\"int32\"))\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings, remove_columns=[\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```\r\n\r\n- ✅ `map` -> `remove_columns` -> `map` -> `remove_columns` -> `add_faiss_index`\r\n\r\n\r\n ```python\r\n from transformers import DPRContextEncoder, DPRContextEncoderTokenizer\r\n import torch\r\n \r\n torch.set_grad_enabled(False)\r\n ctx_encoder = DPRContextEncoder.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained(\"facebook/dpr-ctx_encoder-single-nq-base\")\r\n \r\n from datasets import load_dataset, Value\r\n \r\n ds = load_dataset(\"csv\", data_files=[\"sample.csv\"], split=\"train\")\r\n ds = ds.map(lambda x: {\"inputs\": f\"{ctx_tokenizer.sep_token}\".join([\"title\", \"summary\"])})\r\n ds = ds.remove_columns([\"title\", \"summary\"])\r\n \r\n def generate_embeddings(x):\r\n return {\"embeddings\": ctx_encoder(**ctx_tokenizer(x[\"inputs\"], return_tensors=\"pt\"))[0][0].numpy()}\r\n \r\n ds = ds.map(generate_embeddings)\r\n ds = ds.remove_columns([\"inputs\"])\r\n ds.add_faiss_index(column=\"embeddings\")\r\n ```",
"So on, I've created #4411 so as to fix the bug with `remove_columns` under certain conditions before `add_faiss_index`, which means that the scenarios not working above are already working fine."
] | 1,653,403,294,000 | 1,653,565,666,000 | null | CONTRIBUTOR | null | First of all, sorry in advance for the unclear title, but this bug is weird to explain (at least for me), so I tried my best to summarize all the information in this issue.
## Describe the bug
Calling a certain combination of operations over a 🤗 `Dataset` and then trying to calculate the `faiss` index with `.add_faiss_index` ends up throwing an exception while trying to set the format back for a previously removed column. But this just happens under certain conditions... I'll present some scenarios below!
## Steps to reproduce the bug
Assuming the following dataset named `sample.csv` with some IMDb data:
```csv
id,title,summary
1877830,"The Batman","When a sadistic serial killer begins murdering key political figures in Gotham, Batman is forced to investigate the city's hidden corruption and question his family's involvement."
9419884,"Doctor Strange in the Multiverse of Madness","Doctor Strange teams up with a mysterious teenage girl from his dreams who can travel across multiverses, to battle multiple threats, including other-universe versions of himself, which threaten to wipe out millions across the multiverse. They seek help from Wanda the Scarlet Witch, Wong and others."
11138512,"The Northman","From visionary director Robert Eggers comes The Northman, an action-filled epic that follows a young Viking prince on his quest to avenge his father's murder."
1745960,"Top Gun: Maverick","After more than thirty years of service as one of the Navy's top aviators, Pete Mitchell is where he belongs, pushing the envelope as a courageous test pilot and dodging the advancement in rank that would ground him."
```
We'll be able to reproduce the bug using the following piece of code:
```python
# Sample code to reproduce the bug
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer
import torch
torch.set_grad_enabled(False)
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
from datasets import load_dataset, Value
ds = load_dataset("csv", data_files=["sample.csv"], split="train")
ds = ds.cast_column("id", Value("int32")) # from `int64` to `int32`
ds = ds.map(lambda x: {"inputs": f"{ctx_tokenizer.sep_token}".join(["title", "summary"])})  # note: this joins the literal strings; the values would be x["title"], x["summary"]
ds = ds.remove_columns(["title", "summary"])
def generate_embeddings(x):
return {"embeddings": ctx_encoder(**ctx_tokenizer(x["inputs"], return_tensors="pt"))[0][0].numpy()}
ds = ds.map(generate_embeddings)
ds = ds.remove_columns("inputs")
ds.add_faiss_index(column="embeddings") # It fails here!
```
The code above is an adaptation of https://huggingface.co/docs/datasets/faiss_es, for the sake of presenting the bug with a simple example.
## Expected results
Ideally, the `faiss` index should be calculated over the 🤗 `Dataset` and no exception should be triggered.
## Actual results
But what happens instead is that a `ValueError: Columns ['inputs'] not in the dataset. Current columns in the dataset: ['id', 'embeddings']` is raised, which makes no sense, as that column has been previously dropped.
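For completeness, a standalone sketch of a working pattern (dropping the column inside `map` rather than with `remove_columns` afterwards, as detailed in the comments on this issue); it uses dummy embeddings instead of the DPR model and assumes `faiss` is installed:
```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"id": [1, 2], "inputs": ["a", "b"]})
# Removing the column inside `map` avoids the spurious ValueError
ds = ds.map(
    lambda x: {"embeddings": np.random.rand(8).astype("float32")},
    remove_columns=["inputs"],
)
ds.add_faiss_index(column="embeddings")  # works as expected
```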
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4398/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4398/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4397/comments | https://api.github.com/repos/huggingface/datasets/issues/4397/events | https://github.com/huggingface/datasets/pull/4397 | 1,246,597,632 | PR_kwDODunzps44XcG3 | 4,397 | Fix dependency on dill version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,400,463,000 | 1,653,487,372,000 | 1,653,486,848,000 | MEMBER | null | We had to make a hotfix by pinning dill:
- #4380
because from version 0.3.5, our custom `save_function` pickling function was raising an exception:
- #4379
This PR fixes this by implementing our custom `save_function` differently depending on the version of dill.
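A minimal sketch of the version-dependent dispatch (the shape below is an assumption for illustration, not the exact code of this PR):
```python
import dill
from packaging import version

if version.parse(dill.__version__) < version.parse("0.3.5"):
    def save_function(pickler, obj):
        ...  # variant relying on internals removed in dill 0.3.5 (e.g. dill._dill.stack)
else:
    def save_function(pickler, obj):
        ...  # variant compatible with dill >= 0.3.5
```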
CC: @anivegesana
This PR requires the following to be merged first:
- [x] #4384
- so that a circular import is fixed
It is also convenient to merge first:
- [x] #4385 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4397/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4397",
"html_url": "https://github.com/huggingface/datasets/pull/4397",
"diff_url": "https://github.com/huggingface/datasets/pull/4397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4397.patch",
"merged_at": 1653486848000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4396/comments | https://api.github.com/repos/huggingface/datasets/issues/4396/events | https://github.com/huggingface/datasets/pull/4396 | 1,245,479,399 | PR_kwDODunzps44T0Di | 4,396 | Fix URL in gem dataset for totto config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,326,172,000 | 1,653,371,351,000 | 1,653,370,860,000 | MEMBER | null | As commented in:
- https://github.com/huggingface/datasets/issues/4386#issuecomment-1134902372
CC: @StevenTang1998 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4396/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4396/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4396",
"html_url": "https://github.com/huggingface/datasets/pull/4396",
"diff_url": "https://github.com/huggingface/datasets/pull/4396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4396.patch",
"merged_at": 1653370859000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4395/comments | https://api.github.com/repos/huggingface/datasets/issues/4395/events | https://github.com/huggingface/datasets/pull/4395 | 1,245,436,486 | PR_kwDODunzps44TrBA | 4,395 | Add Pascal VOC dataset | {
"login": "nateraw",
"id": 32437151,
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nateraw",
"html_url": "https://github.com/nateraw",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"repos_url": "https://api.github.com/users/nateraw/repos",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4395). All of your documentation changes will be reflected on that endpoint."
] | 1,653,323,645,000 | 1,654,008,500,000 | null | CONTRIBUTOR | null | This PR adds the Pascal VOC dataset in the same way it is added in TFDS. I believe we can iterate on this dataset and include more data in future versions, such as segmentation masks, but for now I think it is a good idea to just add it the same way as TFDS to get a solid first version out there. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4395/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4395",
"html_url": "https://github.com/huggingface/datasets/pull/4395",
"diff_url": "https://github.com/huggingface/datasets/pull/4395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4395.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4394/comments | https://api.github.com/repos/huggingface/datasets/issues/4394/events | https://github.com/huggingface/datasets/issues/4394 | 1,245,221,657 | I_kwDODunzps5KOJMZ | 4,394 | trainer became extremely slow after reload dataset by `load_from_disk` | {
"login": "conan1024hao",
"id": 50416856,
"node_id": "MDQ6VXNlcjUwNDE2ODU2",
"avatar_url": "https://avatars.githubusercontent.com/u/50416856?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/conan1024hao",
"html_url": "https://github.com/conan1024hao",
"followers_url": "https://api.github.com/users/conan1024hao/followers",
"following_url": "https://api.github.com/users/conan1024hao/following{/other_user}",
"gists_url": "https://api.github.com/users/conan1024hao/gists{/gist_id}",
"starred_url": "https://api.github.com/users/conan1024hao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/conan1024hao/subscriptions",
"organizations_url": "https://api.github.com/users/conan1024hao/orgs",
"repos_url": "https://api.github.com/users/conan1024hao/repos",
"events_url": "https://api.github.com/users/conan1024hao/events{/privacy}",
"received_events_url": "https://api.github.com/users/conan1024hao/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"I tried to make the dataset much more smaller (100000 rows) , then the speed became `33.88it/s` from`8.62s/it`. It's nearly 200 times... Do you have any idea? Thank you!",
"Similar issue: https://github.com/huggingface/transformers/issues/8818\r\n\r\nI changed `RandomSampler` to `SequentialSampler` in the `trainer.py`, but the speed didn't become faster.",
"I changed\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\"\r\n )\r\n```\r\nto\r\n```\r\ntokenized_datasets = load_from_disk(\r\n \"/pathto/dataset\", keep_in_memory=True\r\n )\r\n```\r\nand obtained normal speed. It's seems that the problem is on the os's speed limit."
] | 1,653,314,677,000 | 1,653,321,761,000 | null | NONE | null | ## Describe the bug
Due to a memory problem, I need to save my tokenized datasets locally on CPU and reload them for multi-GPU training. However, after I reload them with `load_from_disk` and start training, the speed is extremely slow: it says I need about 1500 hours with 8 A100 cards. Before this, I could run the whole script in one day with a single A100 card.
Since I am trying to pre-train a BERT model, **my dataset is very large (29058165 rows)**.
## Steps to reproduce the bug
```python
tokenized_datasets.save_to_disk(
"/pathto/dataset"
)
tokenized_datasets = load_from_disk(
"/pathto/dataset"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=tokenized_datasets["train"] if training_args.do_train else None,
eval_dataset=tokenized_datasets["validation"]
if training_args.do_eval
else None,
tokenizer=tokenizer,
data_collator=data_collator,
)
train_result = trainer.train(resume_from_checkpoint=checkpoint)
```
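For reference, a possible workaround (noted in the comments) that restores normal speed, assuming the dataset fits in RAM:
```python
from datasets import load_from_disk

# keep_in_memory=True loads the Arrow data into RAM instead of
# memory-mapping it from disk, sidestepping the disk I/O bottleneck
tokenized_datasets = load_from_disk("/pathto/dataset", keep_in_memory=True)
```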
## Expected results
Without the save and reload process, I only need about one day to run the whole script with one A100 card.
## Actual results
```
[INFO|trainer.py:1290] 2022-05-23 22:49:46,266 >> ***** Running training *****
[INFO|trainer.py:1291] 2022-05-23 22:49:46,266 >> Num examples = 29058165
[INFO|trainer.py:1292] 2022-05-23 22:49:46,266 >> Num Epochs = 5
[INFO|trainer.py:1293] 2022-05-23 22:49:46,266 >> Instantaneous batch size per device = 16
[INFO|trainer.py:1294] 2022-05-23 22:49:46,266 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1295] 2022-05-23 22:49:46,266 >> Gradient Accumulation steps = 2
[INFO|trainer.py:1296] 2022-05-23 22:49:46,266 >> Total optimization steps = 567540
0%| | 1/567540 [00:09<1544:49:04, 9.80s/it]
0%| | 2/567540 [00:17<1320:00:17, 8.37s/it]
0%| | 3/567540 [00:26<1393:10:17, 8.84s/it]
0%| | 4/567540 [00:34<1344:56:33, 8.53s/it]
0%| | 5/567540 [00:43<1359:36:12, 8.62s/it]
```
## Environment info
```
torch 1.11.0+cu113
torchaudio 0.11.0+cu113
torchvision 0.12.0+cu113
transformers 4.18.0
datasets 2.2.2
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4394/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4393/comments | https://api.github.com/repos/huggingface/datasets/issues/4393/events | https://github.com/huggingface/datasets/pull/4393 | 1,244,876,662 | PR_kwDODunzps44RxWN | 4,393 | Update CI deprecated legacy image | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,298,542,000 | 1,653,300,508,000 | 1,653,299,995,000 | MEMBER | null | Now our CI still uses a deprecated legacy image:
> You’re using a [deprecated Docker convenience image.](https://discuss.circleci.com/t/legacy-convenience-image-deprecation/41034) Upgrade to a next-gen Docker convenience image.
This PR updates to next-generation convenience image.
Related to:
- #2955 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4393/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4393",
"html_url": "https://github.com/huggingface/datasets/pull/4393",
"diff_url": "https://github.com/huggingface/datasets/pull/4393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4393.patch",
"merged_at": 1653299995000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4392/comments | https://api.github.com/repos/huggingface/datasets/issues/4392/events | https://github.com/huggingface/datasets/pull/4392 | 1,244,859,971 | PR_kwDODunzps44RtsX | 4,392 | remove int documentation from logging docs | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https://api.github.com/users/lvwerra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lvwerra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lvwerra/subscriptions",
"organizations_url": "https://api.github.com/users/lvwerra/orgs",
"repos_url": "https://api.github.com/users/lvwerra/repos",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"received_events_url": "https://api.github.com/users/lvwerra/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,297,895,000 | 1,653,319,015,000 | 1,653,318,512,000 | MEMBER | null | Removes the `int` documentation from the [logging section](https://huggingface.co/docs/datasets/package_reference/logging_methods#levels) of the docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4392/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4392",
"html_url": "https://github.com/huggingface/datasets/pull/4392",
"diff_url": "https://github.com/huggingface/datasets/pull/4392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4392.patch",
"merged_at": 1653318512000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4391 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4391/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4391/comments | https://api.github.com/repos/huggingface/datasets/issues/4391/events | https://github.com/huggingface/datasets/pull/4391 | 1,244,839,185 | PR_kwDODunzps44RpGv | 4,391 | Refactor column mappings for question answering datasets | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks.\r\n> \r\n> I have no visibility about this, but if you say it is more useful for AutoTrain this way...\r\n\r\nThanks for the review @albertvillanova ! Yes, I need some way to reconstruct the original column names with a period because that's how they appear after we flatten the nested columns. In any case, we can adjust this later if needed :)",
"Does that mean that we need to change the metadata?",
"> Does that mean that we need to change the metadata?\r\n\r\nYes, but this PR takes care of it :)",
"Oh good! thanks for the heads up!"
] | 1,653,297,194,000 | 1,653,397,020,000 | 1,653,396,528,000 | MEMBER | null | This PR tweaks the keys in the metadata that are used to define the column mapping for question answering datasets. This is needed in order to faithfully reconstruct column names like `answers.text` and `answers.answer_start` from the keys in AutoTrain.
As observed in https://github.com/huggingface/datasets/pull/4367 we cannot use periods `.` in the keys of the YAML tags, so a decision was made to use a flat mapping with underscores. For QA datasets, however, it's handy to be able to reconstruct the nesting -- hence this PR.
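For illustration, it is `Dataset.flatten()` that produces the period-separated names which need to be reconstructed:
```python
from datasets import Dataset

ds = Dataset.from_dict({"answers": [{"text": ["a"], "answer_start": [0]}]})
print(ds.flatten().column_names)  # ['answers.text', 'answers.answer_start']
```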
cc @sashavor | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4391/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4391/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4391",
"html_url": "https://github.com/huggingface/datasets/pull/4391",
"diff_url": "https://github.com/huggingface/datasets/pull/4391.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4391.patch",
"merged_at": 1653396528000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4390/comments | https://api.github.com/repos/huggingface/datasets/issues/4390/events | https://github.com/huggingface/datasets/pull/4390 | 1,244,835,877 | PR_kwDODunzps44RoXs | 4,390 | Fix metadata validation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,297,080,000 | 1,654,075,672,000 | 1,654,075,165,000 | MEMBER | null | Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
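A minimal sketch of a backward-compatible helper (illustrative only; not necessarily the exact code of this PR):
```python
import typing

if hasattr(typing, "get_args"):  # Python >= 3.8
    def _get_type_args(tp):
        return typing.get_args(tp)
else:  # older Pythons: read the __args__ attribute directly
    def _get_type_args(tp):
        return getattr(tp, "__args__", ())

assert _get_type_args(typing.List[int]) == (int,)
```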
This PR implements a fix for Python >=3.8 while maintaining backward compatibility. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4390/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4390/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4390",
"html_url": "https://github.com/huggingface/datasets/pull/4390",
"diff_url": "https://github.com/huggingface/datasets/pull/4390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4390.patch",
"merged_at": 1654075165000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4389/comments | https://api.github.com/repos/huggingface/datasets/issues/4389/events | https://github.com/huggingface/datasets/pull/4389 | 1,244,693,690 | PR_kwDODunzps44RKMn | 4,389 | Fix bug in gem dataset for wiki_auto_asset_turk config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,290,389,000 | 1,653,302,306,000 | 1,653,301,795,000 | MEMBER | null | This PR fixes some URLs.
Fix #4386. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4389/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4389",
"html_url": "https://github.com/huggingface/datasets/pull/4389",
"diff_url": "https://github.com/huggingface/datasets/pull/4389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4389.patch",
"merged_at": 1653301795000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4388/comments | https://api.github.com/repos/huggingface/datasets/issues/4388/events | https://github.com/huggingface/datasets/pull/4388 | 1,244,645,158 | PR_kwDODunzps44RAG1 | 4,388 | Set builder name from module instead of class | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,287,195,000 | 1,653,456,283,000 | 1,653,455,775,000 | MEMBER | null | Currently, the builder name attribute is set from the builder class name.
This PR sets the builder name attribute from the module name instead. Some motivating reasons:
- The dataset ID is relevant and unique among all datasets and this is directly related to the repository name, i.e., the name of the directory containing the dataset
- The name of the module (i.e. the file containing the loading script) is already relevant for loading: it must have the same name as its containing directory (related to the dataset ID), as we search for it using its directory name
- On the other hand, the name of the builder class is not relevant for loading: in our code, we just search for a class which is a subclass of `DatasetBuilder` (independently of its name). We do not put any constraint on the naming of the builder class and indeed it can have a name completely different from its module/directory/dataset_id
IMO it makes more sense to align the caching directory name with the dataset_id/directory/module name instead of the builder class name.
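A toy illustration of the difference (module and class names below are made up for the example):
```python
# Suppose this class lives in a loading script my_dataset/my_dataset.py;
# its class name can be anything and is unrelated to the dataset ID.
class SomeArbitraryBuilder:
    pass

name_from_class = SomeArbitraryBuilder.__name__.lower()
name_from_module = SomeArbitraryBuilder.__module__.split(".")[-1]  # "my_dataset" when imported from that module
```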
Fix #4381. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4388/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4388/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4388",
"html_url": "https://github.com/huggingface/datasets/pull/4388",
"diff_url": "https://github.com/huggingface/datasets/pull/4388.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4388.patch",
"merged_at": 1653455775000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4387/comments | https://api.github.com/repos/huggingface/datasets/issues/4387/events | https://github.com/huggingface/datasets/issues/4387 | 1,244,147,817 | I_kwDODunzps5KKDBp | 4,387 | device/google/accessory/adk2012 - Git at Google | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,195,439,000 | 1,653,287,787,000 | 1,653,287,787,000 | NONE | null | "git clone https://android.googlesource.com/device/google/accessory/adk2012"
https://android.googlesource.com/device/google/accessory/adk2012/#:~:text=git%20clone%20https%3A//android.googlesource.com/device/google/accessory/adk2012 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4387/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4386/comments | https://api.github.com/repos/huggingface/datasets/issues/4386/events | https://github.com/huggingface/datasets/issues/4386 | 1,243,965,532 | I_kwDODunzps5KJWhc | 4,386 | Bug for wiki_auto_asset_turk from GEM | {
"login": "StevenTang1998",
"id": 37647985,
"node_id": "MDQ6VXNlcjM3NjQ3OTg1",
"avatar_url": "https://avatars.githubusercontent.com/u/37647985?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StevenTang1998",
"html_url": "https://github.com/StevenTang1998",
"followers_url": "https://api.github.com/users/StevenTang1998/followers",
"following_url": "https://api.github.com/users/StevenTang1998/following{/other_user}",
"gists_url": "https://api.github.com/users/StevenTang1998/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StevenTang1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StevenTang1998/subscriptions",
"organizations_url": "https://api.github.com/users/StevenTang1998/orgs",
"repos_url": "https://api.github.com/users/StevenTang1998/repos",
"events_url": "https://api.github.com/users/StevenTang1998/events{/privacy}",
"received_events_url": "https://api.github.com/users/StevenTang1998/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ",
"Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```",
"Thanks for your reply!!\r\nAnd the totto dataset has the same problem. The url should be change to [https://storage.googleapis.com/totto-public/totto_data.zip](https://storage.googleapis.com/totto-public/totto_data.zip).",
"Hi again @StevenTang1998,\r\n\r\nI don't see any problem when loading `totto` dataset:\r\n```python\r\nIn [4]: import datasets\r\n ...: ds = datasets.load_dataset(\"totto\")\r\nDownloading builder script: 5.58kB [00:00, 5.33MB/s] \r\nDownloading metadata: 2.78kB [00:00, 2.96MB/s] \r\nUsing custom data configuration default\r\nDownloading and preparing dataset totto/default (download: 179.03 MiB, generated: 706.59 MiB, post-processed: Unknown size, total: 885.62 MiB) to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2...\r\nDownloading data: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 188M/188M [00:32<00:00, 5.77MB/s]\r\nDataset totto downloaded and prepared to .../.cache/huggingface/datasets/totto/default/1.0.0/263c85871e5451bc892c65ca0306c0629eb7beb161e0eb998f56231562335dd2. Subsequent calls will reuse this data.\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 147.95it/s]\r\n\r\nIn [5]: ds\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 120761\r\n })\r\n validation: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n test: Dataset({\r\n features: ['id', 'table_page_title', 'table_webpage_url', 'table_section_title', 'table_section_text', 'table', 'highlighted_cells', 'example_id', 'sentence_annotations', 'overlap_subset'],\r\n num_rows: 7700\r\n })\r\n})\r\n```",
"Sorry, I didn't express it clearly. It's the totto dataset from gem.\r\ndatasets.load_dataset('gem', 'totto')\r\n",
"@StevenTang1998 fixed in:\r\n- #4396",
"Thanks!!"
] | 1,653,136,290,000 | 1,653,371,752,000 | 1,653,301,795,000 | NONE | null | ## Describe the bug
The loading script for the `wiki_auto_asset_turk` config of GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 640, in download_and_prepare
self._download_and_prepare(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 1158, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/builder.py", line 707, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/tangtianyi/.cache/huggingface/modules/datasets_modules/datasets/gem/982a54473b12c6a6e40d4356e025fb7172a5bb2065e655e2c1af51f2b3cf4ca1/gem.py", line 538, in _split_generators
dl_dir = dl_manager.download_and_extract(_URLs[self.config.name])
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 416, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 294, in download
downloaded_path_or_paths = map_nested(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 351, in map_nested
mapped = [
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 352, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 288, in _single_map_nested
return function(data_struct)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 320, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 234, in cached_path
output_path = get_from_cache(
File "/home/tangtianyi/miniconda3/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 579, in get_from_cache
raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/asset/raw/master/dataset/asset.test.orig
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4386/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4385/comments | https://api.github.com/repos/huggingface/datasets/issues/4385/events | https://github.com/huggingface/datasets/pull/4385 | 1,243,921,287 | PR_kwDODunzps44OwXF | 4,385 | Test dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.5 and ones that I will make in 0.3.6 will result in different pickles than the ones dill 0.3.4 was making. This should still be fine for caching.",
"Just some comments @lhoestq:\r\n\r\nThe best practice for testing is to have a `test_<filename>.py` for each `<filename>.py`. Therefore in order to have the filenames aligned, I would propose:\r\n- either renaming `fingerprint.py` to `caching.py`\r\n- or renaming `test_caching.py` to `test_fingerprint.py`\r\n\r\nOn the other hand, my idea when implementing this test was not to test all the functionalities of the `Hasher`, but just to have a regression test that fails if dill version is > 0.3.4 and the pin in our `setup.py` is not present. Just recall that we had no failing test in our CI when the issue with dill was found on `transformers`.\r\n\r\nThe objective of this PR is just to have a regression test for that case: I tested and I got `AttributeError: module 'dill._dill' has no attribute 'stack'`\r\n\r\nFor this regression test, I took into account this comment by @gugarosa: https://github.com/huggingface/datasets/issues/4379#issuecomment-1133131825\r\n\r\nThere is no equivalent test in `test_caching.py` because our CI did not fail before pinning dill.",
"Ok I see, renaming it to `test_fingerprint.py` sounds like a good idea :)"
] | 1,653,123,463,000 | 1,653,467,413,000 | 1,653,466,908,000 | MEMBER | null | Regression test for future releases of `dill`.
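A minimal sketch of what such a regression test can look like (the actual test in this PR may differ):
```python
from datasets.fingerprint import Hasher

def test_hashing_a_function_is_deterministic():
    # Hashing a function exercises dill's `save_function`; with an
    # unpinned dill > 0.3.4 this used to raise
    # AttributeError: module 'dill._dill' has no attribute 'stack'
    assert Hasher.hash(lambda x: x + 1) == Hasher.hash(lambda x: x + 1)
```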
Related to #4379. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4385/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4385",
"html_url": "https://github.com/huggingface/datasets/pull/4385",
"diff_url": "https://github.com/huggingface/datasets/pull/4385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4385.patch",
"merged_at": 1653466908000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4384/comments | https://api.github.com/repos/huggingface/datasets/issues/4384/events | https://github.com/huggingface/datasets/pull/4384 | 1,243,919,748 | PR_kwDODunzps44OwFr | 4,384 | Refactor download | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This looks like a breaking change no ?\r\nAlso could you explain why it would be better this way ?",
"The might be only there to help type checkers, but I am not too familiar with the code base to know for sure. I think this might be useful:\n\nhttps://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING",
"> This looks like a breaking change no ?\r\n> Also could you explain why it would be better this way ?\r\n\r\nSorry, @lhoestq, I naively thought it was obvious. I have tried to give some arguments in the motivation of this PR (see above). I can give additional arguments if needed. "
] | 1,653,122,964,000 | 1,653,475,922,000 | 1,653,475,423,000 | MEMBER | null | This PR performs a refactoring of the download functionalities by proposing a modular solution and moving them to their own package "download". Some motivating arguments:
- understandability: from a logical partitioning of the library, it makes sense to have all download functionalities grouped together instead of scattered in a much larger directory containing many more different functionalities
- abstraction: the level of abstraction of "download" (higher) is not the same as that of "utils" (lower); putting different levels of abstraction together makes dependencies more intricate (potential circular dependencies) and the system more tightly coupled; when the levels of abstraction are clearly separated, the dependencies flow in a neat direction from higher to lower
- architectural: "download" is a domain-specific functionality of our library/application (a dataset builder performs several actions: download, generate dataset and cache it); these functionalities are at the core of our library; on the other hand, "utils" are always a low-level set of functionalities, not directly related to our domain/business core logic (all libraries have "utils"), thus at the periphery of our lib architecture
Also note that when a library is not architecturally designed following simple, neat, clean principles, this has a negative impact on extensibility, making it more and more difficult to make enhancements.
As a concrete example in this case, please see: https://app.circleci.com/pipelines/github/huggingface/datasets/12185/workflows/ff25a790-8e3f-45a1-aadd-9d79dfb73c4d/jobs/72860
- After an extension, a circular import is found
- Diving into the cause of this circular import, see the dependency flow, which should be from higher to lower levels of abstraction:
```
ImportError while loading conftest '/home/circleci/datasets/tests/conftest.py'.
tests/conftest.py:12: in <module>
import datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/__init__.py:37: in <module>
from .arrow_dataset import Dataset, concatenate_datasets
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/arrow_dataset.py:59: in <module>
from . import config
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/config.py:8: in <module>
from .utils.logging import get_logger
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/__init__.py:30: in <module>
from .download_manager import DownloadConfig, DownloadManager, DownloadMode
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/download_manager.py:39: in <module>
from .py_utils import NestedDataStructure, map_nested, size_str
../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/utils/py_utils.py:608: in <module>
if config.DILL_VERSION < version.parse("0.3.5"):
E AttributeError: module 'datasets.config' has no attribute 'DILL_VERSION'
```
Imports:
- datasets
- Dataset: lower level than datasets
- config: lower level than Dataset
- logger: lower level than config
- DownloadManager: !!! HIGHER level of abstraction than logger!!
Why does importing the logger require importing DownloadManager?! (see the sketch after this breakdown)
- Logically, it does not make sense
- This is due to an error in the design/architecture of our library:
- To import the logger, we need to import it from `.utils.logging`
- To import `.utils.logging` we need to import `.utils`
- The import of `.utils` requires the import of all its submodules defined in `utils.__init__.py`, among them: `.utils.download_manager`!
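A minimal sketch of this failure mode, using a hypothetical package layout rather than `datasets` itself:
```python
# mylib/utils/__init__.py -- eagerly re-exports every submodule
from .download_manager import DownloadManager  # high-level abstraction
from .logging import get_logger                # low-level abstraction

# mylib/config.py -- only wants the low-level logger...
from .utils.logging import get_logger  # ...but importing `.utils` first runs
                                       # utils/__init__.py, which pulls in
                                       # DownloadManager before config is ready

# mylib/utils/download_manager.py -- closes the cycle
from .. import config  # fails: partially initialized module 'mylib.config'
```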
When putting `logging` and `download_manager` both inside `utils`, in order to import `logging` we need to import `download_manager` first: this is strong coupling between modules, and moreover between modules at different levels of abstraction (to import a lower-level module, we are forced to import a higher-level module). Additionally, it clearly makes no sense that importing `logging` should require importing `download_manager` first. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4384/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4384",
"html_url": "https://github.com/huggingface/datasets/pull/4384",
"diff_url": "https://github.com/huggingface/datasets/pull/4384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4384.patch",
"merged_at": 1653475423000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4383/comments | https://api.github.com/repos/huggingface/datasets/issues/4383/events | https://github.com/huggingface/datasets/issues/4383 | 1,243,856,981 | I_kwDODunzps5KI8BV | 4,383 | L | {
"login": "AronCodes21",
"id": 99847861,
"node_id": "U_kgDOBfOOtQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99847861?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AronCodes21",
"html_url": "https://github.com/AronCodes21",
"followers_url": "https://api.github.com/users/AronCodes21/followers",
"following_url": "https://api.github.com/users/AronCodes21/following{/other_user}",
"gists_url": "https://api.github.com/users/AronCodes21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/AronCodes21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/AronCodes21/subscriptions",
"organizations_url": "https://api.github.com/users/AronCodes21/orgs",
"repos_url": "https://api.github.com/users/AronCodes21/repos",
"events_url": "https://api.github.com/users/AronCodes21/events{/privacy}",
"received_events_url": "https://api.github.com/users/AronCodes21/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,653,104,878,000 | 1,653,160,813,000 | 1,653,160,813,000 | NONE | null | ## Describe the L
L
## Expected L
A clear and concise lmll
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version: | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4383/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4382/comments | https://api.github.com/repos/huggingface/datasets/issues/4382/events | https://github.com/huggingface/datasets/issues/4382 | 1,243,839,783 | I_kwDODunzps5KI30n | 4,382 | First time trying | {
"login": "Aeckard45",
"id": 87345839,
"node_id": "MDQ6VXNlcjg3MzQ1ODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/87345839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Aeckard45",
"html_url": "https://github.com/Aeckard45",
"followers_url": "https://api.github.com/users/Aeckard45/followers",
"following_url": "https://api.github.com/users/Aeckard45/following{/other_user}",
"gists_url": "https://api.github.com/users/Aeckard45/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Aeckard45/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Aeckard45/subscriptions",
"organizations_url": "https://api.github.com/users/Aeckard45/orgs",
"repos_url": "https://api.github.com/users/Aeckard45/repos",
"events_url": "https://api.github.com/users/Aeckard45/events{/privacy}",
"received_events_url": "https://api.github.com/users/Aeckard45/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [] | 1,653,099,318,000 | 1,653,160,844,000 | 1,653,160,844,000 | NONE | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4382/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4381 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4381/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4381/comments | https://api.github.com/repos/huggingface/datasets/issues/4381/events | https://github.com/huggingface/datasets/issues/4381 | 1,243,478,863 | I_kwDODunzps5KHftP | 4,381 | Bug in caching 2 datasets both with the same builder class name | {
"login": "NouamaneTazi",
"id": 29777165,
"node_id": "MDQ6VXNlcjI5Nzc3MTY1",
"avatar_url": "https://avatars.githubusercontent.com/u/29777165?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NouamaneTazi",
"html_url": "https://github.com/NouamaneTazi",
"followers_url": "https://api.github.com/users/NouamaneTazi/followers",
"following_url": "https://api.github.com/users/NouamaneTazi/following{/other_user}",
"gists_url": "https://api.github.com/users/NouamaneTazi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NouamaneTazi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NouamaneTazi/subscriptions",
"organizations_url": "https://api.github.com/users/NouamaneTazi/orgs",
"repos_url": "https://api.github.com/users/NouamaneTazi/repos",
"events_url": "https://api.github.com/users/NouamaneTazi/events{/privacy}",
"received_events_url": "https://api.github.com/users/NouamaneTazi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi @NouamaneTazi, thanks for reporting.\r\n\r\nPlease note that both datasets are cached in the same directory because their loading builder classes have the same name: `class MTOP(datasets.GeneratorBasedBuilder)`.\r\n\r\nYou should name their builder classes differently, e.g.:\r\n- `MtopDomain`\r\n- `MtopIntent`",
"Hi @NouamaneTazi, please note that after our fix:\r\n- #4388\r\n\r\nwe do not consider the class name anymore, but the name of the file where the loading builder class is implemented. "
] | 1,653,070,683,000 | 1,654,157,917,000 | 1,653,455,775,000 | MEMBER | null | ## Describe the bug
The two datasets `mteb/mtop_intent` and `mteb/mtop_domain` both use the same cache folder `.cache/huggingface/datasets/mteb___mtop`. So if you first load `mteb/mtop_intent`, then datasets will not load `mteb/mtop_domain`.
If you delete this cache folder and flip the order in which you load the two datasets, you will get the opposite dataset loaded (the difference shows up in the `label` and `label_text` fields).
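The root cause (confirmed in the maintainer comments above) is that both loading scripts declare a builder class with the same name, so both resolve to the same cache directory. A sketch of the fix, with hypothetical class bodies:
```python
import datasets

# Before: both scripts defined `class MTOP(datasets.GeneratorBasedBuilder)`,
# so both datasets were cached under mteb___mtop.

# After: distinct class names yield distinct cache directories.
class MtopIntent(datasets.GeneratorBasedBuilder):  # cached under mteb___mtop_intent
    ...

class MtopDomain(datasets.GeneratorBasedBuilder):  # cached under mteb___mtop_domain
    ...
```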
## Steps to reproduce the bug
```python
import datasets
dataset = datasets.load_dataset("mteb/mtop_intent", "en")
print(dataset['train'][0])
dataset = datasets.load_dataset("mteb/mtop_domain", "en")
print(dataset['train'][0])
```
## Expected results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_intent/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop_domain/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 0, 'label_text': 'messaging'}
```
## Actual results
```
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 920.14it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
Reusing dataset mtop (/home/nouamane/.cache/huggingface/datasets/mteb___mtop/en/0.0.0/f930e32a294fed424f70263d8802390e350fff17862266e5fc156175c07d9c35)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1307.59it/s]
{'id': 3232343436343136, 'text': 'Has Angelika Kratzer video messaged me?', 'label': 1, 'label_text': 'GET_MESSAGE'}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1
- Platform: macOS-12.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4381/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4381/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4380 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4380/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4380/comments | https://api.github.com/repos/huggingface/datasets/issues/4380/events | https://github.com/huggingface/datasets/pull/4380 | 1,243,183,054 | PR_kwDODunzps44MUz0 | 4,380 | Pin dill | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,054,859,000 | 1,653,064,887,000 | 1,653,064,384,000 | MEMBER | null | Hotfix #4379.
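The pin presumably amounts to a one-line version constraint; a hypothetical `setup.py` excerpt:
```python
from setuptools import setup

setup(
    name="datasets",
    install_requires=[
        "dill<0.3.5",  # temporary pin: dill 0.3.5 removed internals our pickling code relies on
        # ... other dependencies unchanged ...
    ],
)
```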
CC: @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4380/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4380/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4380",
"html_url": "https://github.com/huggingface/datasets/pull/4380",
"diff_url": "https://github.com/huggingface/datasets/pull/4380.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4380.patch",
"merged_at": 1653064384000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4379/comments | https://api.github.com/repos/huggingface/datasets/issues/4379/events | https://github.com/huggingface/datasets/issues/4379 | 1,243,175,854 | I_kwDODunzps5KGVuu | 4,379 | Latest dill release raises exception | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Fixed by:\r\n- #4380 ",
"Just an additional insight, the latest dill (either 0.3.5 or 0.3.5.1) also broke the hashing/fingerprinting of any mapping function.\r\n\r\nFor example:\r\n```\r\nfrom datasets import load_dataset\r\n\r\nd = load_dataset(\"rotten_tomatoes\")\r\nd.map(lambda x: x)\r\n```\r\n\r\nReturns the standard non-dillable error:\r\n```\r\nParameter 'function'=<function <lambda> at 0x7fe7d18c9560> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly....\r\n```",
"@albertvillanova ExamplesTests.test_run_speech_recognition_seq2seq is in which file?",
"Thanks a lot @gugarosa for the insight: we will incorporate it in our CI as regression testing for future dill releases.",
"Hi @anivegesana, that test is in `transformers` library:\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/test_pytorch_examples.py#L449\r\n- https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py ",
"@albertvillanova\n\nI did a deep dive into @gugarosa's problem and found the issue and it might be related to the one @sgugger discovered. In dill 0.3.5(.1), I created a new `save_function` that fixes a bug in dill that prevented the pickling of recursive inner functions. It was a more complete solution to the problem that `dill._dill.stack` tried to solve in the internal API of dill. Since `dill._dill.stack` was no longer needed, I removed it. Since datasets copies the `save_function` directly from the dill API, it stops working with the new dill version since `dill._dill.stack` is no longer present and the `save_function` has been updated with new code.\r\n\r\nhttps://github.com/huggingface/datasets/blob/95193ae61e92aa537d0c65d37a1fd9d2393aae89/src/datasets/utils/py_utils.py#L607-L678\r\n\r\n~If the dill version is below 0.3.5, you should keep this function. If it is after, you would need to update your copy of `save_function` to use the code I introduced, or manually add a `stack` variable to `dill._dill` if it doesn't exist. Fortunately, in any version of Python 3.7+, dictionaries are always in insertion order and dill no longer supports Python 3.6 or older. So, any globals dictionary saved by dill 0.3.5+ will be deterministic given that the version of dill is held constant and this save_function is unnecessary for newer versions of dill.~\r\n\r\nAh. I see what is happening. I guess a different copy of the function code is needed that sorts the global variables by name.\r\n\r\n```py\r\nif dill.__version__.split('.') < ['0', '3', '5']:\r\n # current save_function code inside here\r\nelse:\r\n # new save_function code inside here with the following line inserted after creating the globals\r\n globs = {k: globs[k] for k in sorted(globs.keys())} \r\n```\r\n\r\nWill look into the test case @sgugger pointed out after that and verify if this is causing the problem.\r\n\r\nI am actually looking into rewriting the global variables code in uqfoundation/dill#466 and will keep this in mind and will try to create an easy way to modify the global variables in dill 0.3.6 (for example, sort them by key like datasets does).",
"Thanks a lot for your investigation @anivegesana.\r\n\r\nYes, we copied-pasted the old `save_function` function from `dill`, just adding a line to make deterministic the order of global variables `globs`. \r\n\r\nHowever, this function has changed a lot from version 0.3.5, after your PR (thank you for the fix in recursiveness, indeed):\r\n- uqfoundation/dill#443\r\n\r\nWe have to address this change.\r\n\r\nIf finally your PR to sort global variables is merged into dill 0.3.6, that will make our life easier, as the tweak will no longer be necessary. ;)\r\n\r\nI have included a regression test so that we are sure future releases of dill do not break `datasets`:\r\n- #4385 ",
"I should note that because Python 3.6 and older are now deprecated and Python 3.7 has insertion order dictionaries, the globals in dill will have a deterministic order, just not sorted. I would still keep it sorted like you have it to help with stability (for example, if someone reorders variables in a file, then sorting the globals would not invalidate the cache.)\n\nIt seems that the order is not quite deterministic in IPython. Huggingface datasets seems to do well in Jupyter regardless, so it is not a good idea to remove the sorting. uqfoundation/dill#19"
] | 1,653,054,516,000 | 1,653,148,406,000 | 1,653,066,387,000 | MEMBER | null | ## Describe the bug
As reported by @sgugger, the latest dill release is breaking things with Datasets.
```
______________ ExamplesTests.test_run_speech_recognition_seq2seq _______________
self = <multiprocess.pool.ApplyResult object at 0x7fa5981a1cd0>, timeout = None
def get(self, timeout=None):
self.wait(timeout)
if not self.ready():
raise TimeoutError
if self._success:
return self._value
else:
> raise self._value
E TypeError: '>' not supported between instances of 'NoneType' and 'float'
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4379/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4379/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4378 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4378/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4378/comments | https://api.github.com/repos/huggingface/datasets/issues/4378/events | https://github.com/huggingface/datasets/pull/4378 | 1,242,935,373 | PR_kwDODunzps44Lf2R | 4,378 | Tidy up license metadata for google_wellformed_query, newspop, sick | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"& thank you!"
] | 1,653,041,772,000 | 1,653,400,223,000 | 1,653,397,827,000 | CONTRIBUTOR | null | Amend three licenses on datasets to fit naming convention (lower case, cc licenses include sub-version number). I think that's it - everything else on datasets looks great & super-searchable now! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4378/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4378/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4378",
"html_url": "https://github.com/huggingface/datasets/pull/4378",
"diff_url": "https://github.com/huggingface/datasets/pull/4378.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4378.patch",
"merged_at": 1653397827000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4377/comments | https://api.github.com/repos/huggingface/datasets/issues/4377/events | https://github.com/huggingface/datasets/pull/4377 | 1,242,746,186 | PR_kwDODunzps44K4OY | 4,377 | Fix checksum and bug in irc_disentangle dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,031,768,000 | 1,653,039,276,000 | 1,653,038,792,000 | MEMBER | null | There was a bug in the filepath segment:
- wrong: `jkkummerfeld-irc-disentanglement-fd379e9`
- right: `jkkummerfeld-irc-disentanglement-35f0a40`
Also there was a bug in the checksum of the downloaded file.
This PR fixes these issues.
Partially fixes #4376.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4377/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4377",
"html_url": "https://github.com/huggingface/datasets/pull/4377",
"diff_url": "https://github.com/huggingface/datasets/pull/4377.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4377.patch",
"merged_at": 1653038792000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4376/comments | https://api.github.com/repos/huggingface/datasets/issues/4376/events | https://github.com/huggingface/datasets/issues/4376 | 1,242,218,144 | I_kwDODunzps5KCr6g | 4,376 | irc_disentagle viewer error | {
"login": "labouz",
"id": 25671683,
"node_id": "MDQ6VXNlcjI1NjcxNjgz",
"avatar_url": "https://avatars.githubusercontent.com/u/25671683?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/labouz",
"html_url": "https://github.com/labouz",
"followers_url": "https://api.github.com/users/labouz/followers",
"following_url": "https://api.github.com/users/labouz/following{/other_user}",
"gists_url": "https://api.github.com/users/labouz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/labouz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/labouz/subscriptions",
"organizations_url": "https://api.github.com/users/labouz/orgs",
"repos_url": "https://api.github.com/users/labouz/repos",
"events_url": "https://api.github.com/users/labouz/events{/privacy}",
"received_events_url": "https://api.github.com/users/labouz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"DUPLICATED comment from https://github.com/huggingface/datasets/issues/3807:\r\n\r\nmy code:\r\n```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"irc_disentangle\", download_mode=\"force_redownload\")\r\n```\r\nhowever, it produces the same error\r\n```\r\n[38](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=37) if len(bad_urls) > 0:\r\n [39](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=38) error_msg = \"Checksums didn't match\" + for_verification_name + \":\\n\"\r\n---> [40](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=39) raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\n [41](file:///Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/site-packages/datasets/utils/info_utils.py?line=40) logger.info(\"All the checksums matched successfully\" + for_verification_name)\r\n\r\nNonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/jkkummerfeld/irc-disentanglement/tarball/master']\r\n```\r\nI attempted to use the `ignore_verifications' as such:\r\n\r\n```\r\nds = datasets.load_dataset('irc_disentangle', download_mode=\"force_redownload\", ignore_verifications=True)\r\n\r\nDownloading builder script: 12.0kB [00:00, 5.92MB/s] \r\nDownloading metadata: 7.58kB [00:00, 3.48MB/s] \r\nNo config specified, defaulting to: irc_disentangle/ubuntu\r\nDownloading and preparing dataset irc_disentangle/ubuntu (download: 112.98 MiB, generated: 60.05 MiB, post-processed: Unknown size, total: 173.03 MiB) to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5...\r\nDownloading data: 118MB [00:09, 12.1MB/s] \r\n \r\nDataset irc_disentangle downloaded and prepared to /Users/laylabouzoubaa/.cache/huggingface/datasets/irc_disentangle/ubuntu/1.0.0/0f24ab262a21d8c1d989fa53ed20caa928f5880be26c162bfbc02445dbade7e5. Subsequent calls will reuse this data.\r\n100%|██████████| 3/3 [00:00<00:00, 675.38it/s]\r\n```\r\nbut, this returns an empty set?\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n test: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n validation: Dataset({\r\n features: ['id', 'raw', 'ascii', 'tokenized', 'date', 'connections'],\r\n num_rows: 0\r\n })\r\n})\r\n```\r\nnot sure what else to try at this point?\r\nThanks in advanced🤗",
"Thanks for reporting, @labouz. I'm addressing it. ",
"The issue with checksum and empty dataset has been fixed by:\r\n- #4377\r\n\r\nTo load the dataset, you should force the re-generation of the dataset from the downloaded file by passing `download_mode=\"reuse_cache_if_exists\"` to `load_dataset`.\r\n\r\nIn relation with the issue with the dataset viewer, first the dataset should be refactored to support streaming.",
"parfait!\r\nit works now, thank you 🙏 "
] | 1,652,987,716,000 | 1,654,158,000,000 | 1,654,158,000,000 | NONE | null | The dataset viewer shows this message for the "ubuntu" config's "train", "test", and "validation" splits:
```
Server error
Status code: 400
Exception: ValueError
Message: Cannot seek streaming HTTP file
```
It appears to give the same message for the "channel_two" data as well.
I get a Checksums error when using `load_dataset()` with this dataset, even with the `download_mode` and `ignore_verifications` options set. I referenced the issue here: https://github.com/huggingface/datasets/issues/3807 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4376/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4375/comments | https://api.github.com/repos/huggingface/datasets/issues/4375/events | https://github.com/huggingface/datasets/pull/4375 | 1,241,921,147 | PR_kwDODunzps44IMCS | 4,375 | Support DataLoader with num_workers > 0 in streaming mode | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4375). All of your documentation changes will be reflected on that endpoint."
] | 1,652,972,431,000 | 1,654,187,602,000 | null | MEMBER | null | ### Issue
It's currently not possible to properly stream a dataset using multiple `torch.utils.data.DataLoader` workers:
- the `TorchIterableDataset` can't be pickled and passed to the subprocesses: https://github.com/huggingface/datasets/issues/3950
- streaming extension is failing: https://github.com/huggingface/datasets/issues/3951
- `fsspec` doesn't work out of the box in subprocesses
### Solution in this PR
I fixed these to enable passing an `IterableDataset` to a `torch.utils.data.DataLoader` with `num_workers > 0`.
I also had to shard the `IterableDataset` to give each worker a shard, otherwise data would be duplicated. This is implemented in `TorchIterableDataset.__iter__` and uses the new `IterableDataset._iter_shard(shard_idx)` method.
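The per-worker sharding presumably follows the standard PyTorch pattern; a minimal sketch with a hypothetical shard list (not the actual `_iter_shard` implementation):
```python
from torch.utils.data import IterableDataset, get_worker_info


class ShardedIterable(IterableDataset):
    def __init__(self, shards):
        self.shards = shards  # e.g. a list of file paths, one per shard

    def __iter__(self):
        worker = get_worker_info()  # None when iterated in the main process
        shards = (
            self.shards
            if worker is None
            else self.shards[worker.id :: worker.num_workers]
        )
        for shard in shards:
            yield from self.read_shard(shard)

    def read_shard(self, shard):
        raise NotImplementedError  # placeholder for the real example generator
```
With this split, each of the `num_workers` processes reads a disjoint subset of shards, so no example is yielded twice.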
I also had to make a few changes to the patching that enables streaming in dataset scripts:
- the patches are now always applied - not just for streaming mode. They're applied when a builder is instantiated
- I improved it to also check for renamed modules or attributes (ex: pandas vs pd)
- I grouped all the patches of pathlib.Path into a class `xPath`, so that `Path` outside of dataset scripts stays unchanged - otherwise I didn't change the content of the extended Path methods for streaming
- I fixed a bug with the `pd.read_csv` patch: opening the file in "rb" mode was missing, causing some datasets to not work in streaming mode
### A few details regarding `fsspec` in multiprocessing
From https://github.com/fsspec/filesystem_spec/pull/963#issuecomment-1131709948 :
> Non-async instances might be safe in the forked child, if they hold no open files/sockets etc.; I'm not sure any implementations pass this test!
> If any async instance has been created, the newly forked processes must:
> 1. discard references to locks, threads and event loops and make new ones
> 2. not use any async fsspec instances from the parent process
> 3. clear all class instance caches
Therefore in a DataLoader's worker, I clear the reference to the loop and thread (1). We should be fine for 2 and 3 already since we don't use fsspec class instances from the parent process.
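Concretely, (1) boils down to dropping `fsspec`'s module-level event-loop and IO-thread singletons in each worker so that fresh ones are created on first use; a sketch (the helper name is hypothetical):
```python
import fsspec.asyn


def _reset_fsspec_in_worker():
    # fsspec.asyn stores its event loop and IO thread in one-element lists;
    # a forked child must not reuse the parent's, so clear the references
    # and let fsspec lazily recreate them.
    fsspec.asyn.iothread[0] = None
    fsspec.asyn.loop[0] = None
```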
Fix https://github.com/huggingface/datasets/issues/3950
Fix https://github.com/huggingface/datasets/issues/3951
TODO: fix tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4375/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4375/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4375",
"html_url": "https://github.com/huggingface/datasets/pull/4375",
"diff_url": "https://github.com/huggingface/datasets/pull/4375.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4375.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4374/comments | https://api.github.com/repos/huggingface/datasets/issues/4374/events | https://github.com/huggingface/datasets/issues/4374 | 1,241,860,535 | I_kwDODunzps5KBUm3 | 4,374 | extremely slow processing when using a custom dataset | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,652,969,885,000 | 1,652,978,716,000 | null | NONE | null | ## Processing a custom dataset loaded as a .txt file is extremely slow, compared to a dataset of similar volume from the hub
I have a large .txt file of 22 GB which I load into an HF dataset:
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
Further, I use a pre-processing function to clean the dataset:
`lang_dataset["train"] = lang_dataset["train"].map(
remove_non_indic_sentences, num_proc=12, batched=True, remove_columns=lang_dataset['train'].column_names), batch_size=64)`
This processing takes an astronomical amount of time, while hogging all the RAM.
A similar dataset of the same size that's available on the Hugging Face Hub works completely fine, running the same processing function on the same amount of data:
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
The predicted preprocessing times are as follows:
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
Note: both datasets are essentially the same, just provided by different sources, with +/- some samples; only one is hosted on the HF Hub and the other is downloaded in text format.
## Steps to reproduce the bug
```
import datasets
import psutil
import sys
import glob
from fastcore.utils import listify
import re
import gc

def remove_non_indic_sentences(example):
    # strip English/ASCII-looking spans from each text, keeping the Indic content
    tmp_ls = []
    eng_regex = r'[. a-zA-Z0-9ÖÄÅöäå _.,!"\'\/$]*'
    for e in listify(example['text']):
        matches = re.findall(eng_regex, e)
        for match in (str(match).strip() for match in matches if match not in [""," ", " ", ",", " ,", ", ", " , "]):
            if len(list(match.split(" "))) > 2:
                e = re.sub(match, " ", e, count=1)
        tmp_ls.append(e)
    gc.collect()
    example['clean_text'] = tmp_ls
    return example

lang_dataset = datasets.load_dataset("text", data_files="hi.txt")
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences, num_proc=12, batched=True,
    remove_columns=lang_dataset['train'].column_names, batch_size=64)

## the same thing works much faster when loading a similar dataset from the hub
lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)
lang_dataset["train"] = lang_dataset["train"].map(
    remove_non_indic_sentences, num_proc=12, batched=True,
    remove_columns=lang_dataset['train'].column_names, batch_size=64)
```
## Actual results
A similar dataset of the same size that's available on the Hugging Face Hub works completely fine, running the same processing function on the same amount of data:
`lang_dataset = datasets.load_dataset("oscar-corpus/OSCAR-2109", "hi", use_auth_token=True)`
**The predicted preprocessing times are as follows:**
huggingface hub dataset: 6.5 hrs
custom loaded dataset: 7000 hrs
**I even tried the following:**
- sharding the large 22 GB text file into smaller files before loading
- saving the dataset to disk and then loading it
- using a smaller `num_proc`
- using a smaller batch size
- processing without batches, i.e. without `batched=True`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2.dev0
- Platform: Ubuntu 20.04 LTS
- Python version: 3.9.7
- PyArrow version: 8.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4374/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4373/comments | https://api.github.com/repos/huggingface/datasets/issues/4373/events | https://github.com/huggingface/datasets/pull/4373 | 1,241,769,310 | PR_kwDODunzps44HsaY | 4,373 | Remove links in docs to old dataset viewer | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,966,679,000 | 1,653,060,268,000 | 1,653,059,765,000 | CONTRIBUTOR | null | Remove the links in the docs to the no longer maintained dataset viewer. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4373/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4373/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4373",
"html_url": "https://github.com/huggingface/datasets/pull/4373",
"diff_url": "https://github.com/huggingface/datasets/pull/4373.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4373.patch",
"merged_at": 1653059765000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4372 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4372/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4372/comments | https://api.github.com/repos/huggingface/datasets/issues/4372/events | https://github.com/huggingface/datasets/pull/4372 | 1,241,703,826 | PR_kwDODunzps44HeYC | 4,372 | Check if dataset features match before push in `DatasetDict.push_to_hub` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,963,550,000 | 1,653,060,216,000 | 1,653,059,730,000 | CONTRIBUTOR | null | Fix #4211 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4372/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4372/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4372",
"html_url": "https://github.com/huggingface/datasets/pull/4372",
"diff_url": "https://github.com/huggingface/datasets/pull/4372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4372.patch",
"merged_at": 1653059730000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4371/comments | https://api.github.com/repos/huggingface/datasets/issues/4371/events | https://github.com/huggingface/datasets/pull/4371 | 1,241,500,906 | PR_kwDODunzps44GzSZ | 4,371 | Add missing language tags for udhr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,952,850,000 | 1,653,040,272,000 | 1,653,039,790,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4371/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4371",
"html_url": "https://github.com/huggingface/datasets/pull/4371",
"diff_url": "https://github.com/huggingface/datasets/pull/4371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4371.patch",
"merged_at": 1653039790000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4369/comments | https://api.github.com/repos/huggingface/datasets/issues/4369/events | https://github.com/huggingface/datasets/pull/4369 | 1,240,245,642 | PR_kwDODunzps44CpCe | 4,369 | Add redirect to dataset script in the repo structure page | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,893,533,000 | 1,652,948,341,000 | 1,652,947,851,000 | MEMBER | null | Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4369/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4369/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4369",
"html_url": "https://github.com/huggingface/datasets/pull/4369",
"diff_url": "https://github.com/huggingface/datasets/pull/4369.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4369.patch",
"merged_at": 1652947851000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4368/comments | https://api.github.com/repos/huggingface/datasets/issues/4368/events | https://github.com/huggingface/datasets/pull/4368 | 1,240,064,860 | PR_kwDODunzps44CDFk | 4,368 | Add long answer candidates to natural questions dataset | {
"login": "seirasto",
"id": 4257308,
"node_id": "MDQ6VXNlcjQyNTczMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/4257308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seirasto",
"html_url": "https://github.com/seirasto",
"followers_url": "https://api.github.com/users/seirasto/followers",
"following_url": "https://api.github.com/users/seirasto/following{/other_user}",
"gists_url": "https://api.github.com/users/seirasto/gists{/gist_id}",
"starred_url": "https://api.github.com/users/seirasto/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/seirasto/subscriptions",
"organizations_url": "https://api.github.com/users/seirasto/orgs",
"repos_url": "https://api.github.com/users/seirasto/repos",
"events_url": "https://api.github.com/users/seirasto/events{/privacy}",
"received_events_url": "https://api.github.com/users/seirasto/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4368). All of your documentation changes will be reflected on that endpoint.",
"Once we have added `long_answer_candidates` maybe it would be worth to also add the missing `candidate_index` (inside `long_answer`). What do you think, @seirasto ?",
"Also note the \"Data Fields\" section in the README is missing the `long_answer` field.\r\n\r\nMoreover, there is no instance example in \"Data Instances\" section.",
"We could either make these fixes in this PR or in a subsequent PR.",
"@albertvillanova I've added the missing fields and updated the README to include a data instance and some other things. ",
"Great! I've made the updates to align the README. Please let me know if I missed anything.",
"As there were many minor little fixes, I thought it would be easier to fix them directly."
] | 1,652,884,542,000 | 1,654,102,645,000 | null | NONE | null | This is a modification of the Natural Questions dataset to include missing information specifically related to long answer candidates. (See here: https://github.com/google-research-datasets/natural-questions#long-answer-candidates). This information is important to ensure consistent comparison with prior work. It does not disturb the rest of the format. @lhoestq @albertvillanova | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4368/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4368/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4368",
"html_url": "https://github.com/huggingface/datasets/pull/4368",
"diff_url": "https://github.com/huggingface/datasets/pull/4368.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4368.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4367 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4367/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4367/comments | https://api.github.com/repos/huggingface/datasets/issues/4367/events | https://github.com/huggingface/datasets/pull/4367 | 1,240,011,602 | PR_kwDODunzps44B340 | 4,367 | Remove config names as yaml keys | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I included the change from https://github.com/huggingface/datasets/pull/4302 directly in this PR, this way the datasets will be updated right away in the CI (the CI is only triggered when a dataset card is changed)",
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright it's ready now :)\r\n\r\nHere is an example for the `ade_corpus_v2` dataset card. Notice the new `configs` key:\r\n\r\nhttps://github.com/huggingface/datasets/blob/76d9a141740a03f6836feb251f6059894b8d8046/datasets/ade_corpus_v2/README.md#L1-L78\r\n\r\nCI failures are only related to dataset cards missing some content."
] | 1,652,882,364,000 | 1,653,039,326,000 | 1,653,038,839,000 | MEMBER | null | Many datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
To fix this, I removed the tag separation per config name completely, and now have a single flat YAML for all configurations. Dataset search doesn't use this info anyway. I removed all the config names used as YAML keys and moved them under a new `config:` key.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Removing the dots from the YAML keys also allows us to do the same as in https://github.com/huggingface/datasets/pull/4302, which removes a hack that replaces all the dots with underscores in the YAML tags.
I also added a test in the CI that checks all the YAML tags (sketched below) to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like languages or task_ids
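For illustration, a minimal sketch of such a check — not the actual CI test; the allowed key set below is an assumption pieced together from the tags mentioned in this thread:
```python
import yaml

# Assumed set of registered top-level tag keys, not the library's exhaustive list.
ALLOWED_KEYS = {
    "annotations_creators", "language_creators", "languages", "licenses",
    "multilinguality", "size_categories", "source_datasets",
    "task_categories", "task_ids", "configs",
}

def validate_card_header(readme_text: str) -> None:
    header = readme_text.split("---")[1]  # YAML block between the first two "---" markers
    tags = yaml.safe_load(header)         # 1) must parse as YAML
    unknown = set(tags) - ALLOWED_KEYS    # 2) only registered tag keys allowed
    if unknown:
        raise ValueError(f"Unregistered YAML tag keys: {sorted(unknown)}")
```
| {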
"url": "https://api.github.com/repos/huggingface/datasets/issues/4367/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4367/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4367",
"html_url": "https://github.com/huggingface/datasets/pull/4367",
"diff_url": "https://github.com/huggingface/datasets/pull/4367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4367.patch",
"merged_at": 1653038839000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4366/comments | https://api.github.com/repos/huggingface/datasets/issues/4366/events | https://github.com/huggingface/datasets/issues/4366 | 1,239,534,165 | I_kwDODunzps5J4cpV | 4,366 | TypeError: __init__() missing 1 required positional argument: 'scheme' | {
"login": "jffgitt",
"id": 99231535,
"node_id": "U_kgDOBeonLw",
"avatar_url": "https://avatars.githubusercontent.com/u/99231535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jffgitt",
"html_url": "https://github.com/jffgitt",
"followers_url": "https://api.github.com/users/jffgitt/followers",
"following_url": "https://api.github.com/users/jffgitt/following{/other_user}",
"gists_url": "https://api.github.com/users/jffgitt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jffgitt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jffgitt/subscriptions",
"organizations_url": "https://api.github.com/users/jffgitt/orgs",
"repos_url": "https://api.github.com/users/jffgitt/repos",
"events_url": "https://api.github.com/users/jffgitt/events{/privacy}",
"received_events_url": "https://api.github.com/users/jffgitt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | [
"Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py"
] | 1,652,858,249,000 | 1,652,891,782,000 | 1,652,891,781,000 | NONE | null | "name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
when I run the command:
nohup python3 custom_service.pyc > service.log 2>&1&
the log:
nohup: ignoring input
Traceback (most recent call last):
File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module>
File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize
File "custom_impl.py", line 286, in custom_setup
File "custom_impl.py", line 127, in create_es_index
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__
ssl_show_warn=ssl_show_warn,
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs
node_configs = hosts_to_node_configs(hosts)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs
node_configs.append(host_mapping_to_node_config(host))
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config
return NodeConfig(**options) # type: ignore
TypeError: __init__() missing 1 required positional argument: 'scheme'
[1]+ Exit 1 nohup python3 custom_service.pyc > service.log 2>&1
custom_service.pyc can't run
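A likely cause is an elasticsearch-py 8.x client: its host mappings require an explicit `scheme`. A minimal sketch of the usual fixes (host and port assumed):
```python
from elasticsearch import Elasticsearch

# elasticsearch-py >= 8 needs a scheme, either as part of a full URL...
es = Elasticsearch("http://localhost:9200")
# ...or as an explicit key in the host mapping:
es = Elasticsearch(hosts=[{"host": "localhost", "port": 9200, "scheme": "http"}])
```
Since the server above reports version 7.5.0, pinning the client with `pip install "elasticsearch<8"` is the other common workaround.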
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4366/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4365/comments | https://api.github.com/repos/huggingface/datasets/issues/4365/events | https://github.com/huggingface/datasets/pull/4365 | 1,239,109,943 | PR_kwDODunzps43-4fC | 4,365 | Remove dots in config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing in favor of https://github.com/huggingface/datasets/pull/4367"
] | 1,652,818,377,000 | 1,652,882,872,000 | 1,652,882,381,000 | MEMBER | null | 20+ datasets have dots in their config names. However, this causes issues with the YAML tags of the dataset cards since we can't have dots in YAML keys.
This is related to https://github.com/huggingface/datasets/pull/2362 (internal https://github.com/huggingface/moon-landing/issues/946).
Also, removing the dots from the config names would allow us to merge https://github.com/huggingface/datasets/pull/4302, which removes a hack that replaces all the dots with underscores in the YAML tags.
I also added a test in the CI that checks all the YAML tags (config-name check sketched below) to make sure that:
- they can be parsed using a YAML parser
- they contain only valid YAML tags like `languages` or `task_ids`
- they contain valid config names (no invalid characters `<>:/\|?*.`)
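A sketch of the config-name part of that check, treating the characters listed above as the forbidden set (an assumption, not the exact CI code):
```python
import re

# Characters the list above treats as invalid in config names (dots included).
_INVALID_CHARS = re.compile(r"[<>:/\\|?*.]")

def is_valid_config_name(name: str) -> bool:
    return not _INVALID_CHARS.search(name)

assert not is_valid_config_name("20220301.fr")  # dot -> rejected
assert is_valid_config_name("20220301_fr")
```
| {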
"url": "https://api.github.com/repos/huggingface/datasets/issues/4365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4365/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4365",
"html_url": "https://github.com/huggingface/datasets/pull/4365",
"diff_url": "https://github.com/huggingface/datasets/pull/4365.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4365.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4364/comments | https://api.github.com/repos/huggingface/datasets/issues/4364/events | https://github.com/huggingface/datasets/pull/4364 | 1,238,976,106 | PR_kwDODunzps43-bmq | 4,364 | Support complex feature types as `features` in packaged loaders | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,810,003,000 | 1,653,999,983,000 | 1,653,999,392,000 | CONTRIBUTOR | null | This PR adds `table_cast` to the packaged loaders to fix casting to the `Image`/`Audio`, `ArrayND` and `ClassLabel` types. If these types are not present in the `builder.config.features` dictionary, the built-in `pa.Table.cast` is used for better performance. Additionally, this PR adds `cast_storage` to `ClassLabel` to support the string to int conversion in `table_cast` and ensure that integer labels are in a valid range.
Fix https://github.com/huggingface/datasets/issues/4210
This PR is also a solution for these (popular) discussions: https://discuss.huggingface.co/t/converting-string-label-to-int/2816 and https://discuss.huggingface.co/t/class-labels-for-custom-datasets/15130/2
TODO:
* [x] tests
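For illustration, a usage sketch of what this enables (file and label names are hypothetical): with the new casting, declaring a `ClassLabel` feature converts string labels to integer ids while loading a packaged format such as CSV.
```python
from datasets import ClassLabel, Features, Value, load_dataset

# Hypothetical CSV with a "text" column and a "label" column holding "neg"/"pos" strings.
features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = load_dataset("csv", data_files="reviews.csv", features=features)
print(ds["train"].features["label"])  # ClassLabel(names=['neg', 'pos'])
```
| {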
"url": "https://api.github.com/repos/huggingface/datasets/issues/4364/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4364/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4364",
"html_url": "https://github.com/huggingface/datasets/pull/4364",
"diff_url": "https://github.com/huggingface/datasets/pull/4364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4364.patch",
"merged_at": 1653999391000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4363/comments | https://api.github.com/repos/huggingface/datasets/issues/4363/events | https://github.com/huggingface/datasets/issues/4363 | 1,238,897,652 | I_kwDODunzps5J2BP0 | 4,363 | The dataset preview is not available for this split. | {
"login": "roholazandie",
"id": 7584674,
"node_id": "MDQ6VXNlcjc1ODQ2NzQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/7584674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roholazandie",
"html_url": "https://github.com/roholazandie",
"followers_url": "https://api.github.com/users/roholazandie/followers",
"following_url": "https://api.github.com/users/roholazandie/following{/other_user}",
"gists_url": "https://api.github.com/users/roholazandie/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roholazandie/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roholazandie/subscriptions",
"organizations_url": "https://api.github.com/users/roholazandie/orgs",
"repos_url": "https://api.github.com/users/roholazandie/repos",
"events_url": "https://api.github.com/users/roholazandie/events{/privacy}",
"received_events_url": "https://api.github.com/users/roholazandie/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! A dataset has to be streamable to work with the viewer. I did a quick test, and yours is, so this might be a bug in the viewer. cc @severo \r\n",
"Looking at it. The message is now:\r\n\r\n```\r\nMessage: cannot cache function '__shear_dense': no locator available for file '/src/services/worker/.venv/lib/python3.9/site-packages/librosa/util/utils.py'\r\n```\r\n\r\nso possibly it's related to the libraries versions?\r\n",
"Maybe this SO thread can help: https://stackoverflow.com/questions/59290386/runtimeerror-at-cannot-cache-function-shear-dense-no-locator-available-fo"
] | 1,652,805,283,000 | 1,652,891,105,000 | null | NONE | null | I have uploaded the corpus developed by our lab in the speech domain to huggingface [datasets](https://huggingface.co/datasets/Roh/ryanspeech). You can read the companion paper, accepted at Interspeech 2021, [here](https://arxiv.org/abs/2106.08468). The dataset works fine, but I can't make the dataset preview work. It gives me the following error that I don't understand. Can you help me begin debugging it?
```
Status code: 400
Exception: AttributeError
Message: 'NoneType' object has no attribute 'split'
```
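For reference, a quick streamability check like the one mentioned in the comments could look as follows (split name assumed):
```python
from datasets import load_dataset

ds = load_dataset("Roh/ryanspeech", split="train", streaming=True)
print(next(iter(ds)))  # the viewer requires that streaming like this works
```
| {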
"url": "https://api.github.com/repos/huggingface/datasets/issues/4363/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4363/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4362/comments | https://api.github.com/repos/huggingface/datasets/issues/4362/events | https://github.com/huggingface/datasets/pull/4362 | 1,238,680,112 | PR_kwDODunzps439bkf | 4,362 | Update dataset_infos for UDHN/udhr dataset | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4362). All of your documentation changes will be reflected on that endpoint.",
"Thanks for contributing @leondz.\r\n\r\nThe checksums of the files have changed because more languages have been added:\r\n- the new language codes need to be added to the dataset card (README file)\r\n- I think the dataset version number should also be increased, so that users who had previously cached it, get a new dataset download (with the additional languages)",
"Yep! All done (also fixed the language tags in the README which were iso639-3 instead of the expected bcp47)",
"I guess the language code CI failure is due to languages.json being a subset of bcp47 (see issue #4304), happy to contribute a solution here, e.g. autogeneration of the lang list from the relevant isos and the ietf bcp47 subtag register or full code for validation",
"> Thanks again for your contribution, @leondz.\r\n> \r\n> Yes, I think it is OK to set version 1.0.0 (as previous was 0.0.0).\r\n> \r\n> One of the CI failures is related to dummy data: once you have updated the dataset version, the dummy_data ZIP file should be moved from \"dummy/0.0.0/dummy_data.zip\" to \"dummy/1.0.0/dummy_data.zip\".\r\n\r\nOh, thanks, I missed that one\r\n\r\n\r\n> Other CI failure is related to missing languages in our resources file. This has been addressed in this PR:\r\n> \r\n> * #4371\r\n> \r\n> You should merge master branch into your feature branch to incorporate that fix.\r\n\r\nYeah, I saw this :) I already have the merge, thanks. I'm talking about the longer-term picture: every time another language code comes up (e.g. da-bornholm or es-VE), the json will need updating, because the current approach is non-exhaustive manual whitelisting instead of relying on the established bcp standard."
] | 1,652,795,579,000 | 1,653,400,290,000 | null | CONTRIBUTOR | null | Checksum update to `udhr` for issue #4361 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4362/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4362/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4362",
"html_url": "https://github.com/huggingface/datasets/pull/4362",
"diff_url": "https://github.com/huggingface/datasets/pull/4362.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4362.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4361 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4361/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4361/comments | https://api.github.com/repos/huggingface/datasets/issues/4361/events | https://github.com/huggingface/datasets/issues/4361 | 1,238,671,931 | I_kwDODunzps5J1KI7 | 4,361 | `udhr` doesn't load, dataset checksum mismatch | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,652,795,229,000 | 1,652,795,255,000 | null | CONTRIBUTOR | null | ## Describe the bug
Loading `udhr` fails due to a checksum mismatch for some source files. Looks like both of the source files on unicode.org have changed:
size + checksum in datasets repo:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2273633,
"checksum": "0565fa62c2ff155b84123198bcc967edd8c5eb9679eadc01e6fb44a5cf730fee"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2107471,
"checksum": "087b474a070dd4096ae3028f9ee0b30dcdcb030cc85a1ca02e143be46327e5e5"
}
}
```
size + checksum regenerated from current source files:
```
(hfdev) leon@blade:~/datasets/datasets/udhr$ rm dataset_infos.json
(hfdev) leon@blade:~/datasets/datasets/udhr$ datasets-cli test --save_infos udhr.py
Using custom data configuration default
Testing builder 'default' (1/1)
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Dataset udhn downloaded and prepared to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66. Subsequent calls will reuse this data.
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 686.69it/s]
Dataset Infos file saved at dataset_infos.json
Test successful.
(hfdev) leon@blade:~/datasets/datasets/udhr$ jq .default.download_checksums < dataset_infos.json
{
"https://unicode.org/udhr/assemblies/udhr_xml.zip": {
"num_bytes": 2389690,
"checksum": "a3350912790196c6e1b26bfd1c8a50e8575f5cf185922ecd9bd15713d7d21438"
},
"https://unicode.org/udhr/assemblies/udhr_txt.zip": {
"num_bytes": 2215441,
"checksum": "cb87ecb25b56f34e4fd6f22b323000524fd9c06ae2a29f122b048789cf17e9fe"
}
}
(hfdev) leon@blade:~/datasets/datasets/udhr$
```
--- Is unicode.org a sustainable hosting solution for this dataset?
## Steps to reproduce the bug
```python
from datasets import load_dataset
udhr = load_dataset("udhr")
```
## Expected results
That a Dataset object containing the UDHR data will be returned.
## Actual results
```
>>> d = load_dataset('udhr')
Using custom data configuration default
Downloading and preparing dataset udhn/default (download: 4.18 MiB, generated: 6.15 MiB, post-processed: Unknown size, total: 10.33 MiB) to /home/leon/.cache/huggingface/datasets/udhn/default/0.0.0/ad74b91fa2b3c386e5751b0c52bdfda76d334f76731142fd432d4acc2e2fde66...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/leon/.local/lib/python3.9/site-packages/datasets/load.py", line 1731, in load_dataset
builder_instance.download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 613, in download_and_prepare
self._download_and_prepare(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 1117, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/leon/.local/lib/python3.9/site-packages/datasets/builder.py", line 684, in _download_and_prepare
verify_checksums(
File "/home/leon/.local/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://unicode.org/udhr/assemblies/udhr_xml.zip', 'https://unicode.org/udhr/assemblies/udhr_txt.zip']
>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.1 commit/4110fb6034f79c5fb470cf1043ff52180e9c63b7
- Platform: Linux Ubuntu 20.04
- Python version: 3.9.12
- PyArrow version: 8.0.0
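Until the recorded checksums are refreshed, a temporary workaround sketch (flags as of `datasets` 2.2.x) is to skip verification and force a fresh download:
```python
from datasets import load_dataset

udhr = load_dataset("udhr", download_mode="force_redownload", ignore_verifications=True)
```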
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4361/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4361/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4360/comments | https://api.github.com/repos/huggingface/datasets/issues/4360/events | https://github.com/huggingface/datasets/pull/4360 | 1,237,239,096 | PR_kwDODunzps434izs | 4,360 | Fix example in opus_ubuntu, Add license info | {
"login": "leondz",
"id": 121934,
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leondz",
"html_url": "https://github.com/leondz",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"repos_url": "https://api.github.com/users/leondz/repos",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"CI seems to fail due to languages incorrectly being flagged as invalid, I guess that's related to the currently-broken bcp47 validation (see #4304)",
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,710,948,000 | 1,654,088,767,000 | 1,654,088,229,000 | CONTRIBUTOR | null | This PR
* fixes a typo in the example for the `opus_ubuntu` dataset, where it's mistakenly referred to as `ubuntu`
* adds the declared license info for this corpus' origin
* adds an example instance
* updates the data origin type | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4360/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4360/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4360",
"html_url": "https://github.com/huggingface/datasets/pull/4360",
"diff_url": "https://github.com/huggingface/datasets/pull/4360.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4360.patch",
"merged_at": 1654088229000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4359 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4359/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4359/comments | https://api.github.com/repos/huggingface/datasets/issues/4359/events | https://github.com/huggingface/datasets/pull/4359 | 1,237,149,578 | PR_kwDODunzps434Pb6 | 4,359 | Fix Version equality | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,707,166,000 | 1,653,409,537,000 | 1,653,409,034,000 | MEMBER | null | I think `Version` equality should align with other similar cases in Python, like:
```python
In [1]: "a" == 5, "a" == None
Out[1]: (False, False)
In [2]: "a" != 5, "a" != None
Out[2]: (True, True)
```
With this PR, we will get:
```python
In [3]: Version("1.0.0") == 5, Version("1.0.0") == None
Out[3]: (False, False)
In [4]: Version("1.0.0") != 5, Version("1.0.0") != None
Out[4]: (True, True)
```
Note I found this issue when `doc-builder` tried to compare:
```python
if param.default != inspect._empty
```
where `param.default` is an instance of `Version`.
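A minimal sketch of equality semantics along these lines (not the exact patch): comparing against an incompatible operand returns `False` for `==` and `True` for `!=` instead of raising.
```python
class Version:
    def __init__(self, version_str):
        self.major, self.minor, self.patch = map(int, version_str.split("."))

    def __eq__(self, other):
        if isinstance(other, str):
            try:
                other = Version(other)
            except ValueError:
                return False
        if not isinstance(other, Version):
            return False  # e.g. Version("1.0.0") == 5 or == None
        return (self.major, self.minor, self.patch) == (other.major, other.minor, other.patch)

assert (Version("1.0.0") == 5, Version("1.0.0") == None) == (False, False)
assert (Version("1.0.0") != 5, Version("1.0.0") != None) == (True, True)
```
| {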
"url": "https://api.github.com/repos/huggingface/datasets/issues/4359/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4359/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4359",
"html_url": "https://github.com/huggingface/datasets/pull/4359",
"diff_url": "https://github.com/huggingface/datasets/pull/4359.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4359.patch",
"merged_at": 1653409034000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4358/comments | https://api.github.com/repos/huggingface/datasets/issues/4358/events | https://github.com/huggingface/datasets/issues/4358 | 1,237,147,692 | I_kwDODunzps5JvWAs | 4,358 | Missing dataset tags and sections in some dataset cards | {
"login": "sashavor",
"id": 14205986,
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sashavor",
"html_url": "https://github.com/sashavor",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"repos_url": "https://api.github.com/users/sashavor/repos",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"@lhoestq I can take this issue. Please can you point out to me where I can find the other positional arguments?",
"Hi @RohitRathore1 :)\r\n\r\nYou can find all the YAML tags in the tagging app here: https://hf.co/spaces/huggingface/datasets-tagging). They're all passed as arguments to a DatasetMetadata object used to validate the tags."
] | 1,652,707,096,000 | 1,653,925,012,000 | null | CONTRIBUTOR | null | Summary of CircleCI errors for different dataset metadata:
- **BoolQ**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **Conllpp**: expected some content in section `Citation Information` but it is empty.
- **GLUE**: 'annotations_creators', 'language_creators', 'source_datasets': ['unknown'] are not registered tags
- **ConLL2003**: field 'task_ids': ['part-of-speech-tagging'] are not registered tags for 'task_ids'
- **Hate_speech18**: Expected some content in section `Data Instances` but it is empty; expected some content in section `Data Splits` but it is empty
- **Jigsaw_toxicity_pred**: Expected some content in section `Citation Information` but it is empty.
- **LIAR**: `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.
- **MSRA NER**: `Dataset Summary`, `Data Instances`, `Data Fields`, `Data Splits`, `Citation Information` are empty.
- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sms_spam**: `Data Instances` and `Data Splits` are empty.
- **Quora**: Expected some content in section `Citation Information` but it is empty; missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'
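For reference, a sketch of reproducing this validation locally, assuming the `DatasetMetadata` helper mentioned in the comments and its `from_readme` constructor:
```python
from pathlib import Path
from datasets.utils.metadata import DatasetMetadata

# Raises an error (e.g. missing required fields or unregistered tags) when a
# card's YAML header is invalid; the path below is hypothetical.
metadata = DatasetMetadata.from_readme(Path("datasets/boolq/README.md"))
```
| {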
"url": "https://api.github.com/repos/huggingface/datasets/issues/4358/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4358/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4357/comments | https://api.github.com/repos/huggingface/datasets/issues/4357/events | https://github.com/huggingface/datasets/pull/4357 | 1,237,037,069 | PR_kwDODunzps4333b9 | 4,357 | Fix warning in push_to_hub | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,701,817,000 | 1,652,714,329,000 | 1,652,713,841,000 | MEMBER | null | Fix warning:
```
FutureWarning: 'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.
```
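For reference, a usage sketch with the renamed keyword (dataset name and repo id are hypothetical):
```python
from datasets import load_dataset

ds = load_dataset("rotten_tomatoes", split="train")
ds.push_to_hub("username/my_dataset", max_shard_size="500MB")  # instead of the deprecated shard_size
```
| {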
"url": "https://api.github.com/repos/huggingface/datasets/issues/4357/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4357/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4357",
"html_url": "https://github.com/huggingface/datasets/pull/4357",
"diff_url": "https://github.com/huggingface/datasets/pull/4357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4357.patch",
"merged_at": 1652713841000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4356 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4356/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4356/comments | https://api.github.com/repos/huggingface/datasets/issues/4356/events | https://github.com/huggingface/datasets/pull/4356 | 1,236,846,308 | PR_kwDODunzps433OsB | 4,356 | Fix dataset builder default version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR requires one of these other PRs being merged first:\r\n- #4359 \r\n- huggingface/doc-builder#211"
] | 1,652,691,910,000 | 1,653,919,018,000 | 1,653,918,474,000 | MEMBER | null | Currently, when using a custom config (subclass of `BuilderConfig`), the default version set at the builder level is ignored: we must set the default version in the custom config class.
However, when loading a dataset with `config_kwargs` (for a configuration not present in `BUILDER_CONFIGS`), the default version set in the custom config is ignored and "0.0.0" is used instead:
```python
ds = load_dataset("wikipedia", language="co", date="20220501", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220501.co', version=0.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for co, parsed from 20220501 dump.')
```
with version "0.0.0" instead of "2.0.0".
See as a counter-example, when the config is present in `BUILDER_CONFIGS`:
```python
ds = load_dataset("wikipedia", "20220301.fr", beam_runner="DirectRunner")
```
generates the following config:
```python
WikipediaConfig(name='20220301.fr', version=2.0.0, data_dir=None, data_files=None, description='Wikipedia dataset for fr, parsed from 20220301 dump.')
```
with correct version "2.0.0", as set in the custom config class.
The reason for this is that `DatasetBuilder` has a default VERSION ("0.0.0") that overwrites the default version set in the custom config class.
This PR:
- Removes the default VERSION at `DatasetBuilder` (set to None, so that the class attribute exists but it does not override the custom config default version).
- Note that the `BuilderConfig` class already sets a default version = "0.0.0"; no need to pass this from the builder (see the sketch below).
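A sketch of where the default now comes from, with a custom config class in the style of the Wikipedia builder (field names assumed): with no `VERSION` forced at the builder level, the default declared below survives loading via `config_kwargs`.
```python
import datasets

class WikipediaConfig(datasets.BuilderConfig):
    def __init__(self, language=None, date=None, version="2.0.0", **kwargs):
        # The custom-config default ("2.0.0") is no longer overridden by a
        # builder-level VERSION of "0.0.0".
        super().__init__(name=f"{date}.{language}", version=datasets.Version(version), **kwargs)
        self.language = language
        self.date = date
```
| {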
"url": "https://api.github.com/repos/huggingface/datasets/issues/4356/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4356/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4356",
"html_url": "https://github.com/huggingface/datasets/pull/4356",
"diff_url": "https://github.com/huggingface/datasets/pull/4356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4356.patch",
"merged_at": 1653918474000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4355 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4355/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4355/comments | https://api.github.com/repos/huggingface/datasets/issues/4355/events | https://github.com/huggingface/datasets/pull/4355 | 1,236,797,490 | PR_kwDODunzps433EgP | 4,355 | Fix warning in upload_file | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,689,291,000 | 1,652,700,482,000 | 1,652,699,997,000 | MEMBER | null | Fix warning:
```
FutureWarning: Pass path_or_fileobj='...' as keyword args. From version 0.7 passing these as positional arguments will result in an error
```
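For reference, a sketch of the keyword-only call style the fix adopts (paths and repo id hypothetical), matching `huggingface_hub`'s `HfApi.upload_file`:
```python
from huggingface_hub import HfApi

HfApi().upload_file(
    path_or_fileobj="data/train.parquet",
    path_in_repo="data/train.parquet",
    repo_id="username/my_dataset",
    repo_type="dataset",
)
```
| {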
"url": "https://api.github.com/repos/huggingface/datasets/issues/4355/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4355/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4355",
"html_url": "https://github.com/huggingface/datasets/pull/4355",
"diff_url": "https://github.com/huggingface/datasets/pull/4355.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4355.patch",
"merged_at": 1652699997000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4354 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4354/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4354/comments | https://api.github.com/repos/huggingface/datasets/issues/4354/events | https://github.com/huggingface/datasets/issues/4354 | 1,236,404,383 | I_kwDODunzps5Jsgif | 4,354 | Problems with WMT dataset | {
"login": "eldarkurtic",
"id": 8884008,
"node_id": "MDQ6VXNlcjg4ODQwMDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8884008?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eldarkurtic",
"html_url": "https://github.com/eldarkurtic",
"followers_url": "https://api.github.com/users/eldarkurtic/followers",
"following_url": "https://api.github.com/users/eldarkurtic/following{/other_user}",
"gists_url": "https://api.github.com/users/eldarkurtic/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eldarkurtic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eldarkurtic/subscriptions",
"organizations_url": "https://api.github.com/users/eldarkurtic/orgs",
"repos_url": "https://api.github.com/users/eldarkurtic/repos",
"events_url": "https://api.github.com/users/eldarkurtic/events{/privacy}",
"received_events_url": "https://api.github.com/users/eldarkurtic/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi! Yes, the docs are outdated. Expect this to be fixed soon. \r\n\r\nIn the meantime, you can try to fix the issue yourself.\r\n\r\nThese are the configs/language pairs supported by `wmt15` from which you can choose:\r\n* `cs-en` (Czech - English)\r\n* `de-en` (German - English)\r\n* `fi-en` (Finnish- English)\r\n* `fr-en` (French - English)\r\n* `ru-en` (Russian - English)\r\n\r\nAnd the current implementation always uses all the subsets available for a language, so to define custom subsets, you'll have to clone the repo from the Hub and replace the line https://huggingface.co/datasets/wmt15/blob/main/wmt_utils.py#L688 with:\r\n`for split, ss_names in (self._subsets if self.config.subsets is None else self.config.subsets).items()`\r\n\r\nThen, you can load the dataset as follows:\r\n```python\r\nfrom datasets import load_dataset\r\ndset = load_dataset(\"path/to/local/wmt15_folder\", \"<one of 5 available configs>\", subsets=...)",
"@mariosasko thanks a lot for the suggested fix! "
] | 1,652,648,306,000 | 1,653,582,353,000 | null | NONE | null | ## Describe the bug
I am trying to load the WMT15 dataset and to define which data sources to use for the train/validation/test splits, but unfortunately the official documentation at [https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)](https://huggingface.co/datasets/wmt15#:~:text=Versions%20exists%20for,wmt_translate%22%2C%20config%3Dconfig)) doesn't seem to work anymore.
## Steps to reproduce the bug
```python
>>> import datasets
>>> a = datasets.translate.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'translate'
>>> a = datasets.wmt.WmtConfig()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'datasets' has no attribute 'wmt'
```
## Expected results
To load WMT15 with the given data sources.
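For reference, the maintainer-suggested workaround in the comment thread boils down to cloning the `wmt15` repo from the Hub, patching one line in `wmt_utils.py`, and then loading with custom subsets; a sketch (the config and subset mapping below are illustrative):
```python
from datasets import load_dataset

# "de-en" is one of the five available configs
# (cs-en, de-en, fi-en, fr-en, ru-en); `subsets` only takes effect after
# the one-line patch to wmt_utils.py described in the comments.
dset = load_dataset(
    "path/to/local/wmt15_folder",
    "de-en",
    subsets={"train": ["europarl_v7"]},  # illustrative subset selection
)
```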
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-5.10.0-10-amd64-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 7.0.0
- Pandas version: 1.4.1
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4354/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4354/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4353/comments | https://api.github.com/repos/huggingface/datasets/issues/4353/events | https://github.com/huggingface/datasets/pull/4353 | 1,236,092,176 | PR_kwDODunzps43016x | 4,353 | Don't strip proceeding hyphen | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,552,729,000 | 1,652,727,098,000 | 1,652,709,131,000 | CONTRIBUTOR | null | Closes #4320. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4353/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4353/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4353",
"html_url": "https://github.com/huggingface/datasets/pull/4353",
"diff_url": "https://github.com/huggingface/datasets/pull/4353.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4353.patch",
"merged_at": 1652709130000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4352/comments | https://api.github.com/repos/huggingface/datasets/issues/4352/events | https://github.com/huggingface/datasets/issues/4352 | 1,236,086,170 | I_kwDODunzps5JrS2a | 4,352 | When using `dataset.map()` if passed `Features` types do not match what is returned from the mapped function, execution does not except in an obvious way | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Hi ! Thanks for reporting :) `datasets` usually returns a `pa.lib.ArrowInvalid` error if the feature types don't match.\r\n\r\nIt would be awesome if we had a way to reproduce the `OverflowError` in this case, to better understand what happened and be able to provide the best error message"
] | 1,652,550,915,000 | 1,652,713,757,000 | null | NONE | null | ## Describe the bug
Recently I was trying to use `.map()` to preprocess a dataset. I defined the expected Features and passed them into `.map()` like `dataset.map(preprocess_data, features=features)`. My expected `Features` keys matched what came out of `preprocess_data`, but the types I had defined for them did not match the types that came back. Because of this, I ended up in tracebacks deep inside arrow_dataset.py and arrow_writer.py with exceptions that [did not make clear what the problem was](https://github.com/huggingface/datasets/issues/4349). In short, I ended up with overflows and the OS killing processes when Arrow was attempting to write. It wasn't until I dug into `def write_batch` and its loop over columns that I figured out what was going on.
It seems like `.map()` could check, on the first instance returned from the mapped function, that the returned data's types match the types provided by the `features` param, and error out with a clear exception if they don't. This would make the cause of the issue much more understandable and save people time. This could be construed as a feature request, but it feels more like a bug to me.
## Steps to reproduce the bug
I don't have explicit code to repro the bug, but I'll show an example.
Code prior to the fix:
```python
def preprocess_data(examples):
    # returns an encoded data dict with keys that match the features, but the types do not match
    ...

def get_encoded_data(data):
    dataset = Dataset.from_pandas(data)
    unique_labels = data['audit_type'].unique().tolist()
    features = Features({
        'image': Array3D(dtype="uint8", shape=(3, 224, 224)),
        'input_ids': Sequence(feature=Value(dtype='int64')),
        'attention_mask': Sequence(Value(dtype='int64')),
        'token_type_ids': Sequence(Value(dtype='int64')),
        'bbox': Array2D(dtype="int64", shape=(512, 4)),
        'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
    })
    encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names)
```
The Features set that fixed it:
```python
features = Features({
    'image': Sequence(Array3D(dtype="uint8", shape=(3, 224, 224))),
    'input_ids': Sequence(Sequence(feature=Value(dtype='int64'))),
    'attention_mask': Sequence(Sequence(Value(dtype='int64'))),
    'token_type_ids': Sequence(Sequence(Value(dtype='int64'))),
    'bbox': Sequence(Array2D(dtype="int64", shape=(512, 4))),
    'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
})
```
The difference between my original code (which was based on the documentation) and the working code is the addition of `Sequence(...)` around four of the five features, since I am working with paginated data and the doc examples are not.
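As a stopgap until `.map()` performs such validation itself, a pre-flight check can be written by hand; this is a minimal sketch that reuses the `preprocess_data` and `features` from above and relies only on existing `datasets` APIs:
```python
from datasets import Dataset

# Build a one-row Dataset with the declared features before mapping the
# full dataset: this exercises the same Arrow write path, so a feature
# type mismatch fails fast here instead of deep inside arrow_writer.py.
sample = preprocess_data(dataset[0])
Dataset.from_dict({k: [v] for k, v in sample.items()}, features=features)
```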
## Expected results
Dataset.map() attempts to validate the data types for each Feature on the first iteration and errors out if they do not match.
## Actual results
Based on the value of `writer_batch_size`, execution errors out when Arrow attempts to write because the types do not match, though its error messages don't make this obvious.
Example errors:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
## Environment info
datasets version: 2.1.0
Platform: macOS-12.2.1-arm64-arm-64bit
Python version: 3.9.12
PyArrow version: 6.0.1
Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4352/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4352/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4351 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4351/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4351/comments | https://api.github.com/repos/huggingface/datasets/issues/4351/events | https://github.com/huggingface/datasets/issues/4351 | 1,235,950,209 | I_kwDODunzps5JqxqB | 4,351 | Add optional progress bar for .save_to_disk(..) and .load_from_disk(..) when working with remote filesystems | {
"login": "Rexhaif",
"id": 5154447,
"node_id": "MDQ6VXNlcjUxNTQ0NDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5154447?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Rexhaif",
"html_url": "https://github.com/Rexhaif",
"followers_url": "https://api.github.com/users/Rexhaif/followers",
"following_url": "https://api.github.com/users/Rexhaif/following{/other_user}",
"gists_url": "https://api.github.com/users/Rexhaif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Rexhaif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rexhaif/subscriptions",
"organizations_url": "https://api.github.com/users/Rexhaif/orgs",
"repos_url": "https://api.github.com/users/Rexhaif/repos",
"events_url": "https://api.github.com/users/Rexhaif/events{/privacy}",
"received_events_url": "https://api.github.com/users/Rexhaif/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! I like this idea. For consistency with `load_dataset`, we can use `fsspec`'s `TqdmCallback` in `.load_from_disk` to monitor the number of bytes downloaded, and in `.save_to_disk`, we can track the number of saved shards for consistency with `push_to_hub` (after we implement https://github.com/huggingface/datasets/issues/4196)."
] | 1,652,527,842,000 | 1,652,882,386,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
When working with large datasets stored on remote filesystems (such as S3), the process of uploading a dataset can take a really long time. For instance, I was uploading a re-processed version of wmt17 en-ru to my S3 bucket and it took about 35 minutes (and that's with a fiber optic connection). The only output during that process was a progress bar for flattening indices, followed by ~35 minutes of complete silence.
**Describe the solution you'd like**
I want to be able to enable a progress bar when calling `.save_to_disk(...)` and `.load_from_disk(...)`. It would track either the number of bytes sent/received or the number of records written/loaded, and give some ETA. Basically just tqdm.
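To make the kind of feedback I mean concrete, here is a minimal sketch for the S3 upload direction; the bucket and key below are placeholders, though boto3's `Callback` hook is a real parameter:
```python
import os

import boto3
from tqdm import tqdm


def upload_with_progress(local_path: str, bucket: str, key: str) -> None:
    # boto3 invokes Callback with the number of bytes transferred since
    # the last call, which maps directly onto tqdm's update().
    size = os.path.getsize(local_path)
    with tqdm(total=size, unit="B", unit_scale=True, desc=key) as bar:
        boto3.client("s3").upload_file(local_path, bucket, key, Callback=bar.update)
```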
**Describe alternatives you've considered**
- Save the dataset to a tmp folder on disk and then upload it using a custom wrapper over botocore that reports progress, like [this](https://alexwlchan.net/2021/04/s3-progress-bars/). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4351/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4351/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4350 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4350/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4350/comments | https://api.github.com/repos/huggingface/datasets/issues/4350/events | https://github.com/huggingface/datasets/pull/4350 | 1,235,505,104 | PR_kwDODunzps43zKIV | 4,350 | Add a new metric: CTC_Consistency | {
"login": "YEdenZ",
"id": 92551194,
"node_id": "U_kgDOBYQ4Gg",
"avatar_url": "https://avatars.githubusercontent.com/u/92551194?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YEdenZ",
"html_url": "https://github.com/YEdenZ",
"followers_url": "https://api.github.com/users/YEdenZ/followers",
"following_url": "https://api.github.com/users/YEdenZ/following{/other_user}",
"gists_url": "https://api.github.com/users/YEdenZ/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YEdenZ/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YEdenZ/subscriptions",
"organizations_url": "https://api.github.com/users/YEdenZ/orgs",
"repos_url": "https://api.github.com/users/YEdenZ/repos",
"events_url": "https://api.github.com/users/YEdenZ/events{/privacy}",
"received_events_url": "https://api.github.com/users/YEdenZ/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for your contribution, @YEdenZ.\r\n\r\nPlease note that our old `metrics` module is in the process of being incorporated to a separate library called `evaluate`: https://github.com/huggingface/evaluate\r\n\r\nTherefore, I would ask you to transfer your PR to that repository. Thank you."
] | 1,652,463,079,000 | 1,652,955,784,000 | 1,652,955,783,000 | NONE | null | Add CTC_Consistency metric
Do I also need to modify the `test_metric_common.py` file to have the new metric run in the tests?
"url": "https://api.github.com/repos/huggingface/datasets/issues/4350/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4350/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4350",
"html_url": "https://github.com/huggingface/datasets/pull/4350",
"diff_url": "https://github.com/huggingface/datasets/pull/4350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4350.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4349/comments | https://api.github.com/repos/huggingface/datasets/issues/4349/events | https://github.com/huggingface/datasets/issues/4349 | 1,235,474,765 | I_kwDODunzps5Jo9lN | 4,349 | Dataset.map()'s fails at any value of parameter writer_batch_size | {
"login": "plamb-viso",
"id": 99206017,
"node_id": "U_kgDOBenDgQ",
"avatar_url": "https://avatars.githubusercontent.com/u/99206017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/plamb-viso",
"html_url": "https://github.com/plamb-viso",
"followers_url": "https://api.github.com/users/plamb-viso/followers",
"following_url": "https://api.github.com/users/plamb-viso/following{/other_user}",
"gists_url": "https://api.github.com/users/plamb-viso/gists{/gist_id}",
"starred_url": "https://api.github.com/users/plamb-viso/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/plamb-viso/subscriptions",
"organizations_url": "https://api.github.com/users/plamb-viso/orgs",
"repos_url": "https://api.github.com/users/plamb-viso/repos",
"events_url": "https://api.github.com/users/plamb-viso/events{/privacy}",
"received_events_url": "https://api.github.com/users/plamb-viso/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Note that this same issue occurs even if i preprocess with the more default way of tokenizing that uses LayoutLMv2Processor's internal OCR:\r\n\r\n```python\r\n feature_extractor = LayoutLMv2FeatureExtractor()\r\n tokenizer = LayoutLMv2Tokenizer.from_pretrained(\"microsoft/layoutlmv2-base-uncased\")\r\n processor = LayoutLMv2Processor(feature_extractor, tokenizer)\r\n encoded_inputs = processor(images, padding=\"max_length\", truncation=True)\r\n encoded_inputs[\"image\"] = np.array(encoded_inputs[\"image\"])\r\n encoded_inputs[\"label\"] = examples['label_id']\r\n```",
"Wanted to make sure anyone that finds this also finds my other report: https://github.com/huggingface/datasets/issues/4352",
"Did you close it because you found that it was due to the incorrect Feature types ?",
"Yeah-- my analysis of the issue was wrong in this one so I just closed it while linking to the new issue",
"I met with the same problem when doing some experiments about layoutlm. I tried to set the writer_batch_size to 1, and the error still exists. Is there any solutions to this problem?",
"The problem lies in how your Features are defined. It's erroring out when it actually goes to write them to disk"
] | 1,652,460,912,000 | 1,654,174,271,000 | 1,652,540,888,000 | NONE | null | ## Describe the bug
If the the value of `writer_batch_size` is less than the total number of instances in the dataset it will fail at that same number of instances. If it is greater than the total number of instances, it fails on the last instance.
Context:
I am attempting to fine-tune a pre-trained HuggingFace transformers model called LayoutLMv2. This model takes three inputs: document images, words and word bounding boxes. [The Processor for this model has two options](https://huggingface.co/docs/transformers/model_doc/layoutlmv2#usage-layoutlmv2processor), the default is passing a document to the Processor and allowing it to create images of the document and use PyTesseract to perform OCR and generate words/bounding boxes. The other option is to provide `revision="no_ocr"` to the pre-trained model which allows you to use your own OCR results (in my case, Amazon Textract) so you have to provide the image, words and bounding boxes yourself. I am using this second option which might be good context for the bug.
I am using the Dataset.map() paradigm to create these three inputs, encode them and save the dataset. Note that my documents (data instances) on average are fairly large and can range from 1 page up to 300 pages.
Code I am using is provided below
## Steps to reproduce the bug
I do not have explicit sample code, but I will paste the code I'm using in case reading it helps. When `.map()` is called, the dataset has 2933 rows, many of which represent large pdf documents.
```python
def get_encoded_data(data):
    dataset = Dataset.from_pandas(data)
    unique_labels = data['label'].unique()
    features = Features({
        'image': Array3D(dtype="int64", shape=(3, 224, 224)),
        'input_ids': Sequence(feature=Value(dtype='int64')),
        'attention_mask': Sequence(Value(dtype='int64')),
        'token_type_ids': Sequence(Value(dtype='int64')),
        'bbox': Array2D(dtype="int64", shape=(512, 4)),
        'label': ClassLabel(num_classes=len(unique_labels), names=unique_labels),
    })
    encoded_dataset = dataset.map(preprocess_data, features=features, remove_columns=dataset.column_names, writer_batch_size=dataset.num_rows+1)
    encoded_dataset.save_to_disk(TRAINING_DATA_PATH + ENCODED_DATASET_NAME)
    encoded_dataset.set_format(type="torch")
    return encoded_dataset
```
```python
PROCESSOR = LayoutLMv2Processor.from_pretrained(MODEL_PATH, revision="no_ocr", use_fast=False)
def preprocess_data(examples):
    directory = os.path.join(FILES_PATH, examples['file_location'])
    images_dir = os.path.join(directory, PDF_IMAGE_DIR)
    textract_response_path = os.path.join(directory, 'textract.json')
    doc_meta_path = os.path.join(directory, 'doc_meta.json')
    textract_document = get_textract_document(textract_response_path, doc_meta_path)
    images, words, bboxes = get_doc_training_data(images_dir, textract_document)
    encoded_inputs = PROCESSOR(images, words, boxes=bboxes, padding="max_length", truncation=True)
    # https://github.com/NielsRogge/Transformers-Tutorials/issues/36
    encoded_inputs["image"] = np.array(encoded_inputs["image"])
    encoded_inputs["label"] = examples['label_id']
    return encoded_inputs
```
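As a side note, a cheap way to check whether the failure really tracks `writer_batch_size`, rather than the data itself, is to map a small slice first; this sketch reuses the `preprocess_data` and `features` defined above:
```python
# If even a tiny slice with a tiny writer batch fails, the batch size is
# not the culprit; the types coming out of preprocess_data are.
probe = dataset.select(range(8)).map(
    preprocess_data,
    features=features,
    remove_columns=dataset.column_names,
    writer_batch_size=2,
)
```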
## Expected results
My expectation is that `writer_batch_size` allows one to simply trade off performance and memory requirements, not that it must be a specific number for `.map()` to function correctly.
## Actual results
If writer_batch_size is set to a value less than the number of rows, I get either:
```
OverflowError: There was an overflow with type <class 'list'>. Try to reduce writer_batch_size to have batches smaller than 2GB.
(offset overflow while concatenating arrays)
```
or simply
```
zsh: killed python doc_classification.py
UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
```
If it is greater than the number of rows, I get the `zsh: killed` error above.
## Environment info
- `datasets` version: 2.1.0
- Platform: macOS-12.2.1-arm64-arm-64bit
- Python version: 3.9.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4349/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4349/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4348 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4348/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4348/comments | https://api.github.com/repos/huggingface/datasets/issues/4348/events | https://github.com/huggingface/datasets/issues/4348 | 1,235,432,976 | I_kwDODunzps5JozYQ | 4,348 | `inspect` functions can't fetch dataset script from the Hub | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Hi, thanks for reporting! `git bisect` points to #2986 as the PR that introduced the bug. Since then, there have been some additional changes to the loading logic, and in the current state, `force_local_path` (set via `local_path`) forbids pulling a script from the internet instead of downloading it: https://github.com/huggingface/datasets/blob/cfae0545b2ba05452e16136cacc7d370b4b186a1/src/datasets/inspect.py#L89-L91\r\n\r\ncc @lhoestq: `force_local_path` is only used in `inspect_dataset` and `inspect_metric`. Is it OK if we revert the behavior to match the old one?",
"Good catch ! Yea I think it's fine :)"
] | 1,652,458,106,000 | 1,654,008,246,000 | null | MEMBER | null | The `inspect_dataset` and `inspect_metric` functions are unable to retrieve a dataset or metric script from the Hub and store it locally at the specified `local_path`:
```py
>>> from datasets import inspect_dataset
>>> inspect_dataset('rotten_tomatoes', local_path='path/to/my/local/folder')
FileNotFoundError: Couldn't find a dataset script at /content/rotten_tomatoes/rotten_tomatoes.py or any data file in the same directory.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4348/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4348/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4347/comments | https://api.github.com/repos/huggingface/datasets/issues/4347/events | https://github.com/huggingface/datasets/pull/4347 | 1,235,318,064 | PR_kwDODunzps43yihq | 4,347 | Support remote cache_dir | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq thanks for your review.\r\n\r\nPlease note that `xjoin` cannot be used in this context, as it always returns a POSIX path string and this is not suitable on Windows machines.",
"<s>`xjoin` returns windows paths (not posix) on windows, since it just extends`os.path.join` </s>\r\n\r\nActually you are right.\r\n\r\nhttps://github.com/huggingface/datasets/blob/08ec04ccb59630a3029b2ecd8a14d327bddd0c4a/src/datasets/utils/streaming_download_manager.py#L104-L105\r\n\r\nThough this is not an issue because posix paths (as returned by Path().as_posix()) work on windows. That's why we can replace `os.path.join` with `xjoin` in streaming mode. They look like `c:/Program Files/` or something (can't confirm right now, I don't have a windows with me)",
"Until now, we have always replaced \"/\" in paths with `os.path.join` (`os.sep`,...) in order to support Windows paths (that contain r\"\\\\\").\r\n\r\nNow, you suggest ignoring this and work with POSIX strings (with \"/\").\r\n\r\nAs an example, when passing `cache_dir=r\"C:\\Users\\Username\\.mycache\"`:\r\n- Until now, it results in `self._cache_downloaded_dir = r\"C:\\Users\\Username\\.mycache\\downloads\"`\r\n- If we use `xjoin`, it will give `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`\r\n\r\nYou say this is OK and we don't care if we work with POSIX strings on Windows machines.\r\n\r\nI'm incorporating your suggested changes then...",
"Also note that using `xjoin`, if we pass `cache_dir=\"C:\\\\Users\\\\Username\\\\.mycache\"`, we get:\r\n- `self._cache_dir_root = \"C:\\\\Users\\\\Username\\\\.mycache\"`\r\n- `self._cache_downloaded_dir = \"C:/Users/Username/.mycache/downloads\"`",
"It looks like it broke the CI on windows :/ maybe this was not a good idea, sorry"
] | 1,652,451,995,000 | 1,653,496,523,000 | 1,653,496,023,000 | MEMBER | null | This PR implements complete support for remote `cache_dir`. Before, the support was just partial.
This is useful for creating datasets with the Apache Beam (parallel data processing) builder while keeping `cache_dir` in a remote bucket, e.g., for the Wikipedia dataset.
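A sketch of the kind of call this enables (the dataset config, runner, and bucket path below are illustrative, not taken from the PR):
```python
from datasets import load_dataset

# With full remote-cache support, the Apache Beam builder can keep its
# downloaded and processed files in a bucket instead of on local disk.
ds = load_dataset(
    "wikipedia",
    "20220301.en",
    beam_runner="DirectRunner",
    cache_dir="s3://my-bucket/hf-cache",  # illustrative remote cache dir
)
```
| {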
"url": "https://api.github.com/repos/huggingface/datasets/issues/4347/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4347/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4347",
"html_url": "https://github.com/huggingface/datasets/pull/4347",
"diff_url": "https://github.com/huggingface/datasets/pull/4347.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4347.patch",
"merged_at": 1653496023000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4346 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4346/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4346/comments | https://api.github.com/repos/huggingface/datasets/issues/4346/events | https://github.com/huggingface/datasets/issues/4346 | 1,235,067,062 | I_kwDODunzps5JnaC2 | 4,346 | GH Action to build documentation never ends | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,652,438,684,000 | 1,652,440,920,000 | 1,652,440,920,000 | MEMBER | null | ## Describe the bug
See: https://github.com/huggingface/datasets/runs/6418035586?check_suite_focus=true
I finally force-cancelled the workflow. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4346/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4346/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4345 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4345/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4345/comments | https://api.github.com/repos/huggingface/datasets/issues/4345/events | https://github.com/huggingface/datasets/pull/4345 | 1,235,062,787 | PR_kwDODunzps43xrky | 4,345 | Fix never ending GH Action to build documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,652,438,410,000 | 1,652,441,383,000 | 1,652,440,920,000 | MEMBER | null | There was an unclosed code block introduced by:
- #4313
https://github.com/huggingface/datasets/pull/4313/files#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R538
This causes the "Make documentation" step in the "Build documentation" workflow to never finish.
- I think this issue should also be addressed in the `doc-builder` lib.
Fix #4346. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4345/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4345/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4345",
"html_url": "https://github.com/huggingface/datasets/pull/4345",
"diff_url": "https://github.com/huggingface/datasets/pull/4345.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4345.patch",
"merged_at": 1652440920000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4344/comments | https://api.github.com/repos/huggingface/datasets/issues/4344/events | https://github.com/huggingface/datasets/pull/4344 | 1,234,882,542 | PR_kwDODunzps43xFEn | 4,344 | Fix docstring in DatasetDict::shuffle | {
"login": "felixdivo",
"id": 4403130,
"node_id": "MDQ6VXNlcjQ0MDMxMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/4403130?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixdivo",
"html_url": "https://github.com/felixdivo",
"followers_url": "https://api.github.com/users/felixdivo/followers",
"following_url": "https://api.github.com/users/felixdivo/following{/other_user}",
"gists_url": "https://api.github.com/users/felixdivo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/felixdivo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/felixdivo/subscriptions",
"organizations_url": "https://api.github.com/users/felixdivo/orgs",
"repos_url": "https://api.github.com/users/felixdivo/repos",
"events_url": "https://api.github.com/users/felixdivo/events{/privacy}",
"received_events_url": "https://api.github.com/users/felixdivo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,652,429,160,000 | 1,653,470,623,000 | 1,653,406,521,000 | CONTRIBUTOR | null | I think that, due to #1626, the docstring has contained this error ever since `seed` was added. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4344/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4344/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4344",
"html_url": "https://github.com/huggingface/datasets/pull/4344",
"diff_url": "https://github.com/huggingface/datasets/pull/4344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4344.patch",
"merged_at": 1653406521000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4343/comments | https://api.github.com/repos/huggingface/datasets/issues/4343/events | https://github.com/huggingface/datasets/issues/4343 | 1,234,864,168 | I_kwDODunzps5Jmogo | 4,343 | Metrics documentation is not accessible in the datasets doc UI | {
"login": "fxmarty",
"id": 9808326,
"node_id": "MDQ6VXNlcjk4MDgzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9808326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fxmarty",
"html_url": "https://github.com/fxmarty",
"followers_url": "https://api.github.com/users/fxmarty/followers",
"following_url": "https://api.github.com/users/fxmarty/following{/other_user}",
"gists_url": "https://api.github.com/users/fxmarty/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fxmarty/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fxmarty/subscriptions",
"organizations_url": "https://api.github.com/users/fxmarty/orgs",
"repos_url": "https://api.github.com/users/fxmarty/repos",
"events_url": "https://api.github.com/users/fxmarty/events{/privacy}",
"received_events_url": "https://api.github.com/users/fxmarty/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400959,
"node_id": "MDU6TGFiZWwyMDY3NDAwOTU5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Metric%20discussion",
"name": "Metric discussion",
"color": "d722e8",
"default": false,
"description": "Discussions on the metrics"
}
] | closed | false | null | [] | null | [
"Hey @fxmarty :) Yes we are working on showing the docs of all the metrics on the Hugging face website. If you want to follow the advancements you can check the [evaluate](https://github.com/huggingface/evaluate) repository cc @lvwerra @sashavor "
] | 1,652,427,990,000 | 1,654,246,225,000 | 1,654,246,225,000 | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
Searching for a metric name like "seqeval" yields no results on https://huggingface.co/docs/datasets/master/en/index . One needs to go look in `datasets/metrics/README.md` to find the doc. Even in the `README.md`, it can be hard to understand what a metric expects as input; for example, for `squad` there is a [key `id`](https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L42) documented only in the function doc but not in the `README.md`, and one needs to go look into the code to understand what the metric expects.
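To make the `squad` case concrete, this is the input format one only discovers from the script's docstring (a sketch; the `id` values are arbitrary):
```python
from datasets import load_metric

squad = load_metric("squad")
# Each prediction and reference must carry a matching `id` key; this is
# documented in squad.py's docstring but not in metrics/README.md.
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{
    "id": "1",
    "answers": {"text": ["Denver Broncos"], "answer_start": [177]},
}]
print(squad.compute(predictions=predictions, references=references))
```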
**Describe the solution you'd like**
Have the documentation for metrics appear as well in the doc UI, e.g. this https://github.com/huggingface/datasets/blob/1a4c185663a6958f48ec69624473fdc154a36a9d/metrics/squad/squad.py#L21-L63
I know there are plans to migrate metrics to the evaluate library, but just pointing this out.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4343/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4343/timeline | null | completed | null | null | false |