url (string, 58–61 chars) | repository_url (string, 1 class) | labels_url (string, 72–75 chars) | comments_url (string, 67–70 chars) | events_url (string, 65–68 chars) | html_url (string, 46–51 chars) | id (int64, 599M–1.26B) | node_id (string, 18–32 chars) | number (int64, 1–4.44k) | title (string, 1–276 chars) | user (dict) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | assignee (dict) | assignees (list) | milestone (dict) | comments (sequence) | created_at (int64, 1,587B–1,654B) | updated_at (int64, 1,587B–1,654B) | closed_at (int64, 1,587B–1,654B, nullable) | author_association (string, 3 classes) | active_lock_reason (null) | body (string, 0–228k chars, nullable) | reactions (dict) | timeline_url (string, 67–70 chars) | performed_via_github_app (null) | state_reason (string, 1 class) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4443/comments | https://api.github.com/repos/huggingface/datasets/issues/4443/events | https://github.com/huggingface/datasets/issues/4443 | 1,259,606,334 | I_kwDODunzps5LFBE- | 4,443 | Dataset Viewer issue for openclimatefix/nimrod-uk-1km | {
"login": "ZYMXIXI",
"id": 32382826,
"node_id": "MDQ6VXNlcjMyMzgyODI2",
"avatar_url": "https://avatars.githubusercontent.com/u/32382826?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZYMXIXI",
"html_url": "https://github.com/ZYMXIXI",
"followers_url": "https://api.github.com/users/ZYMXIXI/followers",
"following_url": "https://api.github.com/users/ZYMXIXI/following{/other_user}",
"gists_url": "https://api.github.com/users/ZYMXIXI/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZYMXIXI/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZYMXIXI/subscriptions",
"organizations_url": "https://api.github.com/users/ZYMXIXI/orgs",
"repos_url": "https://api.github.com/users/ZYMXIXI/repos",
"events_url": "https://api.github.com/users/ZYMXIXI/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZYMXIXI/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,654,244,236,000 | 1,654,244,237,000 | null | NONE | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4443/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4443/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4442/comments | https://api.github.com/repos/huggingface/datasets/issues/4442/events | https://github.com/huggingface/datasets/issues/4442 | 1,258,589,276 | I_kwDODunzps5LBIxc | 4,442 | Dataset Viewer issue for amazon_polarity | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks, looking at it"
] | 1,654,197,518,000 | 1,654,241,950,000 | null | MEMBER | null | ### Link
https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test
### Description
For some reason the train split is OK but the test split is not for this dataset:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py'
```
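For what it's worth, one quick way to check whether the failure is specific to the viewer's cache rather than the dataset script itself is to try loading the split locally; a minimal sketch, assuming the default config:
```python
from datasets import load_dataset

# if this succeeds locally, the FileNotFoundError above is likely a stale
# cache on the viewer side rather than a problem with the script
ds = load_dataset("amazon_polarity", split="test")
print(ds[0])
```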
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4442/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4442/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4441/comments | https://api.github.com/repos/huggingface/datasets/issues/4441/events | https://github.com/huggingface/datasets/issues/4441 | 1,258,568,656 | I_kwDODunzps5LBDvQ | 4,441 | Dataset Viewer issue for aeslc | {
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [] | 1,654,196,232,000 | 1,654,196,232,000 | null | MEMBER | null | ### Link
https://huggingface.co/datasets/aeslc
### Description
The dataset viewer can't find `dataset_infos.json` in its cache:
```
Server error
Status code: 400
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json'
```
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4441/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4441/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4440/comments | https://api.github.com/repos/huggingface/datasets/issues/4440/events | https://github.com/huggingface/datasets/pull/4440 | 1,258,494,469 | PR_kwDODunzps44_io_ | 4,440 | Update docs around audio and vision | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4440). All of your documentation changes will be reflected on that endpoint."
] | 1,654,191,723,000 | 1,654,192,190,000 | null | MEMBER | null | As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs.
Other changes include:
- Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's.
- Updated the native TF code for creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` bit was redundant, and removing it fixed the error (please double-check me here; see the sketch after this list!).
- Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect.
- Reverted to the code tabs for content that doesn't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text.
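To illustrate the `to_tensor()` point above, here is a self-contained sketch with made-up toy data (illustrative only; the actual change lives in the PR diff):
```python
import tensorflow as tf

# toy tokenized inputs with uneven lengths (hypothetical data)
input_ids = [[101, 2023, 102], [101, 2003, 1037, 102]]
labels = [0, 1]

features = tf.ragged.constant(input_ids)  # the redundant step was an extra .to_tensor() here
tf_ds = tf.data.Dataset.from_tensor_slices((features, labels))

for x, y in tf_ds.take(1):
    print(x, y)
```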
Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4440/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4440/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4440",
"html_url": "https://github.com/huggingface/datasets/pull/4440",
"diff_url": "https://github.com/huggingface/datasets/pull/4440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4440.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4439/comments | https://api.github.com/repos/huggingface/datasets/issues/4439/events | https://github.com/huggingface/datasets/issues/4439 | 1,258,434,111 | I_kwDODunzps5LAi4_ | 4,439 | TIMIT won't load after manual download: Errors about files that don't exist | {
"login": "drscotthawley",
"id": 13925685,
"node_id": "MDQ6VXNlcjEzOTI1Njg1",
"avatar_url": "https://avatars.githubusercontent.com/u/13925685?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/drscotthawley",
"html_url": "https://github.com/drscotthawley",
"followers_url": "https://api.github.com/users/drscotthawley/followers",
"following_url": "https://api.github.com/users/drscotthawley/following{/other_user}",
"gists_url": "https://api.github.com/users/drscotthawley/gists{/gist_id}",
"starred_url": "https://api.github.com/users/drscotthawley/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/drscotthawley/subscriptions",
"organizations_url": "https://api.github.com/users/drscotthawley/orgs",
"repos_url": "https://api.github.com/users/drscotthawley/repos",
"events_url": "https://api.github.com/users/drscotthawley/events{/privacy}",
"received_events_url": "https://api.github.com/users/drscotthawley/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"To have some context, please see:\r\n- #4145\r\n\r\nPlease, also note that we have recently made some fixes to the script, which are in our GitHub master branch but not yet released:\r\n- #4422\r\n- #4425 \r\n- #4436",
"Thanks Albert! I'll try pulling `datasets` from the git repo instead of PyPI, and/or just wait for the next release.\r\n",
"I'm closing this issue then. Please, feel free to reopen it again if the problem persists."
] | 1,654,187,756,000 | 1,654,245,857,000 | 1,654,245,856,000 | NONE | null | ## Describe the bug
I get the message from HuggingFace that the dataset must be downloaded manually. From the URL provided in the message, I got to the UPenn page for manual download. (UPenn apparently wants $250 for the dataset??) ...So, OK, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: lower-case names like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC files provided in TIMIT:
## Steps to reproduce the bug
```python
data = load_dataset('timit_asr', 'clean')['train']
```
## Expected results
The dataset should load with no errors.
## Actual results
This error message:
```
File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```
But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT dataset with "TEST" in them have ".DOC" extensions? ...I wonder: how was anyone able to get this to work in the first place?
The files in the dataset look like the following:
```
│ PHONCODE.DOC
│ PROMPTS.TXT
│ SPKRINFO.TXT
│ SPKRSENT.TXT
│ TESTSET.DOC
```
...so why are these being excluded by the dataset loader?
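For anyone hitting this, the comments above point to fixes on the `datasets` master branch; with those installed, the manual copy should be passed in via `data_dir`, roughly like this (a sketch, using the path from this report):
```python
from datasets import load_dataset

# assumes a datasets version that includes the TIMIT casing fixes (see the
# PRs referenced in the comments) and a local manual copy of the corpus
data = load_dataset("timit_asr", data_dir="/home/ubuntu/datasets/timit")["train"]
```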
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27
- Python version: 3.9.9
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4439/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4439/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4438/comments | https://api.github.com/repos/huggingface/datasets/issues/4438/events | https://github.com/huggingface/datasets/pull/4438 | 1,258,255,394 | PR_kwDODunzps44-vhC | 4,438 | Fix docstring of inspect_dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,179,670,000 | 1,654,188,055,000 | 1,654,187,547,000 | MEMBER | null | As pointed out by @sgugger:
- huggingface/doc-builder/issues/235 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4438/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4438/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4438",
"html_url": "https://github.com/huggingface/datasets/pull/4438",
"diff_url": "https://github.com/huggingface/datasets/pull/4438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4438.patch",
"merged_at": 1654187547000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4437/comments | https://api.github.com/repos/huggingface/datasets/issues/4437/events | https://github.com/huggingface/datasets/pull/4437 | 1,258,249,582 | PR_kwDODunzps44-uRW | 4,437 | Add missing columns to `blended_skill_talk` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4437). All of your documentation changes will be reflected on that endpoint."
] | 1,654,179,386,000 | 1,654,179,695,000 | null | CONTRIBUTOR | null | Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py).
Fix #4426 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4437/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4437/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4437",
"html_url": "https://github.com/huggingface/datasets/pull/4437",
"diff_url": "https://github.com/huggingface/datasets/pull/4437.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4437.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4436/comments | https://api.github.com/repos/huggingface/datasets/issues/4436/events | https://github.com/huggingface/datasets/pull/4436 | 1,257,758,834 | PR_kwDODunzps449FsU | 4,436 | Fix directory names for LDC data in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,152,304,000 | 1,654,162,376,000 | 1,654,161,867,000 | MEMBER | null | Related to:
- #4422 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4436/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4436/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4436",
"html_url": "https://github.com/huggingface/datasets/pull/4436",
"diff_url": "https://github.com/huggingface/datasets/pull/4436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4436.patch",
"merged_at": 1654161867000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4435 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4435/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4435/comments | https://api.github.com/repos/huggingface/datasets/issues/4435/events | https://github.com/huggingface/datasets/issues/4435 | 1,257,496,552 | I_kwDODunzps5K89_o | 4,435 | Load a local cached dataset that has been modified | {
"login": "mihail911",
"id": 2789441,
"node_id": "MDQ6VXNlcjI3ODk0NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2789441?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mihail911",
"html_url": "https://github.com/mihail911",
"followers_url": "https://api.github.com/users/mihail911/followers",
"following_url": "https://api.github.com/users/mihail911/following{/other_user}",
"gists_url": "https://api.github.com/users/mihail911/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mihail911/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mihail911/subscriptions",
"organizations_url": "https://api.github.com/users/mihail911/orgs",
"repos_url": "https://api.github.com/users/mihail911/repos",
"events_url": "https://api.github.com/users/mihail911/events{/privacy}",
"received_events_url": "https://api.github.com/users/mihail911/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi! `datasets` caches every modification/loading, so you can either rerun the pipeline up to the `map` call or use `Dataset.from_file(modified_dataset)` to load the dataset directly from the cache file.",
"Awesome, hvala Mario! This works. "
] | 1,654,134,709,000 | 1,654,214,366,000 | 1,654,214,358,000 | NONE | null | ## Describe the bug
I have loaded a dataset as follows:
```
d = load_dataset("emotion", split="validation")
```
Afterwards I make some modifications to the dataset via a `map` call:
```
d.map(some_update_func, cache_file_name="/path/to/cache/modified_dataset")
```
This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns:
```
modified_dataset
dataset_info.json
emotion-test.arrow
emotion-train.arrow
emotion-validation.arrow
```
as expected. However, when I try to load up the modified cached dataset via a call to
```
modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset")
```
it simply redownloads a new version of the dataset and dumps it to a new cache rather than loading the original modified dataset:
```
Using custom data configuration validation-cdbf51685638421b
Downloading and preparing dataset emotion/validation to ...
```
How am I supposed to load the original modified local cache copy of the dataset?
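Per the comments above, the Arrow cache file written by `map` can be loaded directly; a minimal sketch, assuming `/path/to/cache/modified_dataset` is the cache file produced earlier:
```python
from datasets import Dataset

# load the processed split straight from the Arrow cache file written by `map`
modified = Dataset.from_file("/path/to/cache/modified_dataset")
```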
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4435/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4435/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4434 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4434/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4434/comments | https://api.github.com/repos/huggingface/datasets/issues/4434/events | https://github.com/huggingface/datasets/pull/4434 | 1,256,207,321 | PR_kwDODunzps443mAr | 4,434 | Fix dummy dataset generation script for handling nested types of _URLs | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,654,095,195,000 | 1,654,095,195,000 | null | CONTRIBUTOR | null | It seems that when user specify nested _URLs structures in their dataset script. An error will be raised when generating dummy dataset.
I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types.
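To make the intended check concrete, here is a rough, self-contained sketch of the guard (a hypothetical helper; the real change touches `mock_download_manager.py`):
```python
def has_only_string_duplicates(dummy_data_dict: dict) -> bool:
    # only apply the set()-based duplicate check when every value is a
    # hashable string; nested _URLS yield lists, which set() cannot hash
    values = list(dummy_data_dict.values())
    return all(isinstance(v, str) for v in values) and len(set(values)) < len(values)

print(has_only_string_duplicates({"train": "a.gz", "dev": "a.gz"}))            # True
print(has_only_string_duplicates({"train": ["a.gz", "b.gz"], "dev": "c.gz"}))  # False, no crash
```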
Linked to issue #4428
PS: I am not sure whether my code fixes this issue in a proper way. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4434/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4434/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4434",
"html_url": "https://github.com/huggingface/datasets/pull/4434",
"diff_url": "https://github.com/huggingface/datasets/pull/4434.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4434.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4433/comments | https://api.github.com/repos/huggingface/datasets/issues/4433/events | https://github.com/huggingface/datasets/pull/4433 | 1,255,830,758 | PR_kwDODunzps442P5L | 4,433 | Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4433). All of your documentation changes will be reflected on that endpoint."
] | 1,654,085,396,000 | 1,654,085,719,000 | null | CONTRIBUTOR | null | Fix #4348 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4433/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4433/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4433",
"html_url": "https://github.com/huggingface/datasets/pull/4433",
"diff_url": "https://github.com/huggingface/datasets/pull/4433.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4433.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4432/comments | https://api.github.com/repos/huggingface/datasets/issues/4432/events | https://github.com/huggingface/datasets/pull/4432 | 1,255,523,720 | PR_kwDODunzps441JmK | 4,432 | Fix builder docstring | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,076,730,000 | 1,654,191,827,000 | 1,654,191,315,000 | MEMBER | null | Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4432/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4432/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4432",
"html_url": "https://github.com/huggingface/datasets/pull/4432",
"diff_url": "https://github.com/huggingface/datasets/pull/4432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4432.patch",
"merged_at": 1654191315000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4431/comments | https://api.github.com/repos/huggingface/datasets/issues/4431/events | https://github.com/huggingface/datasets/pull/4431 | 1,254,618,948 | PR_kwDODunzps44x5aG | 4,431 | Add personaldialog datasets | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"These test errors are related to issue #4428 \r\n",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4431). All of your documentation changes will be reflected on that endpoint.",
"I only made a trivial modification in my commit https://github.com/huggingface/datasets/pull/4431/commits/402c893d35224d7828176717233909ac5f1e7b3e\r\n\r\nI have submitted a PR #4434 for the about issue."
] | 1,654,046,440,000 | 1,654,097,300,000 | null | CONTRIBUTOR | null | It seems that all tests have passed | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4431/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4431/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4431",
"html_url": "https://github.com/huggingface/datasets/pull/4431",
"diff_url": "https://github.com/huggingface/datasets/pull/4431.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4431.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4430 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4430/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4430/comments | https://api.github.com/repos/huggingface/datasets/issues/4430/events | https://github.com/huggingface/datasets/issues/4430 | 1,254,412,591 | I_kwDODunzps5KxNEv | 4,430 | Add ability to load newer, cleaner version of Multi-News | {
"login": "JohnGiorgi",
"id": 8917831,
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JohnGiorgi",
"html_url": "https://github.com/JohnGiorgi",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Our versioning is based on Git revisions (the `revision` param in `load_dataset`), so you can just replace the old URL with the new one and open a PR :). I can also give you some pointers if needed.",
"@mariosasko Awesome thanks! I will do that. Looks like this new version of the data is not available as a zip but as three files (train/dev/test). How is this usually handled in HF Datasets, should `_URL` be a dict with keys `train`, `val`, `test` perhaps?"
] | 1,654,030,844,000 | 1,654,192,931,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq).
Unfortunately, I don't think you can just replace this old URL with the new one, as this could lead to issues with reproducibility.
**Describe the solution you'd like**
Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues.
**Describe alternatives you've considered**
Replace the current URL for the original version of the dataset with the URL for the version with fixes.
**Additional context**
Would be happy to make a PR for this, could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
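For illustration, one common pattern is a versioned builder config plus a per-split `_URLS` dict; a rough sketch with placeholder URLs and a placeholder feature schema (not the actual Multi-News fix):
```python
import datasets

# placeholder URLs; the real ones would point at the original and fixed releases
_URLS = {
    "1.0.0": {"train": "https://example.com/v1/train.jsonl", "test": "https://example.com/v1/test.jsonl"},
    "1.1.0": {"train": "https://example.com/v1.1/train.jsonl", "test": "https://example.com/v1.1/test.jsonl"},
}

class MultiNewsSketch(datasets.GeneratorBasedBuilder):
    # one config per dataset version; users pick it via load_dataset(..., "1.1.0")
    BUILDER_CONFIGS = [
        datasets.BuilderConfig(name=name, version=datasets.Version(name)) for name in _URLS
    ]

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        # download the files for the selected version only
        files = dl_manager.download_and_extract(_URLS[self.config.name])
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"path": files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"path": files["test"]}),
        ]

    def _generate_examples(self, path):
        with open(path, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```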
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4430/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4430/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4429 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4429/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4429/comments | https://api.github.com/repos/huggingface/datasets/issues/4429/events | https://github.com/huggingface/datasets/pull/4429 | 1,254,184,358 | PR_kwDODunzps44whxN | 4,429 | Update builder docstring for deprecated/added arguments | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4429). All of your documentation changes will be reflected on that endpoint."
] | 1,654,018,645,000 | 1,654,092,911,000 | null | MEMBER | null | This PR updates the builder docstring with deprecated/added directives for arguments name/config_name.
Follow up of:
- #4414
- huggingface/doc-builder#233
First merge:
- #4432 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4429/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4429/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4429",
"html_url": "https://github.com/huggingface/datasets/pull/4429",
"diff_url": "https://github.com/huggingface/datasets/pull/4429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4429.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4428/comments | https://api.github.com/repos/huggingface/datasets/issues/4428/events | https://github.com/huggingface/datasets/issues/4428 | 1,254,092,818 | I_kwDODunzps5Kv_AS | 4,428 | Errors when building dummy data if you use nested _URLS | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [] | 1,654,013,457,000 | 1,654,013,457,000 | null | CONTRIBUTOR | null | ## Describe the bug
When making dummy data with the `datasets-cli dummy_data` tool,
an error will be raised if you use a nested `_URLS` structure in your dataset script.
```
Traceback (most recent call last):
  File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module>
    main()
  File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
    service.run()
  File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run
    self._autogenerate_dummy_data(
  File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data
    dataset_builder._split_generators(dl_manager)
  File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators
    data_dir = dl_manager.download_and_extract(urls)
  File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract
    dummy_output = self.mock_download_manager.download(url_or_urls)
  File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download
    return self.download_and_extract(data_url)
  File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract
    return self.create_dummy_data_dict(dummy_file, data_url)
  File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict
    if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
TypeError: unhashable type: 'list'
```
## Steps to reproduce the bug
You can use my dataset script implemented here:
https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py
```bash
datasets-cli dummy_data datasets/personal_dialog --auto_generate
```
You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54
to
```
"train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz"
```
before running the above script to avoid downloading the large training data.
## Expected results
The dummy data should be generated
## Actual results
An error is raised.
It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165
we only check whether the first item of `dummy_data_dict.values()` is a `str`.
However, `dummy_data_dict.values()` may contain mixed types, e.g. `[str, list, list]`.
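A two-line repro of that failure mode, independent of `datasets` (hypothetical values):
```python
dummy_data_dict = {"train": ["a.jsonl.gz", "b.jsonl.gz"], "dev": "c.jsonl.gz"}
set(dummy_data_dict.values())  # raises TypeError: unhashable type: 'list'
```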
A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to
```python
if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
```
But I don't know whether this kind of change may bring any side effects, since I am not sure about the detailed logic here.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.10
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4428/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4427 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4427/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4427/comments | https://api.github.com/repos/huggingface/datasets/issues/4427/events | https://github.com/huggingface/datasets/pull/4427 | 1,253,959,313 | PR_kwDODunzps44vyGg | 4,427 | Add HF.co for PRs/Issues for specific datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,654,007,481,000 | 1,654,087,062,000 | 1,654,086,542,000 | MEMBER | null | As in https://github.com/huggingface/transformers/pull/17485, issues and PRs for datasets under a namespace have to be opened on the HF Hub | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4427/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4427/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4427",
"html_url": "https://github.com/huggingface/datasets/pull/4427",
"diff_url": "https://github.com/huggingface/datasets/pull/4427.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4427.patch",
"merged_at": 1654086542000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4426/comments | https://api.github.com/repos/huggingface/datasets/issues/4426/events | https://github.com/huggingface/datasets/issues/4426 | 1,253,887,311 | I_kwDODunzps5KvM1P | 4,426 | Add loading variable number of columns for different splits | {
"login": "DrMatters",
"id": 22641583,
"node_id": "MDQ6VXNlcjIyNjQxNTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DrMatters",
"html_url": "https://github.com/DrMatters",
"followers_url": "https://api.github.com/users/DrMatters/followers",
"following_url": "https://api.github.com/users/DrMatters/following{/other_user}",
"gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions",
"organizations_url": "https://api.github.com/users/DrMatters/orgs",
"repos_url": "https://api.github.com/users/DrMatters/repos",
"events_url": "https://api.github.com/users/DrMatters/events{/privacy}",
"received_events_url": "https://api.github.com/users/DrMatters/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! Indeed the column is missing, but you shouldn't get an error? Have you made some modifications (locally) to the loading script? I've opened a PR to add the missing columns to the script. "
] | 1,654,004,416,000 | 1,654,180,135,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
The original `blended_skill_talk` dataset has different sets of columns for its splits: the test/valid splits have an additional data column, `label_candidates`, that the train split doesn't have.
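A minimal sketch of the mismatch (the error itself may depend on local modifications, per the comment above; this assumes network access to the Hub):
```python
# Sketch: the train split lacks `label_candidates`, so casting every split
# to a single schema can fail.
from datasets import load_dataset

dataset = load_dataset("blended_skill_talk")
print(dataset["train"].column_names)       # no "label_candidates" here
print(dataset["validation"].column_names)  # includes "label_candidates"
```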
When loading such data, an exception occurs at `table.py:cast_table_to_schema` because of the mismatched columns. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4426/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4426/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4425/comments | https://api.github.com/repos/huggingface/datasets/issues/4425/events | https://github.com/huggingface/datasets/pull/4425 | 1,253,641,604 | PR_kwDODunzps44uuDq | 4,425 | Make extensions case-insensitive in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,991,804,000 | 1,654,092,930,000 | 1,654,092,411,000 | MEMBER | null | Related to #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4425/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4425/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4425",
"html_url": "https://github.com/huggingface/datasets/pull/4425",
"diff_url": "https://github.com/huggingface/datasets/pull/4425.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4425.patch",
"merged_at": 1654092411000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4424/comments | https://api.github.com/repos/huggingface/datasets/issues/4424/events | https://github.com/huggingface/datasets/pull/4424 | 1,253,542,488 | PR_kwDODunzps44uZBD | 4,424 | Fix DuplicatedKeysError in timit_asr dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,986,865,000 | 1,654,005,050,000 | 1,654,004,551,000 | MEMBER | null | Fix #4422. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4424/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4424/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4424",
"html_url": "https://github.com/huggingface/datasets/pull/4424",
"diff_url": "https://github.com/huggingface/datasets/pull/4424.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4424.patch",
"merged_at": 1654004551000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4423 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4423/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4423/comments | https://api.github.com/repos/huggingface/datasets/issues/4423/events | https://github.com/huggingface/datasets/pull/4423 | 1,253,326,023 | PR_kwDODunzps44trdP | 4,423 | Add new dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,653,972,307,000 | 1,653,972,307,000 | null | CONTRIBUTOR | null | Hi, I am adding a new dataset MMChat.
It seems that all tests have passed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4423/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4423/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4423",
"html_url": "https://github.com/huggingface/datasets/pull/4423",
"diff_url": "https://github.com/huggingface/datasets/pull/4423.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4423.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4422/comments | https://api.github.com/repos/huggingface/datasets/issues/4422/events | https://github.com/huggingface/datasets/issues/4422 | 1,253,146,511 | I_kwDODunzps5KsX-P | 4,422 | Cannot load timit_asr data set | {
"login": "bhaddow",
"id": 992795,
"node_id": "MDQ6VXNlcjk5Mjc5NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/992795?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhaddow",
"html_url": "https://github.com/bhaddow",
"followers_url": "https://api.github.com/users/bhaddow/followers",
"following_url": "https://api.github.com/users/bhaddow/following{/other_user}",
"gists_url": "https://api.github.com/users/bhaddow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhaddow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhaddow/subscriptions",
"organizations_url": "https://api.github.com/users/bhaddow/orgs",
"repos_url": "https://api.github.com/users/bhaddow/repos",
"events_url": "https://api.github.com/users/bhaddow/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhaddow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @bhaddow.\r\n\r\nI'm fixing it.",
"Thanks for the quick fix!",
"@bhaddow we have also made a fix so that you don't have to convert to uppercase the file extensions of the LDC data.\r\n\r\nWould you mind checking if it works OK now for you and reporting if there are any issues? Thanks. ",
"Hi @albertvillanova -It loads fine on a copy of the data from deepai - although I have to remove the copies of the .WAV files (with extension .WAV,wav). On a copy of the data that was obtained from the LDC, the glob still fails to find the files. The LDC copy looks like it was copied from CD, in 2004, so the structure may be different to a current download.",
"Ah, if I change the train/ and test/ directories to TRAIN/ and TEST/ then it works!",
"Thanks for your investigation and report, @bhaddow. I'm adding another fix for the TRAIN/train and TEST/test directory names."
] | 1,653,948,022,000 | 1,654,151,645,000 | 1,654,004,551,000 | NONE | null | ## Describe the bug
I am trying to load the timit_asr dataset. I have tried with a copy from the LDC and a copy from deepai. In both cases loading fails with a "duplicate key" error. With the LDC version I also have to convert all the file extensions to upper-case before I can load it at all.
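A sketch of that extension workaround (the directory layout is assumed; adjust the root path to your copy):
```python
# Uppercase every file extension under the dataset root before loading.
from pathlib import Path

root = Path("/path/to/dataset")
for path in list(root.rglob("*.*")):
    if path.is_file():
        path.rename(path.with_suffix(path.suffix.upper()))
```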
## Steps to reproduce the bug
```python
import datasets

# Sample code to reproduce the bug
timit = datasets.load_dataset("timit_asr", data_dir="/path/to/dataset")
```
## Expected results
The dataset should load without error. It worked for me before the LDC URL change.
## Actual results
```
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: SA1
Keys should be unique and deterministic in nature
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- `datasets` version: 2.2.2
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4422/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4422/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4421/comments | https://api.github.com/repos/huggingface/datasets/issues/4421/events | https://github.com/huggingface/datasets/pull/4421 | 1,253,059,467 | PR_kwDODunzps44szxR | 4,421 | Add extractor for bzip2-compressed files | {
"login": "asivokon",
"id": 2910707,
"node_id": "MDQ6VXNlcjI5MTA3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/2910707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asivokon",
"html_url": "https://github.com/asivokon",
"followers_url": "https://api.github.com/users/asivokon/followers",
"following_url": "https://api.github.com/users/asivokon/following{/other_user}",
"gists_url": "https://api.github.com/users/asivokon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asivokon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asivokon/subscriptions",
"organizations_url": "https://api.github.com/users/asivokon/orgs",
"repos_url": "https://api.github.com/users/asivokon/repos",
"events_url": "https://api.github.com/users/asivokon/events{/privacy}",
"received_events_url": "https://api.github.com/users/asivokon/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [] | 1,653,938,380,000 | 1,653,938,380,000 | null | NONE | null | This change enables loading bzipped datasets, just like any other compressed dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4421/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4421/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4421",
"html_url": "https://github.com/huggingface/datasets/pull/4421",
"diff_url": "https://github.com/huggingface/datasets/pull/4421.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4421.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4420 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4420/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4420/comments | https://api.github.com/repos/huggingface/datasets/issues/4420/events | https://github.com/huggingface/datasets/issues/4420 | 1,252,739,239 | I_kwDODunzps5Kq0in | 4,420 | Metric evaluation problems in multi-node, shared file system | {
"login": "gullabi",
"id": 40303490,
"node_id": "MDQ6VXNlcjQwMzAzNDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/40303490?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gullabi",
"html_url": "https://github.com/gullabi",
"followers_url": "https://api.github.com/users/gullabi/followers",
"following_url": "https://api.github.com/users/gullabi/following{/other_user}",
"gists_url": "https://api.github.com/users/gullabi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gullabi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gullabi/subscriptions",
"organizations_url": "https://api.github.com/users/gullabi/orgs",
"repos_url": "https://api.github.com/users/gullabi/repos",
"events_url": "https://api.github.com/users/gullabi/events{/privacy}",
"received_events_url": "https://api.github.com/users/gullabi/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"If you call `metric.compute` in a distributed setup like yours, then `metric.compute` is called in each process. `metric.compute` first calls `metric.add_batch`, and it looks like your error appears at that stage.\r\n\r\nTo make sure that all the processes have started writing their predictions/references at the same time, each process waits for process 0 to lock `slurm-{world_size}-0.arrow.lock`. Process 0 locks this file when `metric.add_batch` is called, so here when `metric.compute` is called.\r\n\r\nTherefore your error can happen when process 0 takes too much time to call `metric.compute` compared to process 3 (>100 seconds by default). I haven't tried running your code but could it be the case ?\r\n\r\nI guess it could also happen if you run multiple times the same distributed job at the same time with the same `experiment_id` because they would collide.\r\n"
] | 1,653,917,045,000 | 1,653,922,893,000 | null | NONE | null | ## Describe the bug
Metric evaluation fails in a multi-node setup with a shared file system, because the master process cannot find the lock files from the other nodes. (This issue was originally mentioned in the transformers repo https://github.com/huggingface/transformers/issues/17412)
## Steps to reproduce the bug
1. clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py).
2. Setup the `venv` according to the requirements of the model file plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`
3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71)
Specifically, for the distributed setup, `load_metric` is called as:
```
process_id = int(os.environ["RANK"])
num_process = int(os.environ["WORLD_SIZE"])
eval_metrics = {metric: load_metric(metric,
                                    process_id=process_id,
                                    num_process=num_process,
                                    experiment_id="slurm")
                for metric in data_args.eval_metrics}
```
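A possible mitigation sketch (not from the report): assuming the extra keyword argument is forwarded to the metric's distributed setup, one could raise the rendez-vous `timeout` above the 100-second default mentioned in the comment above:
```python
# Sketch: give slower processes more time to reach the rendez-vous.
eval_metrics = {metric: load_metric(metric,
                                    process_id=process_id,
                                    num_process=num_process,
                                    experiment_id="slurm",
                                    timeout=600)  # assumed kwarg; default is 100 seconds
                for metric in data_args.eval_metrics}
```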
## Expected results
The training should not fail, due to the failure of the `Metric.compute()` step.
## Actual results
For the test I am executing, the world size is 4, with 2 GPUs on each of 2 nodes. However, the process is not finding the necessary lock files:
```
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module>
main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate
metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate
metric_key_prefix=metric_key_prefix,
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop
metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp>
metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute
self.add_batch(**inputs)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch
self._init_writer()
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer
self._check_rendez_vous() # wait for master to be ready and to let everyone go
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous
) from None
ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist.
```
When I look at the cache directory, I can see all the lock files in principle:
```
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock
```
I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have been resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is a problem with how I am calling `load_metric` or whether I need to make changes to the `.compute()` steps.
## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
- Python version: 3.7.4
- PyArrow version: 7.0.0
- Pandas version: 1.3.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4420/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4420/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4419/comments | https://api.github.com/repos/huggingface/datasets/issues/4419/events | https://github.com/huggingface/datasets/issues/4419 | 1,252,652,896 | I_kwDODunzps5Kqfdg | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.",
"Hi @mariosasko, right! I'll update the issue title/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual"
] | 1,653,912,798,000 | 1,654,069,069,000 | null | CONTRIBUTOR | null | **Is your feature request related to a problem? Please describe.**
This is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` on tuples rather than `assertEqual`? `unittest` added that function in Python 3.1, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
An example of `assertEqual` over a tuple can be found in the 🤗 `datasets` unit tests for an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Slowly start replacing the `assertEqual` statements with `assertTupleEqual` when the assertion is done over a Python tuple, as is already done for Python lists with `assertListEqual` rather than `assertEqual`.
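For illustration, a self-contained sketch (the shape value is made up, modeled on the linked test):
```python
import unittest


class ShapeTest(unittest.TestCase):
    def test_shape(self):
        shape = (10, 3)  # e.g. an ArrowDataset's .shape (value assumed)
        self.assertEqual(shape, (10, 3))       # current style
        self.assertTupleEqual(shape, (10, 3))  # proposed, tuple-specific


if __name__ == "__main__":
    unittest.main()
```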
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable; otherwise, if you consider this should stay as `assertEqual` rather than `assertTupleEqual`, feel free to close this issue! Thanks 🤗
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4419/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4418/comments | https://api.github.com/repos/huggingface/datasets/issues/4418/events | https://github.com/huggingface/datasets/pull/4418 | 1,252,506,268 | PR_kwDODunzps44q9pG | 4,418 | Add dataset MMChat | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,905,440,000 | 1,653,922,698,000 | 1,653,922,698,000 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4418/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4418/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4418",
"html_url": "https://github.com/huggingface/datasets/pull/4418",
"diff_url": "https://github.com/huggingface/datasets/pull/4418.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4418.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4417 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4417/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4417/comments | https://api.github.com/repos/huggingface/datasets/issues/4417/events | https://github.com/huggingface/datasets/issues/4417 | 1,251,933,091 | I_kwDODunzps5Knvuj | 4,417 | how to convert a dict generator into a huggingface dataset. | {
"login": "StephennFernandes",
"id": 32235549,
"node_id": "MDQ6VXNlcjMyMjM1NTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/32235549?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/StephennFernandes",
"html_url": "https://github.com/StephennFernandes",
"followers_url": "https://api.github.com/users/StephennFernandes/followers",
"following_url": "https://api.github.com/users/StephennFernandes/following{/other_user}",
"gists_url": "https://api.github.com/users/StephennFernandes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/StephennFernandes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/StephennFernandes/subscriptions",
"organizations_url": "https://api.github.com/users/StephennFernandes/orgs",
"repos_url": "https://api.github.com/users/StephennFernandes/repos",
"events_url": "https://api.github.com/users/StephennFernandes/events{/privacy}",
"received_events_url": "https://api.github.com/users/StephennFernandes/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | open | false | null | [] | null | [
"@albertvillanova @lhoestq , could you please help me on this issue. ",
"Hi ! As mentioned on the [forum](https://discuss.huggingface.co/t/how-to-wrap-a-generator-with-hf-dataset/18464), the simplest for now would be to define a [dataset script](https://huggingface.co/docs/datasets/dataset_script) which can contain your generator. But we can also explore adding something like `ds = Dataset.from_iterable(seqio_dataset)`"
] | 1,653,841,707,000 | 1,653,919,239,000 | null | NONE | null | ### Link
_No response_
### Description
Hey there, I have used seqio to get a well-distributed mixture of samples from multiple datasets. However, the resultant output from seqio is a Python generator of dicts, which I cannot turn back into a huggingface dataset.
The generator contains all the samples needed for training the model, but I cannot convert it into a huggingface dataset.
The code looks like this:
```
for ex in seqio_data:
    print(ex["text"])
```
I need to convert the seqio_data (generator) into a huggingface dataset.
The complete seqio code goes here:
```
import functools

import seqio
import tensorflow as tf
import t5.data
from datasets import load_dataset
from t5.data import postprocessors
from t5.data import preprocessors
from t5.evaluation import metrics
from seqio import FunctionDataSource, utils

TaskRegistry = seqio.TaskRegistry


def gen_dataset(split, shuffle=False, seed=None, column="text", dataset_params=None):
    dataset = load_dataset(**dataset_params)
    if shuffle:
        if seed:
            dataset = dataset.shuffle(seed=seed)
        else:
            dataset = dataset.shuffle()
    while True:
        for item in dataset[str(split)]:
            yield item[column]


def dataset_fn(split, shuffle_files, seed=None, dataset_params=None):
    return tf.data.Dataset.from_generator(
        functools.partial(gen_dataset, split, shuffle_files, seed, dataset_params=dataset_params),
        output_signature=tf.TensorSpec(shape=(), dtype=tf.string, name=dataset_name),
    )


@utils.map_over_dataset
def target_to_key(x, key_map, target_key):
    """Assign the value from the dataset to target_key in key_map"""
    return {**key_map, target_key: x}


dataset_name = 'oscar-corpus/OSCAR-2109'
subset = 'mr'
dataset_params = {"path": dataset_name, "language": subset, "use_auth_token": True}
dataset_shapes = None

TaskRegistry.add(
    "oscar_marathi_corpus",
    source=seqio.FunctionDataSource(
        dataset_fn=functools.partial(dataset_fn, dataset_params=dataset_params),
        splits=("train", "validation"),
        caching_permitted=False,
        num_input_examples=dataset_shapes,
    ),
    preprocessors=[
        functools.partial(
            target_to_key,
            key_map={"targets": None},
            target_key="targets",
        )
    ],
    output_features={"targets": seqio.Feature(vocabulary=seqio.PassThroughVocabulary, add_eos=False, dtype=tf.string, rank=0)},
    metric_fns=[],
)

dataset = seqio.get_mixture_or_task("oscar_marathi_corpus").get_dataset(
    sequence_length=None,
    split="train",
    shuffle=True,
    num_epochs=1,
    shard_info=seqio.ShardInfo(index=0, num_shards=10),
    use_cached=False,
    seed=42,
)

for _, ex in zip(range(5), dataset):
    print(ex['targets'].numpy().decode())
```
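Not part of the original question, just a sketch in the spirit of the `Dataset.from_iterable` idea floated in the comments: assuming a recent `datasets` release where `Dataset.from_generator` exists, and that `dataset` is the seqio dataset built above, one could materialize it like this:
```python
# Sketch only: wrap the (infinite) seqio stream into a Hugging Face dataset.
from datasets import Dataset


def hf_examples():
    for _, ex in zip(range(100_000), dataset):  # bound the infinite stream (size assumed)
        yield {"text": ex["targets"].numpy().decode()}


hf_dataset = Dataset.from_generator(hf_examples)
```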
### Owner
_No response_ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4417/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4417/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4416/comments | https://api.github.com/repos/huggingface/datasets/issues/4416/events | https://github.com/huggingface/datasets/pull/4416 | 1,251,875,763 | PR_kwDODunzps44o7sF | 4,416 | Add LCCC dataset | {
"login": "silverriver",
"id": 2529049,
"node_id": "MDQ6VXNlcjI1MjkwNDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/silverriver",
"html_url": "https://github.com/silverriver",
"followers_url": "https://api.github.com/users/silverriver/followers",
"following_url": "https://api.github.com/users/silverriver/following{/other_user}",
"gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}",
"starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/silverriver/subscriptions",
"organizations_url": "https://api.github.com/users/silverriver/orgs",
"repos_url": "https://api.github.com/users/silverriver/repos",
"events_url": "https://api.github.com/users/silverriver/events{/privacy}",
"received_events_url": "https://api.github.com/users/silverriver/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thank you very much for your help @albertvillanova .\r\n\r\nI think I have fixed all the comments.\r\n\r\nPlease let me know if this PR need further modification ;)",
"@albertvillanova Thank you very much for your kind help.\r\nThese suggestions make the code looks more pythonic.\r\n\r\nI have commited these changes"
] | 1,653,827,239,000 | 1,654,161,759,000 | 1,654,161,226,000 | CONTRIBUTOR | null | Hi, I am trying to add a new dataset lccc.
All tests have passed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4416/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4416/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4416",
"html_url": "https://github.com/huggingface/datasets/pull/4416",
"diff_url": "https://github.com/huggingface/datasets/pull/4416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4416.patch",
"merged_at": 1654161226000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4415 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4415/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4415/comments | https://api.github.com/repos/huggingface/datasets/issues/4415/events | https://github.com/huggingface/datasets/pull/4415 | 1,251,002,981 | PR_kwDODunzps44mIJk | 4,415 | Update `dataset_infos.json` with new split info in `Dataset.push_to_hub` to avoid verification error | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4415). All of your documentation changes will be reflected on that endpoint."
] | 1,653,671,022,000 | 1,654,001,099,000 | null | CONTRIBUTOR | null | Update `dataset_infos.json` when pushing splits one by one via `Dataset.push_to_hub` to avoid the splits verification error.
TODO:
~~- [ ] handle token + `{Audio, Image}.embed_storage`~~
- [x] tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4415/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4415/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4415",
"html_url": "https://github.com/huggingface/datasets/pull/4415",
"diff_url": "https://github.com/huggingface/datasets/pull/4415.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4415.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4414 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4414/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4414/comments | https://api.github.com/repos/huggingface/datasets/issues/4414/events | https://github.com/huggingface/datasets/pull/4414 | 1,250,546,888 | PR_kwDODunzps44klhY | 4,414 | Rename DatasetBuilder config_name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,643,682,000 | 1,654,009,641,000 | 1,654,009,131,000 | MEMBER | null | This PR renames the DatasetBuilder keyword argument `name` to `config_name` so that:
- it avoids confusion with the attribute `DatasetBuilder.name`, which is different (see the sketch below)
- it aligns with the Dataset property name `config_name`, defined in `DatasetInfoMixin.config_name`
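A sketch of the ambiguity in current terms ("glue" and "sst2" are just example names):
```python
from datasets import load_dataset_builder

builder = load_dataset_builder("glue", "sst2")
print(builder.name)         # "glue": the builder/dataset name
print(builder.config.name)  # "sst2": the configuration, i.e. the proposed `config_name`
```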
Another, simpler possibility would be to rename it to just `config` instead.
Please note I have only renamed this argument of DatasetBuilder because I think this refactoring has a low impact on users: we can assume this is not a public-facing parameter, but one that is private or related to the internals of our library.
Renaming it would also have a major impact in:
- load_dataset
- load_dataset_builder: although this could also be considered internal...
- in our CLI commands
Besides the naming of `name`, I also find the naming of `path` in `load_dataset` really confusing. IMHO, both should have a simpler and more precise meaning (currently, they are too vague). I would propose (maybe for the next major release):
```
load_dataset(dataset, config,...
```
instead of
```
load_dataset(path, name,...
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4414/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4414/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4414",
"html_url": "https://github.com/huggingface/datasets/pull/4414",
"diff_url": "https://github.com/huggingface/datasets/pull/4414.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4414.patch",
"merged_at": 1654009131000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4413/comments | https://api.github.com/repos/huggingface/datasets/issues/4413/events | https://github.com/huggingface/datasets/issues/4413 | 1,250,259,822 | I_kwDODunzps5KhXNu | 4,413 | Dataset Viewer issue for ett | {
"login": "dgcnz",
"id": 24966039,
"node_id": "MDQ6VXNlcjI0OTY2MDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/24966039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dgcnz",
"html_url": "https://github.com/dgcnz",
"followers_url": "https://api.github.com/users/dgcnz/followers",
"following_url": "https://api.github.com/users/dgcnz/following{/other_user}",
"gists_url": "https://api.github.com/users/dgcnz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dgcnz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dgcnz/subscriptions",
"organizations_url": "https://api.github.com/users/dgcnz/orgs",
"repos_url": "https://api.github.com/users/dgcnz/repos",
"events_url": "https://api.github.com/users/dgcnz/events{/privacy}",
"received_events_url": "https://api.github.com/users/dgcnz/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting @dgcnz.\r\n\r\nI have checked that the dataset works fine in streaming mode.\r\n\r\nAdditionally, other datasets containing timestamps are properly rendered by the viewer: https://huggingface.co/datasets/blbooks\r\n\r\nI have tried to force the refresh of the preview, but the endpoint is not responsive: Connection timed out\r\n\r\nCC: @severo ",
"I've just resent the refresh of the preview to the new endpoint."
] | 1,653,617,555,000 | 1,654,153,287,000 | null | NONE | null | ### Link
https://huggingface.co/datasets/ett
### Description
Timestamp is not JSON serializable.
```
Status code: 500
Exception: Status500Error
Message: Type is not JSON serializable: Timestamp
```
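For illustration (not part of the report), the standard JSON encoder rejects pandas timestamps in the same way:
```python
import json

import pandas as pd

json.dumps({"date": pd.Timestamp("2016-07-01")})
# TypeError: Object of type Timestamp is not JSON serializable
```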
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4413/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4413/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4412/comments | https://api.github.com/repos/huggingface/datasets/issues/4412/events | https://github.com/huggingface/datasets/pull/4412 | 1,249,490,179 | PR_kwDODunzps44hFvq | 4,412 | Skip hidden files/directories in data files resolution and `iter_files` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,567,028,000 | 1,654,089,175,000 | 1,654,088,656,000 | CONTRIBUTOR | null | Fix #4115 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4412/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4412",
"html_url": "https://github.com/huggingface/datasets/pull/4412",
"diff_url": "https://github.com/huggingface/datasets/pull/4412.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4412.patch",
"merged_at": 1654088656000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4411/comments | https://api.github.com/repos/huggingface/datasets/issues/4411/events | https://github.com/huggingface/datasets/pull/4411 | 1,249,462,390 | PR_kwDODunzps44g_yL | 4,411 | Update `_format_columns` in `remove_columns` | {
"login": "alvarobartt",
"id": 36760800,
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alvarobartt",
"html_url": "https://github.com/alvarobartt",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"🤗 This PR closes https://github.com/huggingface/datasets/issues/4398",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4411). All of your documentation changes will be reflected on that endpoint.",
"Hi! Thanks for reporting and providing a fix. I made a small change to make the fix easier to understand.",
"Hi, @mariosasko thanks! It makes sense, sorry I'm not that familiar with `datasets` code 😩 ",
"Sure @albertvillanova I'll do that later today and ping you once done, thanks! :hugs:",
"Hi again @albertvillanova! Let me know if those tests are fine 🤗 ",
"Hi @alvarobartt,\r\n\r\nI think your tests are failing. I don't know why previously, after your last commit, the CI tests were not triggered. \r\n\r\nIn order to force the re-running of the CI tests, I had to edit your file using the GitHub UI.\r\n\r\nFirst I tried to do it using my terminal, but I don't have push right to your PR branch: I would ask you next time you open a PR, please mark the checkbox \"Allow edits from maintainers\": https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/allowing-changes-to-a-pull-request-branch-created-from-a-fork#enabling-repository-maintainer-permissions-on-existing-pull-requests",
"Hi @albertvillanova, let me check those again! And regarding that checkbox I thought it was already checked so my bad there 😩 ",
"@albertvillanova again it seems that the tests were not automatically triggered, but I tested those locally and now they work, as previously those were failing as I created an assertion as `self.assertEqual` over an empty list that was compared as `None` while the value was `[]` so I updated it to be `self.assertListEqual` and changed the comparison value to `[]`.",
"@lhoestq any idea why the CI is not triggered?",
"@alvarobartt I have tested locally and the tests continue failing.\r\n\r\nI think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n",
"You're right @albertvillanova I was indeed running the tests with `datasets==2.2.0` rather than with the branch version, I'll check it again! Sorry for the inconvenience...",
"> @alvarobartt I have tested locally and the tests continue failing.\r\n> \r\n> I think there is a basis error: `new_dset._format_columns` is always `None` in those cases.\r\n\r\nIn order to have some regressions tests for the fixed scenario, I've manually updated the value of `_format_columns` in the `ArrowDataset` so as to check whether it's updated or not right after calling `remove_columns`, and it does behave as expected, so with the latest version of this branch the reported issue doesn't occur anymore."
] | 1,653,565,206,000 | 1,654,173,226,000 | null | CONTRIBUTOR | null | As explained at #4398, when calling `dataset.add_faiss_index` under certain conditions when calling a sequence of operations `cast_column`, `map`, and `remove_columns`, it fails as it's trying to look for already removed columns.
After testing some possible fixes, setting the dataset format right after removing the columns seems to work fine, so I added a call to `.set_format` in the `remove_columns` function.
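For readers following along, a minimal sketch of the behaviour the fix targets (hypothetical toy columns; an illustration, not the PR's test code):
```python
from datasets import Dataset

# Hypothetical toy dataset to illustrate the reported sequence of operations.
ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds = ds.with_format("numpy", columns=["text", "label"])

# Before the fix, the internal `_format_columns` still listed the removed
# column, so later operations (e.g. `add_faiss_index`) looked up a column
# that no longer existed.
ds = ds.remove_columns("label")
print(ds.format["columns"])  # after the fix: only ["text"]
```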
Hope this helps! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4411/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4411/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4411",
"html_url": "https://github.com/huggingface/datasets/pull/4411",
"diff_url": "https://github.com/huggingface/datasets/pull/4411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4411.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4410/comments | https://api.github.com/repos/huggingface/datasets/issues/4410/events | https://github.com/huggingface/datasets/pull/4410 | 1,249,148,457 | PR_kwDODunzps44f_Td | 4,410 | Remove Google Drive URL in spider dataset | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,545,855,000 | 1,653,547,722,000 | 1,653,547,212,000 | MEMBER | null | The `spider` dataset is distributed under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode) license.
Fix #4401. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4410/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4410",
"html_url": "https://github.com/huggingface/datasets/pull/4410",
"diff_url": "https://github.com/huggingface/datasets/pull/4410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4410.patch",
"merged_at": 1653547212000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4409/comments | https://api.github.com/repos/huggingface/datasets/issues/4409/events | https://github.com/huggingface/datasets/pull/4409 | 1,249,083,179 | PR_kwDODunzps44fxiH | 4,409 | Update: add using pcm bytes (#4323) | {
"login": "YooSungHyun",
"id": 34292279,
"node_id": "MDQ6VXNlcjM0MjkyMjc5",
"avatar_url": "https://avatars.githubusercontent.com/u/34292279?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YooSungHyun",
"html_url": "https://github.com/YooSungHyun",
"followers_url": "https://api.github.com/users/YooSungHyun/followers",
"following_url": "https://api.github.com/users/YooSungHyun/following{/other_user}",
"gists_url": "https://api.github.com/users/YooSungHyun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/YooSungHyun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/YooSungHyun/subscriptions",
"organizations_url": "https://api.github.com/users/YooSungHyun/orgs",
"repos_url": "https://api.github.com/users/YooSungHyun/repos",
"events_url": "https://api.github.com/users/YooSungHyun/events{/privacy}",
"received_events_url": "https://api.github.com/users/YooSungHyun/received_events",
"type": "User",
"site_admin": false
} | [] | open | false | null | [] | null | [
"@lhoestq Maybe I'm missing something, but what's the reason to read and encode PCM files to WAV in `Audio.encode_example`. Isn't the whole purpose of the decodable types to operate on raw files whenever possible? IMO this PR should only modify `Audio.decode_example` to support PCM files/bytes decoding.",
"Because the PCM file is not enough, we also need the `sampling_rate` associated to it. Therefore the two alternatives are either:\r\n- convert to WAV\r\n- add a `sampling_rate` field to the Audio arrow storage (not sure how it would behave for backward compatibility though)",
"But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.",
"How does it get the sampling rate of a PCM file then ? According to [SO](https://stackoverflow.com/a/57027667/17517845) it's not possible to infer it from the file alone",
"> Awesome thanks ! Could you also add tests in `tests/features/test_audio.py` ?\r\n> \r\n> Maybe add a small pcm file in `tests/features/data` and check that everything works as expected in tests cases like `test_audio_encode_example_pcm` and `test_audio_decode_example_pcm` for example.\r\n\r\n@lhoestq how can i test test_audio.py? where is \"__main__\" func?\r\ndo you have some example or guideline?",
"> But [`scipy.io.wavfile.read`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.io.wavfile.read.html), which is used for reading such files, returns a file's sampling rate. The only tricky part is [resampling](https://stackoverflow.com/questions/33682490/how-to-read-a-wav-file-using-scipy-at-a-different-sampling-rate) to a different sampling rate than the default one.\r\n\r\n@mariosasko @lhoestq \r\nthanks for comment!\r\n\r\nFirst of all, \"PCM file\" can not read alone to any audio library.\r\n\"PCM file\" has not any audio information header. (it just purely audio byte data. therefore, we don't have to encoding and decoding)\r\nbut, \"PCM file\" is audio extension, so we can use `datasets.Audio`\r\n\r\nif you want to read \"PCM file\" to audio file likely, it have to needs additional parameter. (channel, sampling_rate, else....)\r\nbut, in many situation, we only know sampling_rate for PCM\r\n\r\nand, if we want to use `datasets.Audio` for \"PCM file\", we must process encode_example.\r\ntherefore, i have to use sampling_rate for encoding for making wav-style byte. (we only know sampling_rate)\r\n\r\nIn my source code, I don't compare sampling rate(`datasets.Audio's self.sampling_rate` and `read pcm sampling_rate(value[\"sampling_rate\"])`) and checking mono\r\n@mariosasko ! do you want to process resampling and making mono? then i can modify my source\r\n"
] | 1,653,539,196,000 | 1,654,163,001,000 | null | NONE | null | First of all, please look at #4323.
Why can't I use {"path", "array", "sampling_rate"} directly? Because `sf.write(format="wav")` followed by `sf.read(BytesIO)` changes my PCM data values.
I think this is because a WAV file has a header while raw PCM does not.
Also, regarding variable naming: the PCM data is of type `bytes`, so the name "array" does not fit, I think.
So I use the scipy library and numpy (which are huggingface dependencies),
and, referring to what @lhoestq answered:
1. encode -> use the sampling_rate and the PCM bytes to produce WAV-style bytes (`scipy.io.wavfile.write` to bytes)
2. convert the bytes with a fairseq-style PCM audio read, as in [FileAudioDataset](https://github.com/facebookresearch/fairseq/blob/main/fairseq/data/audio/raw_audio_dataset.py)
3. decode -> read with `wavfile.read`
This way my PCM bytes are not corrupted into float data, and other audio types (WAV) remain safe.
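A hedged sketch of the encode step described above (assuming 16-bit mono PCM; the helper name is illustrative, not the PR's actual code):
```python
import io

import numpy as np
from scipy.io import wavfile

def pcm_bytes_to_wav_bytes(pcm_bytes: bytes, sampling_rate: int) -> bytes:
    # Raw PCM carries no header, so the sampling rate must be supplied by
    # the caller; 16-bit little-endian samples are assumed here.
    samples = np.frombuffer(pcm_bytes, dtype=np.int16)
    buffer = io.BytesIO()
    wavfile.write(buffer, sampling_rate, samples)  # prepends a WAV header
    return buffer.getvalue()
```
Decoding the result with `scipy.io.wavfile.read` then recovers both the sampling rate and the unmodified integer samples.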
please check! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4409/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4409",
"html_url": "https://github.com/huggingface/datasets/pull/4409",
"diff_url": "https://github.com/huggingface/datasets/pull/4409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4409.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4408/comments | https://api.github.com/repos/huggingface/datasets/issues/4408/events | https://github.com/huggingface/datasets/pull/4408 | 1,248,687,574 | PR_kwDODunzps44ecNI | 4,408 | Update imagenet gate | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,510,739,000 | 1,653,511,511,000 | 1,653,511,007,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4408/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4408",
"html_url": "https://github.com/huggingface/datasets/pull/4408",
"diff_url": "https://github.com/huggingface/datasets/pull/4408.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4408.patch",
"merged_at": 1653511007000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4407/comments | https://api.github.com/repos/huggingface/datasets/issues/4407/events | https://github.com/huggingface/datasets/issues/4407 | 1,248,671,778 | I_kwDODunzps5KbTgi | 4,407 | Dataset Viewer issue for conll2012_ontonotesv5 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] | open | false | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @jiangwy99.\r\n\r\nI guess this could be addressed only once we fix our issue with irresponsive backend endpoint.\r\n\r\nCC: @severo ",
"I've just sent the forcing of the refresh of the preview to the new endpoint."
] | 1,653,509,913,000 | 1,654,157,761,000 | null | NONE | null | ### Link
https://huggingface.co/datasets/conll2012_ontonotesv5
### Description
Dataset viewer outage.
### Owner
No | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4407/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4406/comments | https://api.github.com/repos/huggingface/datasets/issues/4406/events | https://github.com/huggingface/datasets/pull/4406 | 1,248,626,622 | PR_kwDODunzps44ePLU | 4,406 | Improve language tag for PIAF dataset | {
"login": "lbourdois",
"id": 58078086,
"node_id": "MDQ6VXNlcjU4MDc4MDg2",
"avatar_url": "https://avatars.githubusercontent.com/u/58078086?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lbourdois",
"html_url": "https://github.com/lbourdois",
"followers_url": "https://api.github.com/users/lbourdois/followers",
"following_url": "https://api.github.com/users/lbourdois/following{/other_user}",
"gists_url": "https://api.github.com/users/lbourdois/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lbourdois/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lbourdois/subscriptions",
"organizations_url": "https://api.github.com/users/lbourdois/orgs",
"repos_url": "https://api.github.com/users/lbourdois/repos",
"events_url": "https://api.github.com/users/lbourdois/events{/privacy}",
"received_events_url": "https://api.github.com/users/lbourdois/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,653,507,715,000 | 1,653,663,083,000 | 1,653,663,083,000 | NONE | null | Hi,
As pointed out by @lhoestq in this discussion (https://huggingface.co/datasets/asi/wikitext_fr/discussions/1), it is not yet possible to edit datasets outside of a namespace with the Hub PR feature and that you have to go through GitHub.
This modification should allow better referencing since only the xx language tags are currently taken into account and not the xx-xx. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4406/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4406",
"html_url": "https://github.com/huggingface/datasets/pull/4406",
"diff_url": "https://github.com/huggingface/datasets/pull/4406.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4406.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4405/comments | https://api.github.com/repos/huggingface/datasets/issues/4405/events | https://github.com/huggingface/datasets/issues/4405 | 1,248,574,087 | I_kwDODunzps5Ka7qH | 4,405 | [TypeError: Couldn't cast array of type] Cannot process dataset in v2.2.2 | {
"login": "jiangwy99",
"id": 39762734,
"node_id": "MDQ6VXNlcjM5NzYyNzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jiangwy99",
"html_url": "https://github.com/jiangwy99",
"followers_url": "https://api.github.com/users/jiangwy99/followers",
"following_url": "https://api.github.com/users/jiangwy99/following{/other_user}",
"gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions",
"organizations_url": "https://api.github.com/users/jiangwy99/orgs",
"repos_url": "https://api.github.com/users/jiangwy99/repos",
"events_url": "https://api.github.com/users/jiangwy99/events{/privacy}",
"received_events_url": "https://api.github.com/users/jiangwy99/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"And if the problem is that the way I am to construct the {Entity Type: list of spans} makes entity types without any spans hard to handle, is there a better way to meet the demand? Although I have verified that to make entity types without any spans to behave like `entity_chunk[label] = [[\"\"]]` can perform normally, I still wonder if there is a more elegant way?"
] | 1,653,505,003,000 | 1,653,506,271,000 | null | NONE | null | ## Describe the bug
I am trying to process the [conll2012_ontonotesv5](https://huggingface.co/datasets/conll2012_ontonotesv5) dataset in `datasets` v2.2.2 and am running into a type error when casting the features.
## Steps to reproduce the bug
```python
import os
from typing import (
List,
Dict,
)
from collections import (
defaultdict,
)
from dataclasses import (
dataclass,
)
from datasets import (
load_dataset,
)
@dataclass
class ConllConverter:
path: str
name: str
cache_dir: str
def __post_init__(
self,
):
self.dataset = load_dataset(
path=self.path,
name=self.name,
cache_dir=self.cache_dir,
)
def convert(
self,
):
class_label = self.dataset["train"].features["sentences"][0]["named_entities"].feature
# label_set = list(set([
# label.split("-")[1] if label != "O" else label for label in class_label.names
# ]))
def prepare_chunk(token, entity):
assert len(token) == len(entity)
# Sequence length
length = len(token)
# Variable used
entity_chunk = defaultdict(list)
idx = flag = 0
# While loop
while idx < length:
if entity[idx] == "O":
flag += 1
idx += 1
else:
iob_tp, lab_tp = entity[idx].split("-")
assert iob_tp == "B"
idx += 1
while idx < length and entity[idx].startswith("I-"):
idx += 1
entity_chunk[lab_tp].append(token[flag: idx])
flag = idx
entity_chunk = dict(entity_chunk)
# for label in label_set:
# if label != "O" and label not in entity_chunk.keys():
# entity_chunk[label] = None
return entity_chunk
def prepare_features(
batch: Dict[str, List],
) -> Dict[str, List]:
sentence = [
sent for doc_sent in batch["sentences"] for sent in doc_sent
]
feature = {
"sentence": list(),
}
for sent in sentence:
token = sent["words"]
entity = class_label.int2str(sent["named_entities"])
entity_chunk = prepare_chunk(token, entity)
sent_feat = {
"token": token,
"entity": entity,
"entity_chunk": entity_chunk,
}
feature["sentence"].append(sent_feat)
return feature
column_names = self.dataset.column_names["train"]
dataset = self.dataset.map(
function=prepare_features,
with_indices=False,
batched=True,
batch_size=3,
remove_columns=column_names,
num_proc=1,
)
dataset.save_to_disk(
dataset_dict_path=os.path.join("data", self.path, self.name)
)
if __name__ == "__main__":
converter = ConllConverter(
path="conll2012_ontonotesv5",
name="english_v4",
cache_dir="cache",
)
converter.convert()
```
## Expected results
I want to use the dataset to perform an NER task, changing the label list into a {Entity Type: list of spans} format.
## Actual results
<details>
<summary>Traceback</summary>
```python
Traceback (most recent call last): | 0/81 [00:00<?, ?ba/s]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 532, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 499, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2751, in _map_single
writer.write_batch(batch)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 503, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarrow/array.pxi", line 230, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 198, in __arrow_array__
out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1793, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1675, in wrapper
return func(array, *args, **kwargs)
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/table.py", line 1844, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 110, in <module>
converter.convert()
File "/home2/jiangwangyi/workspace/work/Entity/dataconverter.py", line 91, in convert
dataset = self.dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 770, in map
{
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp>
k: dataset.map(
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2459, in map
transformed_shards[index] = async_result.get()
File "/home2/jiangwangyi/miniconda3/lib/python3.9/site-packages/multiprocess/pool.py", line 771, in get
raise self._value
TypeError: Couldn't cast array of type
struct<CARDINAL: list<item: list<item: string>>, DATE: list<item: list<item: string>>, EVENT: list<item: list<item: string>>, FAC: list<item: list<item: string>>, GPE: list<item: list<item: string>>, LANGUAGE: list<item: list<item: string>>, LAW: list<item: list<item: string>>, LOC: list<item: list<item: string>>, MONEY: list<item: list<item: string>>, NORP: list<item: list<item: string>>, ORDINAL: list<item: list<item: string>>, ORG: list<item: list<item: string>>, PERCENT: list<item: list<item: string>>, PERSON: list<item: list<item: string>>, QUANTITY: list<item: list<item: string>>, TIME: list<item: list<item: string>>, WORK_OF_ART: list<item: list<item: string>>>
to
{'CARDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'DATE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'EVENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'FAC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'GPE': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LAW': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'LOC': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'MONEY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'NORP': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORDINAL': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'ORG': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERCENT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PERSON': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'PRODUCT': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'QUANTITY': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'TIME': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None), 'WORK_OF_ART': Sequence(feature=Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), length=-1, id=None)}
```
</details>
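Note that the source struct in the error contains `LANGUAGE` but not `PRODUCT`, while the target schema contains `PRODUCT` but not `LANGUAGE`, which suggests different batches produce structs with different key sets. A hedged sketch of a possible workaround (not a confirmed resolution) is to emit the same keys in every example:
```python
# All entity types the schema should contain, padded into every example so
# that Arrow infers one consistent struct type across batches.
ALL_LABELS = [
    "CARDINAL", "DATE", "EVENT", "FAC", "GPE", "LANGUAGE", "LAW", "LOC",
    "MONEY", "NORP", "ORDINAL", "ORG", "PERCENT", "PERSON", "PRODUCT",
    "QUANTITY", "TIME", "WORK_OF_ART",
]

def normalize_chunk(entity_chunk):
    # The reporter verified that a dummy value such as [[""]] behaves
    # normally for entity types without any spans.
    return {label: entity_chunk.get(label, [[""]]) for label in ALL_LABELS}
```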
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: Ubuntu 18.04
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4405/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4404/comments | https://api.github.com/repos/huggingface/datasets/issues/4404/events | https://github.com/huggingface/datasets/issues/4404 | 1,248,572,899 | I_kwDODunzps5Ka7Xj | 4,404 | Dataset should have a `.name` field | {
"login": "f4hy",
"id": 36440,
"node_id": "MDQ6VXNlcjM2NDQw",
"avatar_url": "https://avatars.githubusercontent.com/u/36440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/f4hy",
"html_url": "https://github.com/f4hy",
"followers_url": "https://api.github.com/users/f4hy/followers",
"following_url": "https://api.github.com/users/f4hy/following{/other_user}",
"gists_url": "https://api.github.com/users/f4hy/gists{/gist_id}",
"starred_url": "https://api.github.com/users/f4hy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/f4hy/subscriptions",
"organizations_url": "https://api.github.com/users/f4hy/orgs",
"repos_url": "https://api.github.com/users/f4hy/repos",
"events_url": "https://api.github.com/users/f4hy/events{/privacy}",
"received_events_url": "https://api.github.com/users/f4hy/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Hi! You can already use `dset.builder_name` and `dset.config_name` for that purpose. And when it comes to versioning, it's better to use `dset._fingerprint` than the `version` attribute as the former represents a deterministic hash that encodes all the mutable ops executed on a dataset, and the latter stays the same unless it's manually updated after each op."
] | 1,653,504,968,000 | 1,653,570,601,000 | null | NONE | null | **Is your feature request related to a problem? Please describe.**
If building pipelines that can evaluate on more than one dataset, it would be nice to be able to log messages like `Evaluating on {dataset.name}` or `results for {dataset.name} are: {results}`.
Without some way of concisely identifying a dataset from the dataset object, tools which might run on more than one dataset must be passed the dataset object _and_ the name/id of the dataset being used.
**Describe the solution you'd like**
The `DatasetInfo` class should have a `name` field holding the name of the dataset. Then, if a given dataset evolves over time, its `version` can be updated while the different versions remain versions of the same dataset, sharing a unique `name`. The name could then be accessed via `dataset.name`.
**Describe alternatives you've considered**
For my own purposes I am considering making `NamedDataset[Dataset]` where the subclass just has a .name field.
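A minimal sketch of that wrapper idea (hypothetical; not part of `datasets`):
```python
from dataclasses import dataclass

from datasets import Dataset

@dataclass
class NamedDataset:
    """Pairs a dataset with a stable, human-readable name for logging."""
    name: str
    dataset: Dataset

# e.g. print(f"Evaluating on {named.name}") inside a multi-dataset pipeline
```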
**Additional context**
My guess is that most use cases do not work with more than one dataset in a given pipeline, so a name is not strictly needed. This has surprised me, though, as one of the advantages of a standard dataset interface is being able to build pipelines that take a dataset as input, separating the responsibility for dataset loading from the train or eval pipeline.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4404/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4404/timeline | null | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4403/comments | https://api.github.com/repos/huggingface/datasets/issues/4403/events | https://github.com/huggingface/datasets/pull/4403 | 1,248,390,134 | PR_kwDODunzps44dcpl | 4,403 | Uncomment logging deactivation for ArrowBasedBuilder | {
"login": "thomasw21",
"id": 24695242,
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomasw21",
"html_url": "https://github.com/thomasw21",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,497,175,000 | 1,653,986,016,000 | 1,653,985,502,000 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4403/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4403/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4403",
"html_url": "https://github.com/huggingface/datasets/pull/4403",
"diff_url": "https://github.com/huggingface/datasets/pull/4403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4403.patch",
"merged_at": 1653985502000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4402/comments | https://api.github.com/repos/huggingface/datasets/issues/4402/events | https://github.com/huggingface/datasets/pull/4402 | 1,248,078,067 | PR_kwDODunzps44cdR5 | 4,402 | Skip identical files in `push_to_hub` instead of overwriting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 1,653,484,371,000 | 1,653,491,796,000 | 1,653,491,283,000 | CONTRIBUTOR | null | Skip identical files instead of overwriting them to save bandwidth and circumvent (user-side/server-side) errors, which can arise when working with large datasets due to long-lasting HTTP connections, by repeating calls to `push_to_hub` to resume an upload.
To be able to check if an upload can be resumed, this PR modifies the shard naming scheme from:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].parquet
```
to:
```
data/{split}-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9]-<SHARD_FINGERPRINT>.parquet
```
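In practice (a usage sketch with a placeholder repo id), resuming amounts to calling the method again:
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
ds.push_to_hub("username/imdb-copy")  # suppose this upload is interrupted
# Re-running the same call resumes: shards whose fingerprinted filenames
# already exist on the Hub are skipped instead of re-uploaded.
ds.push_to_hub("username/imdb-copy")
```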
cc @LysandreJik | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4402/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4402/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4402",
"html_url": "https://github.com/huggingface/datasets/pull/4402",
"diff_url": "https://github.com/huggingface/datasets/pull/4402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4402.patch",
"merged_at": 1653491283000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/4401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4401/comments | https://api.github.com/repos/huggingface/datasets/issues/4401/events | https://github.com/huggingface/datasets/issues/4401 | 1,247,695,921 | I_kwDODunzps5KXlQx | 4,401 | "NonMatchingChecksumError" when importing 'spider' dataset | {
"login": "OmarAlaaeldein",
"id": 81417777,
"node_id": "MDQ6VXNlcjgxNDE3Nzc3",
"avatar_url": "https://avatars.githubusercontent.com/u/81417777?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OmarAlaaeldein",
"html_url": "https://github.com/OmarAlaaeldein",
"followers_url": "https://api.github.com/users/OmarAlaaeldein/followers",
"following_url": "https://api.github.com/users/OmarAlaaeldein/following{/other_user}",
"gists_url": "https://api.github.com/users/OmarAlaaeldein/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OmarAlaaeldein/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OmarAlaaeldein/subscriptions",
"organizations_url": "https://api.github.com/users/OmarAlaaeldein/orgs",
"repos_url": "https://api.github.com/users/OmarAlaaeldein/repos",
"events_url": "https://api.github.com/users/OmarAlaaeldein/events{/privacy}",
"received_events_url": "https://api.github.com/users/OmarAlaaeldein/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 4069435429,
"node_id": "LA_kwDODunzps7yjqgl",
"url": "https://api.github.com/repos/huggingface/datasets/labels/hosted-on-google-drive",
"name": "hosted-on-google-drive",
"color": "8B51EF",
"default": false,
"description": ""
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null | [
"Thanks for reporting, @OmarAlaaeldein.\r\n\r\nDatasets hosted at Google Drive give problems quite often due to a change in their service:\r\n- #3786 \r\n\r\nRelated to:\r\n- #3906\r\n\r\nI'm having a look.",
"We have made a Pull Request to replace the Google Drive URL. This fix will be accessible in our next `datasets` library release.\r\n\r\nIn the meantime, once the PR merged into master, you can get this fix by installing our library from the GitHub master branch:\r\n```shell\r\npip install git+https://github.com/huggingface/datasets#egg=datasets\r\n```\r\nThen, if you had previously tried to load the data and got the checksum error, you should force the redownload of the data (before the fix, you just downloaded and cached the virus scan warning page, instead of the data file):\r\n```shell\r\nload_dataset(\"...\", download_mode=\"force_redownload\")\r\n```"
] | 1,653,464,707,000 | 1,653,547,212,000 | 1,653,547,212,000 | NONE | null | ## Describe the bug
When importing 'spider' dataset [https://huggingface.co/datasets/spider] an error occurs
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('spider')
```
## Expected results
Dataset object
## Actual results
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1_AckYkinAnhqmRQtGsQgUKAnTHxxX5J0']
## Environment info
- `datasets` version: 2.2.2
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.7.11
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4401/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4400/comments | https://api.github.com/repos/huggingface/datasets/issues/4400/events | https://github.com/huggingface/datasets/issues/4400 | 1,247,404,237 | I_kwDODunzps5KWeDN | 4,400 | load dataset wikitext-2-raw-v1 failed. Could not reach wikitext-2-raw-v1.py. | {
"login": "cailun01",
"id": 20658907,
"node_id": "MDQ6VXNlcjIwNjU4OTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/20658907?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cailun01",
"html_url": "https://github.com/cailun01",
"followers_url": "https://api.github.com/users/cailun01/followers",
"following_url": "https://api.github.com/users/cailun01/following{/other_user}",
"gists_url": "https://api.github.com/users/cailun01/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cailun01/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cailun01/subscriptions",
"organizations_url": "https://api.github.com/users/cailun01/orgs",
"repos_url": "https://api.github.com/users/cailun01/repos",
"events_url": "https://api.github.com/users/cailun01/events{/privacy}",
"received_events_url": "https://api.github.com/users/cailun01/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [] | 1,653,448,244,000 | 1,653,449,196,000 | 1,653,449,196,000 | NONE | null | ## Describe the bug
Could not reach wikitext-2-raw-v1.py
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikitext-2-raw-v1")
```
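As a side note that may explain the URL in the traceback below: `wikitext-2-raw-v1` is a configuration of the `wikitext` dataset rather than a dataset id of its own, so (network issues aside) the call would normally be written as:
```python
from datasets import load_dataset

# "wikitext-2-raw-v1" is a config name; the dataset id is "wikitext".
dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
```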
## Expected results
Download `wikitext-2-raw-v1` dataset successfully.
## Actual results
```
File "load_datasets.py", line 13, in <module>
load_dataset("wikitext-2-raw-v1")
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1715, in load_dataset
**config_kwargs,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1536, in load_dataset_builder
data_files=data_files,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1282, in dataset_module_factory
raise e1 from None
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 1224, in dataset_module_factory
dynamic_modules_path=dynamic_modules_path,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 559, in get_module
local_path = self.download_loading_script(revision)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/load.py", line 539, in download_loading_script
return cached_path(file_path, download_config=download_config)
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 246, in cached_path
download_desc=download_config.download_desc,
File "/root/miniconda3/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 582, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})")
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.2.2/datasets/wikitext-2-raw-v1/wikitext-2-raw-v1.py (ReadTimeout(ReadTimeoutError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Read timed out. (read timeout=100)",),))
```
I tried to download wikitext-2-raw-v1.py by chrome and got:
![image](https://user-images.githubusercontent.com/20658907/170171595-0ca9f1da-c05a-4b57-861e-9530bfa3bdb9.png)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.2.2
- Platform: CentOS 7
- Python version: 3.6
- PyArrow version: 3.0.0
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4400/timeline | null | completed | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4399/comments | https://api.github.com/repos/huggingface/datasets/issues/4399/events | https://github.com/huggingface/datasets/issues/4399 | 1,246,948,299 | I_kwDODunzps5KUuvL | 4,399 | LocalDatasetModuleFactoryWithoutScript extracts invalid builder name | {
"login": "apohllo",
"id": 40543,
"node_id": "MDQ6VXNlcjQwNTQz",
"avatar_url": "https://avatars.githubusercontent.com/u/40543?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/apohllo",
"html_url": "https://github.com/apohllo",
"followers_url": "https://api.github.com/users/apohllo/followers",
"following_url": "https://api.github.com/users/apohllo/following{/other_user}",
"gists_url": "https://api.github.com/users/apohllo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/apohllo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/apohllo/subscriptions",
"organizations_url": "https://api.github.com/users/apohllo/orgs",
"repos_url": "https://api.github.com/users/apohllo/repos",
"events_url": "https://api.github.com/users/apohllo/events{/privacy}",
"received_events_url": "https://api.github.com/users/apohllo/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Ok, so\r\n```\r\nos.path.basename(\"/home/user/\")\r\n```\r\ngives `''` while \r\n```\r\nos.path.basename(\"/home/user\")\r\n```\r\ngives `user`. \r\nThe code should check if the last char is a slash.\r\n",
"The fix is:\r\n```\r\n\"name\": os.path.basename(self.path[:-1] if self.path[-1] == \"/\" else self.path)\r\n```"
] | 1,653,415,381,000 | 1,653,416,036,000 | null | NONE | null | ## Describe the bug
Trying to load a local dataset raises an error indicating that the config builder has to have a name.
No error should be reported, since the call is completely valid.
## Steps to reproduce the bug
```python
load_dataset("./data/some-dataset/", name="some-name")
```
## Expected results
The dataset should be loaded.
## Actual results
```
Traceback (most recent call last):
File "train_lquad.py", line 19, in <module>
load(tokenize_target_function, tokenize_target_function, {}, tokenizer)
File "train_lquad.py", line 14, in load
dataset = load_dataset("./data/lquad/", name="lquad")
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1708, in load_dataset
builder_instance = load_dataset_builder(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/load.py", line 1560, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 269, in __init__
self.config, self.config_id = self._create_builder_config(
File "/net/pr2/scratch/people/plgapohl/python-3.8.6/lib/python3.8/site-packages/datasets/builder.py", line 403, in _create_builder_config
raise ValueError(f"BuilderConfig must have a name, got {builder_config.name}")
ValueError: BuilderConfig must have a name, got
```
## Environment info
- `datasets` version: 2.2.2
- Platform: Linux-4.18.0-348.20.1.el8_5.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyArrow version: 8.0.0
- Pandas version: 1.4.2
The error is probably in line 795 in load.py:
```
builder_kwargs = {
"hash": hash,
"data_files": data_files,
"name": os.path.basename(self.path),
"base_path": self.path,
**builder_kwargs,
}
```
`os.path.basename` returns an empty string for a path that ends with a slash, rather than the name of the directory.
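A quick demonstration of the behaviour, plus one robust alternative (a sketch; the actual fix applied upstream may differ):
```python
import os

os.path.basename("/home/user")                     # -> "user"
os.path.basename("/home/user/")                    # -> ""  (trailing slash)

# Normalizing the path first makes the extraction robust to trailing slashes:
os.path.basename(os.path.normpath("/home/user/"))  # -> "user"
```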
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/4399/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/4399/timeline | null | null | null | null | false |