| Column | Type | Length / value range |
|---|---|---|
| url | string | length 58–61 |
| repository_url | string | 1 value |
| labels_url | string | length 72–75 |
| comments_url | string | length 67–70 |
| events_url | string | length 65–68 |
| html_url | string | length 48–51 |
| id | int64 | 600M – 1.69B |
| node_id | string | length 18–24 |
| number | int64 | 2 – 5.8k |
| title | string | length 1–290 |
| user | dict | |
| labels | list | length 0–4 |
| state | string | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | length 0–4 |
| comments | sequence | length 0–30 |
| created_at | int64 | 1,587B – 1,683B |
| updated_at | int64 | 1,588B – 1,683B |
| closed_at | int64 | 1,588B – 1,683B (nullable) |
| author_association | string | 3 values |
| draft | float64 | |
| pull_request | dict | |
| body | string | length 0 – 228k (nullable) |
| reactions | dict | |
| timeline_url | string | length 67–70 |
| state_reason | string | 3 values |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/5799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5799/comments
https://api.github.com/repos/huggingface/datasets/issues/5799/events
https://github.com/huggingface/datasets/issues/5799
1,686,334,572
I_kwDODunzps5kg2xs
5,799
Files downloaded to cache do not respect umask
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[]
1,682,582,765,000
1,682,587,817,000
1,682,587,817,000
MEMBER
null
null
As reported by @stas00, files downloaded to the cache do not respect umask: ```bash $ ls -l /path/to/cache/datasets/downloads/ -rw------- 1 username username 150M Apr 25 16:41 5e646c1d600f065adaeb134e536f6f2f296a6d804bd1f0e1fdcd20ee28c185c6 ``` Related to: - #2065
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5799/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5799/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5798/comments
https://api.github.com/repos/huggingface/datasets/issues/5798/events
https://github.com/huggingface/datasets/issues/5798
1,685,904,526
I_kwDODunzps5kfNyO
5,798
Support parallelized downloading and processing in load_dataset with Spark
{ "login": "es94129", "id": 12763339, "node_id": "MDQ6VXNlcjEyNzYzMzM5", "avatar_url": "https://avatars.githubusercontent.com/u/12763339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/es94129", "html_url": "https://github.com/es94129", "followers_url": "https://api.github.com/users/es94129/followers", "following_url": "https://api.github.com/users/es94129/following{/other_user}", "gists_url": "https://api.github.com/users/es94129/gists{/gist_id}", "starred_url": "https://api.github.com/users/es94129/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/es94129/subscriptions", "organizations_url": "https://api.github.com/users/es94129/orgs", "repos_url": "https://api.github.com/users/es94129/repos", "events_url": "https://api.github.com/users/es94129/events{/privacy}", "received_events_url": "https://api.github.com/users/es94129/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,682,554,571,000
1,682,554,571,000
null
NONE
null
null
### Feature request When calling `load_dataset` for datasets that have multiple files, support using Spark to distribute the downloading and processing job to worker nodes when `cache_dir` is a cloud file system shared among nodes. ```python load_dataset(..., use_spark=True) ``` ### Motivation Further speed up `dl_manager.download` and `_prepare_split` by distributing the workloads to worker nodes. ### Your contribution I can submit a PR to support this.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5798/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5797/comments
https://api.github.com/repos/huggingface/datasets/issues/5797/events
https://github.com/huggingface/datasets/issues/5797
1,685,501,199
I_kwDODunzps5kdrUP
5,797
load_dataset is case sensitive?
{ "login": "haonan-li", "id": 34729065, "node_id": "MDQ6VXNlcjM0NzI5MDY1", "avatar_url": "https://avatars.githubusercontent.com/u/34729065?v=4", "gravatar_id": "", "url": "https://api.github.com/users/haonan-li", "html_url": "https://github.com/haonan-li", "followers_url": "https://api.github.com/users/haonan-li/followers", "following_url": "https://api.github.com/users/haonan-li/following{/other_user}", "gists_url": "https://api.github.com/users/haonan-li/gists{/gist_id}", "starred_url": "https://api.github.com/users/haonan-li/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/haonan-li/subscriptions", "organizations_url": "https://api.github.com/users/haonan-li/orgs", "repos_url": "https://api.github.com/users/haonan-li/repos", "events_url": "https://api.github.com/users/haonan-li/events{/privacy}", "received_events_url": "https://api.github.com/users/haonan-li/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi @haonan-li , thank you for the report! It seems to be a bug on the [`huggingface_hub`](https://github.com/huggingface/huggingface_hub) site, there is even no such dataset as `mbzuai/bactrian-x` on the Hub. I opened and [issue](https://github.com/huggingface/huggingface_hub/issues/1453) there.", "I think `load_dataset(\"mbzuai/bactrian-x\")` shouldn't be loaded at all and raise an error but because of [this fallback](https://github.com/huggingface/datasets/blob/main/src/datasets/load.py#L1194) to packaged loaders when no other options are applicable, it loads the dataset with standard `json` loader instead of the custom loading script." ]
1,682,533,144,000
1,682,596,618,000
null
NONE
null
null
### Describe the bug load_dataset() function is case sensitive? ### Steps to reproduce the bug The following two code, get totally different behavior. 1. load_dataset('mbzuai/bactrian-x','en') 2. load_dataset('MBZUAI/Bactrian-X','en') ### Expected behavior Compare 1 and 2. 1 will download all 52 subsets, shell output: ```Downloading and preparing dataset json/MBZUAI--bactrian-X to xxx``` 2 will only download single subset, shell output ```Downloading and preparing dataset bactrian-x/en to xxx``` ### Environment info Python 3.10.11 datasets Version: 2.11.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5797/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5794/comments
https://api.github.com/repos/huggingface/datasets/issues/5794/events
https://github.com/huggingface/datasets/issues/5794
1,685,196,061
I_kwDODunzps5kcg0d
5,794
CI ZeroDivisionError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
[]
1,682,520,923,000
1,682,520,923,000
null
MEMBER
null
null
Sometimes when running our CI on Windows, we get a ZeroDivisionError: ``` FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore - ZeroDivisionError: float division by zero ``` See for example: - https://github.com/huggingface/datasets/actions/runs/4809358266/jobs/8560513110 - https://github.com/huggingface/datasets/actions/runs/4798359836/jobs/8536573688 ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ split = 'test', start_time = 1682516718.8236516, num_samples = 2, num_steps = 1 def speed_metrics(split, start_time, num_samples=None, num_steps=None): """ Measure and return speed performance metrics. This function requires a time snapshot `start_time` before the operation to be measured starts and this function should be run immediately after the operation to be measured has completed. Args: - split: name to prefix metric (like train, eval, test...) - start_time: operation start time - num_samples: number of samples processed """ runtime = time.time() - start_time result = {f"{split}_runtime": round(runtime, 4)} if num_samples is not None: > samples_per_second = num_samples / runtime E ZeroDivisionError: float division by zero C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\site-packages\transformers\trainer_utils.py:354: ZeroDivisionError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5794/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5793/comments
https://api.github.com/repos/huggingface/datasets/issues/5793/events
https://github.com/huggingface/datasets/issues/5793
1,684,777,320
I_kwDODunzps5ka6lo
5,793
IterableDataset.with_format("torch") not working
{ "login": "jiangwy99", "id": 39762734, "node_id": "MDQ6VXNlcjM5NzYyNzM0", "avatar_url": "https://avatars.githubusercontent.com/u/39762734?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jiangwy99", "html_url": "https://github.com/jiangwy99", "followers_url": "https://api.github.com/users/jiangwy99/followers", "following_url": "https://api.github.com/users/jiangwy99/following{/other_user}", "gists_url": "https://api.github.com/users/jiangwy99/gists{/gist_id}", "starred_url": "https://api.github.com/users/jiangwy99/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jiangwy99/subscriptions", "organizations_url": "https://api.github.com/users/jiangwy99/orgs", "repos_url": "https://api.github.com/users/jiangwy99/repos", "events_url": "https://api.github.com/users/jiangwy99/events{/privacy}", "received_events_url": "https://api.github.com/users/jiangwy99/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "Hi ! Thanks for reporting, I'm working on it ;)" ]
1,682,506,223,000
1,682,510,927,000
null
NONE
null
null
### Describe the bug After calling the with_format("torch") method on an IterableDataset instance, the data format is unchanged. ### Steps to reproduce the bug ```python from datasets import IterableDataset def gen(): for i in range(4): yield {"a": [i] * 4} dataset = IterableDataset.from_generator(gen).with_format("torch") next(iter(dataset)) ``` ### Expected behavior `{"a": torch.tensor([0, 0, 0, 0])}` is expected, but `{"a": [0, 0, 0, 0]}` is observed. ### Environment info ```bash platform==ubuntu 22.04.01 python==3.10.9 datasets==2.11.0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5793/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5791/comments
https://api.github.com/repos/huggingface/datasets/issues/5791/events
https://github.com/huggingface/datasets/issues/5791
1,683,473,943
I_kwDODunzps5kV8YX
5,791
TIFF/TIF support
{ "login": "sebasmos", "id": 31293221, "node_id": "MDQ6VXNlcjMxMjkzMjIx", "avatar_url": "https://avatars.githubusercontent.com/u/31293221?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sebasmos", "html_url": "https://github.com/sebasmos", "followers_url": "https://api.github.com/users/sebasmos/followers", "following_url": "https://api.github.com/users/sebasmos/following{/other_user}", "gists_url": "https://api.github.com/users/sebasmos/gists{/gist_id}", "starred_url": "https://api.github.com/users/sebasmos/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sebasmos/subscriptions", "organizations_url": "https://api.github.com/users/sebasmos/orgs", "repos_url": "https://api.github.com/users/sebasmos/repos", "events_url": "https://api.github.com/users/sebasmos/events{/privacy}", "received_events_url": "https://api.github.com/users/sebasmos/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,682,439,258,000
1,682,439,258,000
null
NONE
null
null
### Feature request I currently have a dataset (with tiff and json files) where I have to do this: `wget path_to_data/images.zip && unzip images.zip` `wget path_to_data/annotations.zip && unzip annotations.zip` Would it make sense a contribution that supports these type of files? ### Motivation instead of using `load_dataset` have to use wget as these files are not supported for annotations with JSON and images with TIFF files. Additionally to this, the PIL formatting from datasets does not read correctly the image channels with TIFF format, besides multichannel adaptation might be necessary as well (as my data e.g has more than 3 channels) ### Your contribution 1. Support TIFF images over multi channel format 2. Support JSON annotations
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5791/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5789/comments
https://api.github.com/repos/huggingface/datasets/issues/5789/events
https://github.com/huggingface/datasets/issues/5789
1,682,611,179
I_kwDODunzps5kSpvr
5,789
Support streaming datasets that use jsonlines
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[]
1,682,408,402,000
1,682,408,403,000
null
MEMBER
null
null
Extend support for streaming datasets that use `jsonlines.open`. Currently, if `jsonlines` is installed, `datasets` raises a `FileNotFoundError`: ``` FileNotFoundError: [Errno 2] No such file or directory: 'https://...' ``` See: - https://huggingface.co/datasets/masakhane/afriqa/discussions/1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5789/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5789/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5786/comments
https://api.github.com/repos/huggingface/datasets/issues/5786/events
https://github.com/huggingface/datasets/issues/5786
1,680,957,070
I_kwDODunzps5kMV6O
5,786
Multiprocessing in a `filter` or `map` function with a Pytorch model
{ "login": "HugoLaurencon", "id": 44556846, "node_id": "MDQ6VXNlcjQ0NTU2ODQ2", "avatar_url": "https://avatars.githubusercontent.com/u/44556846?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HugoLaurencon", "html_url": "https://github.com/HugoLaurencon", "followers_url": "https://api.github.com/users/HugoLaurencon/followers", "following_url": "https://api.github.com/users/HugoLaurencon/following{/other_user}", "gists_url": "https://api.github.com/users/HugoLaurencon/gists{/gist_id}", "starred_url": "https://api.github.com/users/HugoLaurencon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HugoLaurencon/subscriptions", "organizations_url": "https://api.github.com/users/HugoLaurencon/orgs", "repos_url": "https://api.github.com/users/HugoLaurencon/repos", "events_url": "https://api.github.com/users/HugoLaurencon/events{/privacy}", "received_events_url": "https://api.github.com/users/HugoLaurencon/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimport multiprocess.context as ctx\r\nctx._force_start_method('spawn')\r\n```\r\n\r\nAlso make sure to run your main code in `if __name__ == \"__main__\":` to avoid issues with python multiprocesing", "Thanks!" ]
1,682,332,687,000
1,682,333,038,000
1,682,333,038,000
MEMBER
null
null
### Describe the bug I am trying to use a Pytorch model loaded on CPUs with multiple processes with a `.map` or a `.filter` method. Usually, when dealing with models that are non-pickable, creating a class such that the `map` function is the method `__call__`, and adding `reduce` helps to solve the problem. However, here, the command hangs without throwing an error. ### Steps to reproduce the bug ``` from datasets import Dataset import torch from torch import nn from torchvision import models class FilterFunction: #__slots__ = ("path_model", "model") # Doesn't change anything uncommented def __init__(self, path_model): self.path_model = path_model model = models.resnet50() model.fc = nn.Sequential( nn.Linear(2048, 512), nn.ReLU(), nn.Dropout(0.2), nn.Linear(512, 10), nn.LogSoftmax(dim=1) ) model.load_state_dict(torch.load(path_model, map_location=torch.device("cpu"))) model.eval() self.model = model def __call__(self, batch): return [True] * len(batch["id"]) # Comment this to have an error def __reduce__(self): return (self.__class__, (self.path_model,)) dataset = Dataset.from_dict({"id": [0, 1, 2, 4]}) # Download (100 MB) at https://github.com/emiliantolo/pytorch_nsfw_model/raw/master/ResNet50_nsfw_model.pth path_model = "/fsx/hugo/nsfw_image/ResNet50_nsfw_model.pth" filter_function = FilterFunction(path_model=path_model) # Works filtered_dataset = dataset.filter(filter_function, num_proc=1, batched=True, batch_size=2) # Doesn't work filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2) ``` ### Expected behavior The command `filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)` should work and not hang. ### Environment info Datasets: 2.11.0 Pyarrow: 11.0.0 Ubuntu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5786/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5785/comments
https://api.github.com/repos/huggingface/datasets/issues/5785/events
https://github.com/huggingface/datasets/issues/5785
1,680,956,964
I_kwDODunzps5kMV4k
5,785
Unsupported data files raise TypeError: 'NoneType' object is not iterable
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[]
1,682,332,683,000
1,682,600,250,000
1,682,600,250,000
MEMBER
null
null
Currently, we raise a TypeError for unsupported data files: ``` TypeError: 'NoneType' object is not iterable ``` See: - https://github.com/huggingface/datasets-server/issues/1073 We should give a more informative error message.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5785/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5783/comments
https://api.github.com/repos/huggingface/datasets/issues/5783/events
https://github.com/huggingface/datasets/issues/5783
1,679,664,393
I_kwDODunzps5kHaUJ
5,783
Offset overflow while doing regex on a text column
{ "login": "nishanthcgit", "id": 5066268, "node_id": "MDQ6VXNlcjUwNjYyNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/5066268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nishanthcgit", "html_url": "https://github.com/nishanthcgit", "followers_url": "https://api.github.com/users/nishanthcgit/followers", "following_url": "https://api.github.com/users/nishanthcgit/following{/other_user}", "gists_url": "https://api.github.com/users/nishanthcgit/gists{/gist_id}", "starred_url": "https://api.github.com/users/nishanthcgit/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nishanthcgit/subscriptions", "organizations_url": "https://api.github.com/users/nishanthcgit/orgs", "repos_url": "https://api.github.com/users/nishanthcgit/repos", "events_url": "https://api.github.com/users/nishanthcgit/events{/privacy}", "received_events_url": "https://api.github.com/users/nishanthcgit/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[]
1,682,190,723,000
1,682,190,741,000
null
NONE
null
null
### Describe the bug `ArrowInvalid: offset overflow while concatenating arrays` Same error as [here](https://github.com/huggingface/datasets/issues/615) ### Steps to reproduce the bug Steps to reproduce: (dataset is a few GB big so try in colab maybe) ``` import datasets import re ds = datasets.load_dataset('nishanthc/dnd_map_dataset_v0.1', split = 'train') def get_text_caption(example): regex_pattern = r'\s\d+x\d+|,\sLQ|,\sgrid|\.\w+$' example['text_caption'] = re.sub(regex_pattern, '', example['picture_text']) return example ds = ds.map(get_text_caption) ``` I am trying to apply a regex to remove certain patterns from a text column. Not sure why this error is showing up. ### Expected behavior Dataset should have a new column with processed text ### Environment info Datasets version - 2.11.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5783/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5782
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5782/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5782/comments
https://api.github.com/repos/huggingface/datasets/issues/5782/events
https://github.com/huggingface/datasets/issues/5782
1,679,622,367
I_kwDODunzps5kHQDf
5,782
Support for various audio-loading backends instead of always relying on SoundFile
{ "login": "BoringDonut", "id": 129098876, "node_id": "U_kgDOB7HkfA", "avatar_url": "https://avatars.githubusercontent.com/u/129098876?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BoringDonut", "html_url": "https://github.com/BoringDonut", "followers_url": "https://api.github.com/users/BoringDonut/followers", "following_url": "https://api.github.com/users/BoringDonut/following{/other_user}", "gists_url": "https://api.github.com/users/BoringDonut/gists{/gist_id}", "starred_url": "https://api.github.com/users/BoringDonut/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BoringDonut/subscriptions", "organizations_url": "https://api.github.com/users/BoringDonut/orgs", "repos_url": "https://api.github.com/users/BoringDonut/repos", "events_url": "https://api.github.com/users/BoringDonut/events{/privacy}", "received_events_url": "https://api.github.com/users/BoringDonut/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,682,183,365,000
1,682,183,365,000
null
NONE
null
null
### Feature request Introduce an option to select from a variety of audio-loading backends rather than solely relying on the SoundFile library. For instance, if the ffmpeg library is installed, it can serve as a fallback loading option. ### Motivation - The SoundFile library, used in [features/audio.py](https://github.com/huggingface/datasets/blob/649d5a3315f9e7666713b6affe318ee00c7163a0/src/datasets/features/audio.py#L185), supports only a [limited number of audio formats](https://pysoundfile.readthedocs.io/en/latest/index.html?highlight=supported#soundfile.available_formats). - However, current methods for creating audio datasets permit the inclusion of audio files in formats not supported by SoundFile. - As a result, developers may potentially create a dataset they cannot read back. In my most recent project, I dealt with phone call recordings in `.amr` or `.gsm` formats and was genuinely surprised when I couldn't read the dataset I had just packaged a minute prior. Nonetheless, I can still accurately read these files using the librosa library, which employs the audioread library that internally leverages ffmpeg to read such files. Example: ```python audio_dataset_amr = Dataset.from_dict({"audio": ["audio_samples/audio.amr"]}).cast_column("audio", Audio()) audio_dataset_amr.save_to_disk("audio_dataset_amr") audio_dataset_amr = Dataset.load_from_disk("audio_dataset_amr") print(audio_dataset_amr[0]) ``` Results in: ``` Traceback (most recent call last): ... raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f316323e4d0>: Format not recognised. ``` While I acknowledge that support for these rare file types may not be a priority, I believe it's quite unfortunate that it's possible to create an unreadable dataset in this manner. ### Your contribution I've created a [simple demo repository](https://github.com/BoringDonut/hf-datasets-ffmpeg-audio) that highlights the mentioned issue. It demonstrates how to create an .amr dataset that results in an error when attempting to read it just a few lines later. Additionally, I've made a [fork with a rudimentary solution](https://github.com/BoringDonut/datasets/blob/fea73a8fbbc8876467c7e6422c9360546c6372d8/src/datasets/features/audio.py#L189) that utilizes ffmpeg to load files not supported by SoundFile. Here you may see github actions fails to read `.amr` dataset using the version of the current dataset, but will work with the patched version: - https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063785 - https://github.com/BoringDonut/hf-datasets-ffmpeg-audio/actions/runs/4773780420/jobs/8487063829 As evident from the GitHub action above, this solution resolves the previously mentioned problem. I'd be happy to create a proper pull request, provide runtime benchmarks and tests if you could offer some guidance on the following: - Where should I incorporate the ffmpeg (or other backends) code? For example, should I create a new file or simply add a function within the Audio class? - Is it feasible to pass the audio-loading function as an argument within the current architecture? This would be useful if I know in advance that I'll be reading files not supported by SoundFile. A few more notes: - In theory, it's possible to load audio using librosa/audioread since librosa is already expected to be installed. 
However, librosa [will soon discontinue audioread support](https://github.com/librosa/librosa/blob/aacb4c134002903ae56bbd4b4a330519a5abacc0/librosa/core/audio.py#L227). Moreover, using audioread on its own seems inconvenient because it requires a file [path as input](https://github.com/beetbox/audioread/blob/ff9535df934c48038af7be9617fdebb12078cc07/audioread/__init__.py#L108) and cannot work with bytes already loaded into memory or an open file descriptor (as mentioned in [librosa docs](https://librosa.org/doc/main/generated/librosa.load.html#librosa.load), only SoundFile backend supports an open file descriptor as an input).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5782/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5782/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5781
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5781/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5781/comments
https://api.github.com/repos/huggingface/datasets/issues/5781/events
https://github.com/huggingface/datasets/issues/5781
1,679,580,460
I_kwDODunzps5kHF0s
5,781
Error using `load_datasets`
{ "login": "gjyoungjr", "id": 61463108, "node_id": "MDQ6VXNlcjYxNDYzMTA4", "avatar_url": "https://avatars.githubusercontent.com/u/61463108?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gjyoungjr", "html_url": "https://github.com/gjyoungjr", "followers_url": "https://api.github.com/users/gjyoungjr/followers", "following_url": "https://api.github.com/users/gjyoungjr/following{/other_user}", "gists_url": "https://api.github.com/users/gjyoungjr/gists{/gist_id}", "starred_url": "https://api.github.com/users/gjyoungjr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gjyoungjr/subscriptions", "organizations_url": "https://api.github.com/users/gjyoungjr/orgs", "repos_url": "https://api.github.com/users/gjyoungjr/repos", "events_url": "https://api.github.com/users/gjyoungjr/events{/privacy}", "received_events_url": "https://api.github.com/users/gjyoungjr/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "It looks like an issue with your installation of scipy, can you try reinstalling it ?" ]
1,682,176,244,000
1,682,511,042,000
null
NONE
null
null
### Describe the bug I tried to load a dataset using the `datasets` library in a conda jupyter notebook and got the below error. ``` ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib Referenced from: <65B094A2-59D7-31AC-A966-4DB9E11D2A15> /Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so Reason: tried: '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache) ``` ### Steps to reproduce the bug Run the `load_datasets` function ### Expected behavior I expected the dataset to be loaded into my notebook. ### Environment info name: review_sense channels: - apple - conda-forge dependencies: - python=3.8 - pip>=19.0 - jupyter - tensorflow-deps #- scikit-learn #- scipy - pandas - pandas-datareader - matplotlib - pillow - tqdm - requests - h5py - pyyaml - flask - boto3 - ipykernel - seaborn - pip: - tensorflow-macos==2.9 - tensorflow-metal==0.5.0 - bayesian-optimization - gym - kaggle - huggingface_hub - datasets - numpy - huggingface
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5781/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5780/comments
https://api.github.com/repos/huggingface/datasets/issues/5780/events
https://github.com/huggingface/datasets/issues/5780
1,679,367,149
I_kwDODunzps5kGRvt
5,780
TypeError: 'NoneType' object does not support item assignment
{ "login": "v-yunbin", "id": 38179632, "node_id": "MDQ6VXNlcjM4MTc5NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/v-yunbin", "html_url": "https://github.com/v-yunbin", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "repos_url": "https://api.github.com/users/v-yunbin/repos", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,682,144,563,000
1,682,239,758,000
1,682,239,758,000
NONE
null
null
command: ``` def load_datasets(formats, data_dir=datadir, data_files=datafile): dataset = load_dataset(formats, data_dir=datadir, data_files=datafile, split=split, streaming=True, **kwargs) return dataset raw_datasets = DatasetDict() raw_datasets["train"] = load_datasets("csv", args.datadir, "train.csv", split=train_split) raw_datasets["test"] = load_datasets("csv", args.datadir, "dev.csv", split=test_split) raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) ``` error: ``` main() File "peft_adalora_whisper_large_training.py", line 502, in main raw_datasets = raw_datasets.cast_column("audio", Audio(sampling_rate=16000)) File "/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/datasets/dataset_dict.py", line 2015, in cast_column info.features[column] = feature TypeError: 'NoneType' object does not support item assignment ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5780/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5778/comments
https://api.github.com/repos/huggingface/datasets/issues/5778/events
https://github.com/huggingface/datasets/issues/5778
1,678,125,951
I_kwDODunzps5kBit_
5,778
Schrödinger's dataset_dict
{ "login": "liujuncn", "id": 902005, "node_id": "MDQ6VXNlcjkwMjAwNQ==", "avatar_url": "https://avatars.githubusercontent.com/u/902005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/liujuncn", "html_url": "https://github.com/liujuncn", "followers_url": "https://api.github.com/users/liujuncn/followers", "following_url": "https://api.github.com/users/liujuncn/following{/other_user}", "gists_url": "https://api.github.com/users/liujuncn/gists{/gist_id}", "starred_url": "https://api.github.com/users/liujuncn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liujuncn/subscriptions", "organizations_url": "https://api.github.com/users/liujuncn/orgs", "repos_url": "https://api.github.com/users/liujuncn/repos", "events_url": "https://api.github.com/users/liujuncn/events{/privacy}", "received_events_url": "https://api.github.com/users/liujuncn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Hi ! Passing `data_files=\"path/test.json\"` is equivalent to `data_files={\"train\": [\"path/test.json\"]}`, that's why you end up with a train split. If you don't pass `data_files=`, then split names are inferred from the data files names" ]
1,682,066,292,000
1,682,088,914,000
null
NONE
null
null
### Describe the bug If you use load_dataset('json', data_files="path/test.json"), it will return DatasetDict({train:...}). And if you use load_dataset("path"), it will return DatasetDict({test:...}). Why can't the output behavior be unified? ### Steps to reproduce the bug as description above. ### Expected behavior consistent predictable output. ### Environment info '2.11.0'
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5778/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5777/comments
https://api.github.com/repos/huggingface/datasets/issues/5777/events
https://github.com/huggingface/datasets/issues/5777
1,677,655,969
I_kwDODunzps5j_v-h
5,777
datasets.load_dataset("code_search_net", "python") : NotADirectoryError: [Errno 20] Not a directory
{ "login": "jason-brian-anderson", "id": 34688597, "node_id": "MDQ6VXNlcjM0Njg4NTk3", "avatar_url": "https://avatars.githubusercontent.com/u/34688597?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jason-brian-anderson", "html_url": "https://github.com/jason-brian-anderson", "followers_url": "https://api.github.com/users/jason-brian-anderson/followers", "following_url": "https://api.github.com/users/jason-brian-anderson/following{/other_user}", "gists_url": "https://api.github.com/users/jason-brian-anderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/jason-brian-anderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jason-brian-anderson/subscriptions", "organizations_url": "https://api.github.com/users/jason-brian-anderson/orgs", "repos_url": "https://api.github.com/users/jason-brian-anderson/repos", "events_url": "https://api.github.com/users/jason-brian-anderson/events{/privacy}", "received_events_url": "https://api.github.com/users/jason-brian-anderson/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Note:\r\nI listed the datasets and grepped around to find what appears to be an alternative source for this:\r\n\r\nraw_datasets = load_dataset(\"espejelomar/code_search_net_python_10000_examples\", \"python\")", "Thanks for reporting, @jason-brian-anderson.\r\n\r\nYes, this is a known issue: the [CodeSearchNet](https://github.com/github/CodeSearchNet) repo has been archived (Apr 11, 2023) and their source data files are no longer accessible in their S3: e.g. https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip gives 403 Forbidden error. See:\r\n- https://huggingface.co/datasets/code_search_net/discussions/3\r\n\r\nWe have contacted one of the authors of the dataset to find a solution. I'll keep you informed.\r\n\r\nCC: @hamelsmu", "cc: @julianeagu" ]
1,682,042,887,000
1,682,077,222,000
null
NONE
null
null
### Describe the bug While checking out the [tokenizer tutorial](https://huggingface.co/course/chapter6/2?fw=pt), I noticed getting an error while initially downloading the python dataset used in the examples. The [Colab with the error is here](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/en/chapter6/section2.ipynb#scrollTo=hGb69Yo3eV8S) ``` from datasets import load_dataset import os os.environ["HF_DATASETS_CACHE"] = "/workspace" # This can take a few minutes to load, so grab a coffee or tea while you wait! raw_datasets = load_dataset("code_search_net", "python") ``` yields: ``` File /opt/conda/lib/python3.10/site-packages/datasets/download/streaming_download_manager.py:524, in xlistdir(path, use_auth_token) 522 main_hop, *rest_hops = _as_str(path).split("::") 523 if is_local_path(main_hop): --> 524 return os.listdir(path) 525 else: 526 # globbing inside a zip in a private repo requires authentication 527 if not rest_hops and (main_hop.startswith("http://") or main_hop.startswith("https://")): NotADirectoryError: [Errno 20] Not a directory: '/workspace/downloads/25ceeb4c25ab737d688bd56ea92bfbb1f199fe572470456cf2d675479f342ac7/python/final/jsonl/train' ``` I was able to reproduce this error both in the Colab and in my own pytorch/pytorch container pulled from the Docker Hub official PyTorch image, so I think it may be a server-side thing. ### Steps to reproduce the bug Steps to reproduce the issue: 1. Run `raw_datasets = load_dataset("code_search_net", "python")` ### Expected behavior The code should not raise an exception during the dataset pull. ### Environment info I tried both the default HF_DATASETS_CACHE on Colab and on my local container. I then pointed HF_DATASETS_CACHE to a large-capacity local storage and the problem was consistent across all 3 scenarios.
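A minimal sketch of the workaround mentioned in the comments above: load the mirrored subset instead of the archived CodeSearchNet files. The mirror name is taken verbatim from the comment and its continued availability is not guaranteed.

```python
from datasets import load_dataset

# Alternative source suggested in the comments, since the original S3 files
# behind code_search_net now return 403 Forbidden.
raw_datasets = load_dataset("espejelomar/code_search_net_python_10000_examples", "python")
print(raw_datasets)
```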
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5777/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5776/comments
https://api.github.com/repos/huggingface/datasets/issues/5776/events
https://github.com/huggingface/datasets/issues/5776
1,677,116,100
I_kwDODunzps5j9sLE
5,776
Use Pandas' `read_json` in the JSON builder
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
[]
1,682,010,949,000
1,682,010,949,000
null
CONTRIBUTOR
null
null
Instead of PyArrow's `read_json`, we should use `pd.read_json` in the JSON builder for consistency with the CSV and SQL builders (e.g., to address https://github.com/huggingface/datasets/issues/5725). In Pandas 2.0, to get the same performance, we can set the `engine` to "pyarrow". The issue is that Colab still doesn't install Pandas 2.0 by default, so I think it's best to wait for this to be resolved on their side to avoid downgrading decoding performance in scenarios where Pandas 2.0 is not installed.
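A minimal sketch of the pandas call referred to above, assuming a JSON Lines input; the file name is a placeholder and the fallback branch only covers older Pandas versions that do not know the "pyarrow" engine.

```python
import pandas as pd

# Pandas >= 2.0 can delegate JSON Lines decoding to PyArrow for speed.
try:
    df = pd.read_json("data.jsonl", lines=True, engine="pyarrow")
except TypeError:  # older Pandas: read_json() has no `engine` argument
    df = pd.read_json("data.jsonl", lines=True)

print(df.head())
```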
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5776/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5775/comments
https://api.github.com/repos/huggingface/datasets/issues/5775/events
https://github.com/huggingface/datasets/issues/5775
1,677,089,901
I_kwDODunzps5j9lxt
5,775
ArrowDataset.save_to_disk lost some logic of remote
{ "login": "Zoupers", "id": 29817738, "node_id": "MDQ6VXNlcjI5ODE3NzM4", "avatar_url": "https://avatars.githubusercontent.com/u/29817738?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Zoupers", "html_url": "https://github.com/Zoupers", "followers_url": "https://api.github.com/users/Zoupers/followers", "following_url": "https://api.github.com/users/Zoupers/following{/other_user}", "gists_url": "https://api.github.com/users/Zoupers/gists{/gist_id}", "starred_url": "https://api.github.com/users/Zoupers/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Zoupers/subscriptions", "organizations_url": "https://api.github.com/users/Zoupers/orgs", "repos_url": "https://api.github.com/users/Zoupers/repos", "events_url": "https://api.github.com/users/Zoupers/events{/privacy}", "received_events_url": "https://api.github.com/users/Zoupers/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
[ "We just fixed this on `main` and will do a new release soon :)" ]
1,682,009,881,000
1,682,511,096,000
1,682,511,077,000
NONE
null
null
### Describe the bug https://github.com/huggingface/datasets/blob/e7ce0ac60c7efc10886471932854903a7c19f172/src/datasets/arrow_dataset.py#L1371 Here is the bug point: when I want to save from a `DatasetDict` instance whose items look like `[('train', Dataset({features: ..., num_rows: ...}))]`, there is no guarantee that a directory named `train` exists under `dataset_dict_path`. ### Steps to reproduce the bug 1. Mock a DatasetDict with items like what I said. 2. Use save_to_disk with storage_options; you can use local SFTP. The code may look like below ```python from datasets import load_dataset dataset = load_dataset(...) dataset.save_to_disk('sftp:///tmp', storage_options={'host': 'localhost', 'username': 'admin'}) ``` I suppose you can reproduce the bug with these steps. ### Expected behavior It should create the folder if it does not exist, just like we do locally. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-6.2.10-arch1-1-x86_64-with-glibc2.35 - Python version: 3.10.9 - Huggingface_hub version: 0.13.2 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
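A rough sketch of a possible client-side workaround, not the library fix: pre-create the per-split directories on the remote filesystem before calling save_to_disk. Host, credentials and paths are placeholders, and SFTP support in fsspec requires paramiko.

```python
import fsspec
from datasets import load_dataset

storage_options = {"host": "localhost", "username": "admin"}  # placeholder credentials
fs = fsspec.filesystem("sftp", **storage_options)

dataset = load_dataset("imdb")  # any DatasetDict works here
for split in dataset:
    # create the split folder that save_to_disk expects to exist remotely
    fs.makedirs(f"/tmp/imdb_copy/{split}", exist_ok=True)

dataset.save_to_disk("sftp:///tmp/imdb_copy", storage_options=storage_options)
```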
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5775/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5773/comments
https://api.github.com/repos/huggingface/datasets/issues/5773/events
https://github.com/huggingface/datasets/issues/5773
1,675,984,633
I_kwDODunzps5j5X75
5,773
train_dataset does not implement __len__
{ "login": "v-yunbin", "id": 38179632, "node_id": "MDQ6VXNlcjM4MTc5NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/v-yunbin", "html_url": "https://github.com/v-yunbin", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "repos_url": "https://api.github.com/users/v-yunbin/repos", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Thanks for reporting, @v-yunbin.\r\n\r\nCould you please give more details, the steps to reproduce the bug, the complete error back trace and the environment information (`datasets-cli env`)?", "this is a detail error info from transformers๏ผš\r\n```\r\nTraceback (most recent call last):\r\n File \"finetune.py\", line 177, in <module>\r\n whisper_finetune(traindir,devdir,outdir)\r\n File \"finetune.py\", line 161, in whisper_finetune\r\n trainer = Seq2SeqTrainer(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer_seq2seq.py\", line 56, in __init__\r\n super().__init__(\r\n File \"/home/ybZhang/miniconda3/envs/whister/lib/python3.8/site-packages/transformers/trainer.py\", line 567, in __init__\r\n raise ValueError(\r\nValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.\r\n```\r\n", "How did you create `train_dataset`? The `datasets` library does not appear in your stack trace.\r\n\r\nWe need more information in order to reproduce the issue...", "```\r\ndef asr_dataset(traindir,devdir):\r\n we_voice = IterableDatasetDict()\r\n #we_voice[\"train\"] = load_from_disk(traindir,streaming=True)\r\n #we_voice[\"test\"]= load_from_disk(devdir,streaming=True)\r\n we_voice[\"train\"] = load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\",streaming=True)\r\n #print(load_dataset(\"csv\",data_files=os.path.join(traindir,\"train.csv\"),split=\"train\"))\r\n we_voice[\"test\"] = load_dataset(\"csv\",data_files=os.path.join(devdir,\"dev.csv\"), split=\"train\",streaming=True)\r\n we_voice = we_voice.remove_columns([\"id\"])\r\n we_voice = we_voice.cast_column(\"audio\", Audio(sampling_rate=16000))\r\n return we_voice\r\n\r\n```", "As you are using iterable datasets (`streaming=True`), their length is not defined.\r\n\r\nYou should:\r\n- Either use non-iterable datasets, which have a defined length: use `DatasetDict` and not passing `streaming=True`\r\n- Or pass `args.max_steps` to the `Trainer`", "I don't know how to give a reasonable args.max_steps...........................", "Then you should not use streaming." ]
1,681,965,425,000
1,681,987,319,000
null
NONE
null
null
When training using data preprocessed by the datasets library, I get the following error, which means I cannot set the number of epochs: `ValueError: The train_dataset does not implement __len__, max_steps has to be specified. The number of steps needs to be known in advance for the learning rate scheduler.`
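A minimal sketch of the two options suggested in the comments above, assuming a Trainer-based setup; the CSV path and the step count are placeholders.

```python
from datasets import load_dataset

# Option 1: a regular (non-streaming) dataset has a defined __len__, so the
# Trainer can derive the number of steps from num_train_epochs.
train_ds = load_dataset("csv", data_files="train.csv", split="train")

# Option 2: keep streaming=True, but then pass max_steps in the training
# arguments, e.g. roughly num_examples * num_epochs / effective_batch_size.
streamed_ds = load_dataset("csv", data_files="train.csv", split="train", streaming=True)
max_steps = 10_000  # placeholder; must be chosen by the user
```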
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5773/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5771/comments
https://api.github.com/repos/huggingface/datasets/issues/5771/events
https://github.com/huggingface/datasets/issues/5771
1,674,828,380
I_kwDODunzps5j09pc
5,771
Support cloud storage for loading datasets
{ "login": "eli-osherovich", "id": 2437102, "node_id": "MDQ6VXNlcjI0MzcxMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eli-osherovich", "html_url": "https://github.com/eli-osherovich", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "A duplicate of https://github.com/huggingface/datasets/issues/5281" ]
1,681,908,233,000
1,681,999,688,000
null
CONTRIBUTOR
null
null
### Feature request It seems that the current implementation supports cloud storage only for `load_from_disk`. It would be nice if similar functionality existed in `load_dataset`. ### Motivation Motivation is pretty clear -- let users work with datasets located in the cloud. ### Your contribution I can help implementing this.
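For context, a hedged sketch of what already works today: reading a dataset previously written with save_to_disk from cloud storage through an fsspec URL. The bucket name and credentials are placeholders, and this assumes a datasets version in which load_from_disk accepts storage_options.

```python
from datasets import load_from_disk

ds = load_from_disk(
    "s3://my-bucket/my_dataset",                       # placeholder bucket/path
    storage_options={"key": "...", "secret": "..."},   # placeholder credentials
)
print(ds)
```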
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5771/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5771/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5769/comments
https://api.github.com/repos/huggingface/datasets/issues/5769/events
https://github.com/huggingface/datasets/issues/5769
1,673,441,182
I_kwDODunzps5jvq-e
5,769
Tiktoken tokenizers are not pickable
{ "login": "markovalexander", "id": 22663468, "node_id": "MDQ6VXNlcjIyNjYzNDY4", "avatar_url": "https://avatars.githubusercontent.com/u/22663468?v=4", "gravatar_id": "", "url": "https://api.github.com/users/markovalexander", "html_url": "https://github.com/markovalexander", "followers_url": "https://api.github.com/users/markovalexander/followers", "following_url": "https://api.github.com/users/markovalexander/following{/other_user}", "gists_url": "https://api.github.com/users/markovalexander/gists{/gist_id}", "starred_url": "https://api.github.com/users/markovalexander/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/markovalexander/subscriptions", "organizations_url": "https://api.github.com/users/markovalexander/orgs", "repos_url": "https://api.github.com/users/markovalexander/repos", "events_url": "https://api.github.com/users/markovalexander/events{/privacy}", "received_events_url": "https://api.github.com/users/markovalexander/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "Thanks for reporting, @markovalexander.\r\n\r\nUnfortunately, I'm not able to reproduce the issue: the `tiktoken` tokenizer can be used within `Dataset.map`, both in my local machine and in a Colab notebook: https://colab.research.google.com/drive/1DhJroZgk0sNFJ2Mrz-jYgrmh9jblXaCG?usp=sharing\r\n\r\nAre you sure you are using `datasets` version 2.11.0?" ]
1,681,834,060,000
1,681,970,583,000
null
NONE
null
null
### Describe the bug Since the tiktoken tokenizer is not picklable, it is not possible to use it inside `dataset.map()` with multiprocessing enabled. However, you [made](https://github.com/huggingface/datasets/issues/5536) tiktoken's tokenizers picklable in `datasets==2.10.0` for caching. For some reason, this logic does not work in dataset processing and raises `TypeError: cannot pickle 'builtins.CoreBPE' object` ### Steps to reproduce the bug ``` from datasets import load_dataset import tiktoken dataset = load_dataset("stas/openwebtext-10k") enc = tiktoken.get_encoding("gpt2") def process(example): ids = enc.encode(example['text']) ids.append(enc.eot_token) out = {'ids': ids, 'len': len(ids)} return out tokenized = dataset.map( process, remove_columns=['text'], desc="tokenizing the OWT splits", num_proc=2, ) ``` ### Expected behavior Starts processing the dataset. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.0-1021-oracle-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.13.4 - PyArrow version: 9.0.0 - Pandas version: 2.0.0
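If the pickling error does reappear in some environment, a common workaround (not an official fix) is to construct the encoder inside the mapped function, so each worker builds its own CoreBPE object instead of unpickling one:

```python
import tiktoken
from datasets import load_dataset

def process(example):
    enc = tiktoken.get_encoding("gpt2")  # created per worker process, never pickled
    ids = enc.encode(example["text"])
    ids.append(enc.eot_token)
    return {"ids": ids, "len": len(ids)}

dataset = load_dataset("stas/openwebtext-10k")
tokenized = dataset.map(process, remove_columns=["text"], num_proc=2)
```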
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5769/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5768/comments
https://api.github.com/repos/huggingface/datasets/issues/5768/events
https://github.com/huggingface/datasets/issues/5768
1,672,494,561
I_kwDODunzps5jsD3h
5,768
load_dataset("squad") doesn't work in 2.7.1 and 2.10.1
{ "login": "yaseen157", "id": 57412770, "node_id": "MDQ6VXNlcjU3NDEyNzcw", "avatar_url": "https://avatars.githubusercontent.com/u/57412770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yaseen157", "html_url": "https://github.com/yaseen157", "followers_url": "https://api.github.com/users/yaseen157/followers", "following_url": "https://api.github.com/users/yaseen157/following{/other_user}", "gists_url": "https://api.github.com/users/yaseen157/gists{/gist_id}", "starred_url": "https://api.github.com/users/yaseen157/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yaseen157/subscriptions", "organizations_url": "https://api.github.com/users/yaseen157/orgs", "repos_url": "https://api.github.com/users/yaseen157/repos", "events_url": "https://api.github.com/users/yaseen157/events{/privacy}", "received_events_url": "https://api.github.com/users/yaseen157/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting, @yaseen157.\r\n\r\nCould you please give the complete error stack trace?", "I am not able to reproduce your issue: the dataset loads perfectly on my local machine and on a Colab notebook: https://colab.research.google.com/drive/1Fbdoa1JdNz8DOdX6gmIsOK1nCT8Abj4O?usp=sharing\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"squad\")\r\nDownloading builder script: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 5.27k/5.27k [00:00<00:00, 3.22MB/s]\r\nDownloading metadata: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2.36k/2.36k [00:00<00:00, 1.60MB/s]\r\nDownloading readme: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 7.67k/7.67k [00:00<00:00, 4.58MB/s]\r\nDownloading and preparing dataset squad/plain_text to ...t/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\nDownloading data: 30.3MB [00:00, 91.8MB/s] | 0/2 [00:00<?, ?it/s]\r\nDownloading data: 4.85MB [00:00, 75.3MB/s] \r\nDownloading data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 2.31it/s]\r\nExtracting data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 2157.01it/s]\r\nDataset squad downloaded and prepared to .../.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. 
Subsequent calls will reuse this data.\r\n100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 463.95it/s]\r\n\r\nIn [3]: ds\r\nOut[3]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 87599\r\n })\r\n validation: Dataset({\r\n features: ['id', 'title', 'context', 'question', 'answers'],\r\n num_rows: 10570\r\n })\r\n})\r\n```", "I am at a complete loss for what's happening here. A quick summary, I have 3 machines to try this with:\r\n1) My windows 10 laptop\r\n2) Linux machine1, super computer login node\r\n3) Linux machine2, super computer compute node\r\n\r\nLet's define the following as a test script for the machines:\r\n\r\n```\r\nimport traceback\r\nimport datasets\r\nprint(f\"{datasets.__version__=}\")\r\ntry:\r\n ds = datasets.load_dataset(\"squad\")\r\nexcept:\r\n traceback.print_exc()\r\nelse:\r\n print(\"Success!\")\r\n```\r\n\r\nThe Windows laptop enters some sort of traceback recursion loop:\r\n\r\n> datasets.__version__='2.7.1'\r\n> Downloading and preparing dataset squad/plain_text to C:/Users/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<?, ?it/s]\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 236, in prepare\r\n> _fixup_main_from_path(data['init_main_from_path'])\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 287, in _fixup_main_from_path\r\n> main_content = runpy.run_path(main_path,\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 267, in run_path\r\n> code, fname = _get_code_from_file(run_name, path_name)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\runpy.py\", line 237, in _get_code_from_file\r\n> with io.open_code(decoded_path) as f:\r\n> OSError: [Errno 22] Invalid argument: 'C:\\\\Users\\\\yr3g17\\\\OneDrive - University of Southampton\\\\Documents\\\\PhD-repository\\\\<input>'\r\n> Traceback (most recent call last):\r\n> File \"<string>\", line 1, in <module>\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 116, in spawn_main\r\n> exitcode = _main(fd, parent_sentinel)\r\n> File \"C:\\Users\\yr3g17\\AppData\\Local\\Programs\\Python\\Python39\\lib\\multiprocessing\\spawn.py\", line 125, in _main\r\n> prepare(preparation_data)\r\n**this error traceback is endlessly recursive**\r\n\r\nThis is a 
brand new issue that started today and I didn't even realise was a thing, as I had been using my windows machine to follow tracebacks for the other machines...\r\n\r\nI suspect this issue had something to do with my filepath naming, but I couldn't confirm this when I spent time trying to debug this myself weeks ago, something to do with files being locked and never released. I'm not too concerned about my laptop not working here because I've had so many issues with Microsoft OneDrive and my filesystem.\r\n\r\nLinux machines 1 and 2 were working fine for months, but have all of a sudden stopped working. Trying to run linux machine 1 (login node), I get:\r\n\r\n> datasets.__version__='2.10.1'\r\n> Downloading and preparing dataset json/squad to /home/yr3g17/.cache/hugg\r\ningface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2\r\nb650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n> Downloading data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ\r\nโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 4042.70\r\nit/s]\r\n>Extracting data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ\r\nโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 1\r\n11.15it/s]\r\n> Generating train split: 0 examples [00:00, ? examples/s]\r\n\r\n and hangs here. This has not happened to me before on the Linux machine. If I forcefully keyboard interrupt, I get:\r\n \r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 2, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/load.py\", line 1782, in load_dataset\r\n> builder_instance.download_and_prepare(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/builder.py\", line 793, in download_and_prepare\r\n> with FileLock(lock_path) if is_local else contextlib.nullcontext():\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 320, in __enter__\r\n> self.acquire()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/d\r\n> atasets/utils/filelock.py\", line 282, in acquire\r\n> time.sleep(poll_intervall)\r\n\r\nWhich also appears to be file lock related! I resolved this by navigating to my ~/.cache/huggingface/datasets directory and wiping out anything to do with the squad dataset in *.lock files. 
Now I get:\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset_load(\"squad\")\r\n\r\n```\r\n> Downloading and preparing dataset squad/plain_text to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb\r\n> 2511d223b9150cce08a837ef62ffea453...\r\n> Downloading data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 44.75it/s]\r\n> Extracting data files: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 8.54it/s]\r\n> Dataset squad downloaded and prepared to /home/yr3g17/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150\r\n> cce08a837ef62ffea453. Subsequent calls will reuse this data.\r\n> 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2/2 [00:00<00:00, 19.77it/s]\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 87599\r\n> })\r\n> validation: Dataset({\r\n> features: ['id', 'title', 'context', 'question', 'answers'],\r\n> num_rows: 10570\r\n> })\r\n> })\r\n> \r\n\r\nWhich all seems fine right, it's doing what it should be. But now, without ever leaving the IDE, I \"make a subsequent call\" to reuse the data by repeating the command. 
I encounter the following traceback\r\n\r\n`load_dataset(\"squad\")`\r\n\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1496, in load_dataset_builder\r\n> dataset_module = dataset_module_factory(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1151, in dataset_module_factory\r\n> ).get_module()\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 631, in get_module\r\n> data_files = DataFilesDict.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 796, in from_local_or_remote\r\n> DataFilesList.from_local_or_remote(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 764, in from_local_or_remote\r\n> data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/data_files.py\", line 369, in resolve_patterns_locally_or_by_urls\r\n> raise FileNotFoundError(error_msg)\r\n> FileNotFoundError: Unable to resolve any data file that matches '['train[-._ 0-9/]**', '**[-._ 0-9/]train[-._ 0-9/]**', 'training[-._ 0-9/]**', '**[-\r\n> ._ 0-9/]training[-._ 0-9/]**']' at /mainfs/home/yr3g17/.cache/huggingface/datasets/squad with any supported extension ['csv', 'tsv', 'json', 'jsonl',\r\n> 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'gr\r\n> ib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', '\r\n> mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', '\r\n> emf', 'xbm', 'xpm', 'BLP', 'BMP', 'DIB', 'BUFR', 'CUR', 'PCX', 'DCX', 'DDS', 'PS', 'EPS', 'FIT', 'FITS', 'FLI', 'FLC', 'FTC', 'FTU', 'GBR', 'GIF', 'G\r\n> RIB', 'H5', 'HDF', 'PNG', 'APNG', 'JP2', 'J2K', 'JPC', 'JPF', 'JPX', 'J2C', 'ICNS', 'ICO', 'IM', 'IIM', 'TIF', 'TIFF', 'JFIF', 'JPE', 'JPG', 'JPEG',\r\n> 'MPG', 'MPEG', 'MSP', 'PCD', 'PXR', 'PBM', 'PGM', 'PPM', 'PNM', 'PSD', 'BW', 'RGB', 'RGBA', 'SGI', 'RAS', 'TGA', 'ICB', 'VDA', 'VST', 'WEBP', 'WMF',\r\n> 'EMF', 'XBM', 'XPM', 'aiff', 'au', 'avr', 'caf', 'flac', 'htk', 'svx', 'mat4', 'mat5', 'mpc2k', 'ogg', 'paf', 'pvf', 'raw', 'rf64', 'sd2', 'sds', 'ir\r\n> cam', 'voc', 'w64', 'wav', 'nist', 'wavex', 'wve', 'xi', 'mp3', 'opus', 'AIFF', 'AU', 'AVR', 'CAF', 'FLAC', 'HTK', 'SVX', 'MAT4', 'MAT5', 'MPC2K', 'O\r\n> GG', 'PAF', 'PVF', 'RAW', 'RF64', 'SD2', 'SDS', 'IRCAM', 'VOC', 'W64', 'WAV', 'NIST', 'WAVEX', 'WVE', 'XI', 'MP3', 'OPUS', 'zip']\r\n\r\nIt doesn't even appear like I can reliably repeat this process. I'll nuke squad files in my dataset cache and run the Python code again (which downloads a new copy of the dataset to cache). 
It will either fail (as it just did in the quote above), or it will successfully recall the dataset.\r\n\r\nI repeated this nuking process a few times until calling load_dataset was reliably giving me the correct result (no filelocking issues or tracebacks). I then sent the test script as a job to the supercomputer compute nodes (which do not have internet access and therefore depend on cached data from Linux machine 1 login nodes)\r\n\r\n> Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810\r\n> ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n> Traceback (most recent call last):\r\n> File \"/mainfs/scratch/yr3g17/squad_qanswering/3054408/0/../../main.py\", line 5, in <module>\r\n> dataset = load_dataset(\"squad\")\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nand I have absolutely no idea why the second and third machines are producing different tracebacks. I have previously run these exact scripts successfully on the login and compute nodes of the supercomputer, this issue I'm raising has appeared fairly recently for me. This, is where I encounter the TypeError that I opened this issue with, which I was able to traceback (using my laptop before it too started not working) to whatever was dynamically importing \"builder_cls\". That bit of code wasn't doing importing builder_cls correctly and would effectively make the assignment \"builder_cls=None\" resulting in the TypeError. Does any of this help?", "I'm back on linux machine 1 (login node) now. 
After submitting that as a job to machine 2 and it failing with TypeError, linux machine 1 now produces identical traceback to machine 2:\r\n\r\n> (arkroyal) [yr3g17@cyan52 squad_qanswering]$ python\r\n> Python 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] on linux\r\n> Type \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>\r\n> from datasets import load_dataset\r\n> load_dataset(\"squad\")\r\n>\r\n> Traceback (most recent call last):\r\n> File \"<stdin>\", line 1, in <module>\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1759, in load_dataset\r\n> builder_instance = load_dataset_builder(\r\n> File \"/home/yr3g17/.conda/envs/arkroyal/lib/python3.10/site-packages/datasets/load.py\", line 1522, in load_dataset_builder\r\n> builder_instance: DatasetBuilder = builder_cls(\r\n> TypeError: 'NoneType' object is not callable\r\n\r\nI thought it might be useful to provide you with my cache file structure:\r\n\r\n>_home_yr3g17_.cache_huggingface_datasets_casino_default_1.1.0_302c3b1ac78c48091deabe83a11f4003c7b472a4e11a8eb92799653785bd5da1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_imdb_plain_text_1.0.0_2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_squad_plain_text_1.0.0_d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.lock\r\n>_home_yr3g17_.cache_huggingface_datasets_yelp_review_full_yelp_review_full_1.0.0_e8e18e19d7be9e75642fc66b198abadb116f73599ec89a69ba5dd8d1e57ba0bf.lock\r\n> casino\r\n> downloads\r\n> imdb\r\n> json\r\n> squad\r\n> squad_v2\r\n> yelp_review_full\r\n\r\nThe inside of squad/plain_text/1.0.0/ looks like\r\n\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453.incomplete_info.lock\r\n> d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453_builder.lock\r\n", "I see this is quite a complex use case...\r\n\r\nLet's try multiple things:\r\n- First, update `datasets` and make sure you use the same version in all machines, so that we can easily compare different behaviors.\r\n ```\r\n pip install -U datasets\r\n ```\r\n- Second, wherever you run the `load_dataset(\"squad\")` command, make sure there is not a local directory named \"squad\". The datasets library gives priority to any local file/directory over the datasets on the Hugging Face Hub\r\n - I tell you this, because in one of your trace backs, it seems it refers to a local directory:\r\n ```\r\n Downloading and preparing dataset json/squad to /home/yr3g17/.cache/huggingface/datasets/json/squad-d733af945be1d2c2/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...\r\n ```\r\n- Third, to use the \"squad\" dataset from the Hub, you need to have internet connection, so that you can download the \"squad\" Python loading script from the Hub. Do all your machines have internet connection?\r\n - I ask this because of this error message:\r\n ```\r\n Using the latest cached version of the module from /home/yr3g17/.cache/huggingface/modules/datasets_modules/datasets/squad/8730650fed465361f38ac4d810ccdd16e8fc87b56498e52fb7e2cadaefc1f177 (last modified on Tue Feb 14 10:12:56 2023) since it couldn't be found locally at squad., or remotely on the Hugging Face Hub.\r\n ```\r\n- Fourth, to be sure that we avoid any issues with the cache, it is a good idea to remove it and regenerate it. 
Remove `.cache/huggingface/datasets` and also `.cache/huggingface/modules`\r\n- Fifth, as an additional debugging tool, let's be sure we use the latest \"squad\" Python loading script by passing the revision parameter:\r\n ```\r\n ds = load_dataset(\"squad\", revision=\"5fe18c4c680f9922d794e3f4dd673a751c74ee37\")\r\n ```", "Additionally, we just had an infrastructure issue on the Hugging Face Hub at around 11:30 today. That might have contributed to the connectivity issue... It is fixed now.\r\n\r\nhttps://status.huggingface.co/", "Hi again, thanks for your help and insight Albert Villanova.\r\n\r\nIt's all working now, so thank you for that. For the benefit of anyone else who ends up in this thread, I solved the problem by addressing Albert's advice:\r\n\r\n(1) Both Windows and Linux machine 1 (have internet access) and can now access the SQuAD dataset. The supercomputer login node can only access version 2.7.1, but my Windows laptop is running on datasets 2.11.0 just fine. I suspect it was just a perfect storm alongside the aforementioned \"infrastructure issue\".\r\n\r\n(2) I did have a local directory called squad, because I was using a local copy of evaluate's \"SQuAD\" metric. The supercomputer compute nodes do not have internet access and treat `metric = evaluate.load('<x>')` as a way of loading a metric at the local path `./<x>/<x>.py`, which could've been a related issue as I was storing the metric under `squad/squad.py`. Don't be lazy like me and store the evaluation code under a path with a name that can be misinterpreted.\r\n\r\n(3) I can't give internet access to the supercomputer compute nodes, so local files do just fine here.\r\n\r\n(4) The windows and Linux machine 1 can both access the internet and were getting fresh copies of the dataset from the huggingface hub. Linux machine 2 was working after I cleared the contents of ~/.cache/huggingface/....\r\n\r\nI feel silly now, knowing it was all so simple! Sorry about that Albert, and thanks again for the help. I've not raised a Github issue like this before, so I'm not sure if I should be close my own issues or if this is something you guys do?", "Thanks for your detailed feedback which for sure will be useful to other community members." ]
1,681,801,856,000
1,681,986,443,000
1,681,986,442,000
NONE
null
null
### Describe the bug There is an issue that seems to be unique to the "squad" dataset, in which it cannot be loaded using standard methods. This issue is most quickly reproduced from the command line, using the HF examples to verify a dataset is loaded properly. This is not a problem with "squad_v2" dataset for example. ### Steps to reproduce the bug cmd line > $ python -c "from datasets import load_dataset; print(load_dataset('squad', split='train')[0])" OR Python IDE > from datasets import load_dataset > load_dataset("squad") ### Expected behavior I expected to either see the output described here from running the very same command in command line ([https://huggingface.co/docs/datasets/installation]), or any output that does not raise Python's TypeError. There is some funky behaviour in the dataset builder portion of the codebase that means it is trying to import the squad dataset with an incorrect path, or the squad dataset couldn't be downloaded. I'm not really sure what the problem is beyond that. Messing around with caching I did manage to get it to load the dataset once, and then couldn't repeat this. ### Environment info datasets=2.7.1 **or** 2.10.1, python=3.10.8, Linux 3.10.0-1160.36.2.el7.x86_64 **or** Windows 10-64
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5768/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5767/comments
https://api.github.com/repos/huggingface/datasets/issues/5767/events
https://github.com/huggingface/datasets/issues/5767
1,672,433,979
I_kwDODunzps5jr1E7
5,767
How to use Distill-BERT with different datasets?
{ "login": "sauravtii", "id": 109907638, "node_id": "U_kgDOBo0Otg", "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sauravtii", "html_url": "https://github.com/sauravtii", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "repos_url": "https://api.github.com/users/sauravtii/repos", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[ "Closing this one in favor of the same issue opened in the `transformers` repo." ]
1,681,799,112,000
1,682,009,525,000
1,682,009,525,000
NONE
null
null
### Describe the bug - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.12.0+cu102 (True) - Tensorflow version (GPU?): 2.10.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Steps to reproduce the bug I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)? ### Expected behavior Distill-BERT should work with different datasets. ### Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
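A hedged sketch of the pattern the quoted docs describe: keep the tokenizer paired with the model checkpoint and swap only the dataset. The column name below assumes the yhavinga/imdb_dutch schema mentioned in the issue.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)  # same name as the model
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

ds = load_dataset("yhavinga/imdb_dutch")
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)
```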
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5767/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5766/comments
https://api.github.com/repos/huggingface/datasets/issues/5766/events
https://github.com/huggingface/datasets/issues/5766
1,671,485,882
I_kwDODunzps5joNm6
5,766
Support custom feature types
{ "login": "jmontalt", "id": 37540982, "node_id": "MDQ6VXNlcjM3NTQwOTgy", "avatar_url": "https://avatars.githubusercontent.com/u/37540982?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jmontalt", "html_url": "https://github.com/jmontalt", "followers_url": "https://api.github.com/users/jmontalt/followers", "following_url": "https://api.github.com/users/jmontalt/following{/other_user}", "gists_url": "https://api.github.com/users/jmontalt/gists{/gist_id}", "starred_url": "https://api.github.com/users/jmontalt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jmontalt/subscriptions", "organizations_url": "https://api.github.com/users/jmontalt/orgs", "repos_url": "https://api.github.com/users/jmontalt/repos", "events_url": "https://api.github.com/users/jmontalt/events{/privacy}", "received_events_url": "https://api.github.com/users/jmontalt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[ "Hi ! Interesting :) What kind of new types would you like to use ?\r\n\r\nNote that you can already implement your own decoding by using `set_transform` that can decode data on-the-fly when rows are accessed" ]
1,681,746,401,000
1,682,091,172,000
null
NONE
null
null
### Feature request I think it would be nice to allow registering custom feature types with the ๐Ÿค— Datasets library. For example, allow to do something along the following lines: ``` from datasets.features import register_feature_type # this would be a new function @register_feature_type class CustomFeatureType: def encode_example(self, value): """User-provided logic to encode an example of this feature.""" pass def decode_example(self, value, token_per_repo_id=None): """User-provided logic to decode an example of this feature.""" pass ``` ### Motivation Users of ๐Ÿค— Datasets, such as myself, may want to use the library to load datasets with unsupported feature types (i.e., beyond `ClassLabel`, `Image`, or `Audio`). This would be useful for prototyping new feature types and for feature types that aren't used widely enough to warrant inclusion in ๐Ÿค— Datasets. At the moment, this is only possible by monkey-patching ๐Ÿค— Datasets, which obfuscates the code and is prone to breaking with library updates. It also requires the user to write some custom code which could be easily avoided. ### Your contribution I would be happy to contribute this feature. My proposed solution would involve changing the following call to `globals()` to an explicit feature type registry, which a user-facing `register_feature_type` decorator could update. https://github.com/huggingface/datasets/blob/fd893098627230cc734f6009ad04cf885c979ac4/src/datasets/features/features.py#L1329 I would also provide an abstract base class for custom feature types which users could inherit. This would have at least an `encode_example` method and a `decode_example` method, similar to `Image` or `Audio`. The existing `encode_nested_example` and `decode_nested_example` functions would also need to be updated to correctly call the corresponding functions for the new type.
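A minimal sketch of the on-the-fly decoding route mentioned in the comments above (set_transform), using a made-up string column; this is a stopgap, not the proposed register_feature_type API.

```python
from datasets import Dataset

ds = Dataset.from_dict({"payload": ["1,2,3", "4,5"]})

def decode_batch(batch):
    # hypothetical custom decoding: comma-separated strings -> lists of ints
    batch["payload"] = [[int(x) for x in s.split(",")] for s in batch["payload"]]
    return batch

ds.set_transform(decode_batch)
print(ds[0])  # payload is decoded only when the row is accessed
```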
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5766/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5765/comments
https://api.github.com/repos/huggingface/datasets/issues/5765/events
https://github.com/huggingface/datasets/issues/5765
1,671,388,824
I_kwDODunzps5jn16Y
5,765
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text']
{ "login": "sauravtii", "id": 109907638, "node_id": "U_kgDOBo0Otg", "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sauravtii", "html_url": "https://github.com/sauravtii", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "repos_url": "https://api.github.com/users/sauravtii/repos", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
[ "You need to remove the `text` and `text_en` columns before passing the dataset to the `DataLoader` to avoid this error:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n```\r\n", "Thanks @mariosasko. Now I am getting this error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"client_2.py\", line 138, in <module>\r\n main()\r\n File \"client_2.py\", line 134, in main\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 208, in start_numpy_client\r\n start_client(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 142, in start_client\r\n client_message, sleep_duration, keep_going = handle(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 68, in handle\r\n return _fit(client, server_msg.fit_ins), 0, True\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py\", line 157, in _fit\r\n fit_res = client.fit(fit_ins)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py\", line 252, in _fit\r\n results = self.numpy_client.fit(parameters, ins.config) # type: ignore\r\n File \"client_2.py\", line 124, in fit\r\n train(net, trainloader, epochs=1)\r\n File \"client_2.py\", line 78, in train\r\n for batch in trainloader:\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 652, in __next__\r\n data = self._next_data()\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py\", line 692, in _next_data\r\n data = self._dataset_fetcher.fetch(index) # may raise StopIteration\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in fetch\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py\", line 49, in <listcomp>\r\n data = [self.dataset[idx] for idx in possibly_batched_index]\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1525, in __getitem__\r\n return self._getitem(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py\", line 1517, in _getitem\r\n pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 373, in query_table\r\n pa_subtable = _query_table_with_indices_mapping(table, key, indices=indices)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 55, in _query_table_with_indices_mapping\r\n return _query_table(table, key)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py\", line 79, in _query_table\r\n return table.fast_slice(key % table.num_rows, 1)\r\nZeroDivisionError: integer division or modulo by zero\r\n```\r\n\r\nThis is my code:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import 
AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n#from transformers import tokenized_datasets\r\n\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n# DEVICE = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\r\n\r\nDEVICE = \"cpu\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"yhavinga/imdb_dutch\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n # random 100 samples\r\n population = random.sample(range(len(raw_datasets[\"train\"])), 100)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n tokenized_datasets[\"train\"] = tokenized_datasets[\"train\"].select(population)\r\n tokenized_datasets[\"test\"] = tokenized_datasets[\"test\"].select(population)\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n # tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n # tokenized_datasets = tokenized_datasets.remove_columns(\"text_en\")\r\n\r\n # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets[\"train\"].column_names)\r\n \r\n tokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n \r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-4)\r\n net.train()\r\n for _ in range(epochs):\r\n for batch in trainloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n optimizer.step()\r\n optimizer.zero_grad()\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in 
params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n return float(loss), len(testloader), {\"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:8080\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```", "Please also remove/comment these lines:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"attention_mask\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"input_ids\")\r\ntokenized_datasets = tokenized_datasets.remove_columns(\"label\")\r\n```", "Thanks @mariosasko .\r\n\r\nNow, I am trying out this [tutorial](https://flower.dev/docs/quickstart-huggingface.html) which basically trains distil-BERT with IMDB dataset (very similar to this [tutorial](https://huggingface.co/docs/transformers/main/tasks/sequence_classification)). But I don't know why my accuracy isn't increasing even after training for a significant amount of time and also by using the entire dataset. Below I have attached `client.py` file:\r\n\r\n`client.py`:\r\n\r\n```\r\nfrom collections import OrderedDict\r\nimport warnings\r\n\r\nimport flwr as fl\r\nimport torch\r\nimport numpy as np\r\n\r\nimport random\r\nfrom torch.utils.data import DataLoader\r\n\r\nfrom datasets import load_dataset, load_metric\r\n\r\nfrom transformers import AutoTokenizer, DataCollatorWithPadding\r\nfrom transformers import AutoModelForSequenceClassification\r\nfrom transformers import AdamW\r\n\r\nwarnings.filterwarnings(\"ignore\", category=UserWarning)\r\n\r\nDEVICE = \"cuda:1\"\r\n\r\nCHECKPOINT = \"distilbert-base-uncased\" # transformer model checkpoint\r\n\r\n\r\ndef load_data():\r\n \"\"\"Load IMDB data (training and eval)\"\"\"\r\n raw_datasets = load_dataset(\"imdb\")\r\n raw_datasets = raw_datasets.shuffle(seed=42)\r\n\r\n # remove unnecessary data split\r\n del raw_datasets[\"unsupervised\"]\r\n\r\n tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)\r\n\r\n def tokenize_function(examples):\r\n return tokenizer(examples[\"text\"], truncation=True)\r\n\r\n tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)\r\n\r\n tokenized_datasets = tokenized_datasets.remove_columns(\"text\")\r\n tokenized_datasets = tokenized_datasets.rename_column(\"label\", \"labels\")\r\n\r\n data_collator = DataCollatorWithPadding(tokenizer=tokenizer)\r\n trainloader = DataLoader(\r\n tokenized_datasets[\"train\"],\r\n shuffle=True,\r\n batch_size=32,\r\n collate_fn=data_collator,\r\n )\r\n\r\n testloader = DataLoader(\r\n tokenized_datasets[\"test\"], batch_size=32, collate_fn=data_collator\r\n )\r\n\r\n return trainloader, testloader\r\n\r\n\r\ndef train(net, trainloader, epochs):\r\n optimizer = AdamW(net.parameters(), lr=5e-5)\r\n net.train()\r\n for i in range(epochs):\r\n print(\"Epoch: \", i+1)\r\n j = 1\r\n print(\"####################### The length of the trainloader is: \", len(trainloader)) \r\n for batch in trainloader:\r\n print(\"####################### The batch number is: \", j)\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n outputs = net(**batch)\r\n loss = outputs.loss\r\n loss.backward()\r\n 
optimizer.step()\r\n optimizer.zero_grad()\r\n j += 1\r\n\r\n\r\ndef test(net, testloader):\r\n metric = load_metric(\"accuracy\")\r\n loss = 0\r\n net.eval()\r\n for batch in testloader:\r\n batch = {k: v.to(DEVICE) for k, v in batch.items()}\r\n with torch.no_grad():\r\n outputs = net(**batch)\r\n logits = outputs.logits\r\n loss += outputs.loss.item()\r\n predictions = torch.argmax(logits, dim=-1)\r\n metric.add_batch(predictions=predictions, references=batch[\"labels\"])\r\n loss /= len(testloader.dataset)\r\n accuracy = metric.compute()[\"accuracy\"]\r\n return loss, accuracy\r\n\r\n\r\ndef main():\r\n net = AutoModelForSequenceClassification.from_pretrained(\r\n CHECKPOINT, num_labels=2\r\n ).to(DEVICE)\r\n\r\n trainloader, testloader = load_data()\r\n\r\n # Flower client\r\n class IMDBClient(fl.client.NumPyClient):\r\n def get_parameters(self, config):\r\n return [val.cpu().numpy() for _, val in net.state_dict().items()]\r\n\r\n def set_parameters(self, parameters):\r\n params_dict = zip(net.state_dict().keys(), parameters)\r\n state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})\r\n net.load_state_dict(state_dict, strict=True)\r\n\r\n def fit(self, parameters, config):\r\n self.set_parameters(parameters)\r\n print(\"Training Started...\")\r\n train(net, trainloader, epochs=1)\r\n print(\"Training Finished.\")\r\n return self.get_parameters(config={}), len(trainloader), {}\r\n\r\n def evaluate(self, parameters, config):\r\n self.set_parameters(parameters)\r\n loss, accuracy = test(net, testloader)\r\n print({\"loss\": float(loss), \"accuracy\": float(accuracy)})\r\n return float(loss), len(testloader), {\"loss\": float(loss), \"accuracy\": float(accuracy)}\r\n\r\n # Start client\r\n fl.client.start_numpy_client(server_address=\"localhost:5040\", client=IMDBClient())\r\n\r\n\r\nif __name__ == \"__main__\":\r\n main()\r\n```\r\n\r\nCan I get any help, please?" ]
1,681,743,650,000
1,682,430,645,000
null
NONE
null
null
### Describe the bug Following is my code that I am trying to run, but facing an error (have attached the whole error below): My code: ``` from collections import OrderedDict import warnings import flwr as fl import torch import numpy as np import random from torch.utils.data import DataLoader from datasets import load_dataset, load_metric from transformers import AutoTokenizer, DataCollatorWithPadding from transformers import AutoModelForSequenceClassification from transformers import AdamW #from transformers import tokenized_datasets warnings.filterwarnings("ignore", category=UserWarning) # DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") DEVICE = "cpu" CHECKPOINT = "distilbert-base-uncased" # transformer model checkpoint def load_data(): """Load IMDB data (training and eval)""" raw_datasets = load_dataset("yhavinga/imdb_dutch") raw_datasets = raw_datasets.shuffle(seed=42) # remove unnecessary data split del raw_datasets["unsupervised"] tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT) def tokenize_function(examples): return tokenizer(examples["text"], truncation=True) # random 100 samples population = random.sample(range(len(raw_datasets["train"])), 100) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) tokenized_datasets["train"] = tokenized_datasets["train"].select(population) tokenized_datasets["test"] = tokenized_datasets["test"].select(population) # tokenized_datasets = tokenized_datasets.remove_columns("text") # tokenized_datasets = tokenized_datasets.rename_column("label", "labels") tokenized_datasets = tokenized_datasets.remove_columns("attention_mask") tokenized_datasets = tokenized_datasets.remove_columns("input_ids") tokenized_datasets = tokenized_datasets.remove_columns("label") tokenized_datasets = tokenized_datasets.remove_columns("text_en") # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets["train"].column_names) data_collator = DataCollatorWithPadding(tokenizer=tokenizer) trainloader = DataLoader( tokenized_datasets["train"], shuffle=True, batch_size=32, collate_fn=data_collator, ) testloader = DataLoader( tokenized_datasets["test"], batch_size=32, collate_fn=data_collator ) return trainloader, testloader def train(net, trainloader, epochs): optimizer = AdamW(net.parameters(), lr=5e-4) net.train() for _ in range(epochs): for batch in trainloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} outputs = net(**batch) loss = outputs.loss loss.backward() optimizer.step() optimizer.zero_grad() def test(net, testloader): metric = load_metric("accuracy") loss = 0 net.eval() for batch in testloader: batch = {k: v.to(DEVICE) for k, v in batch.items()} with torch.no_grad(): outputs = net(**batch) logits = outputs.logits loss += outputs.loss.item() predictions = torch.argmax(logits, dim=-1) metric.add_batch(predictions=predictions, references=batch["labels"]) loss /= len(testloader.dataset) accuracy = metric.compute()["accuracy"] return loss, accuracy def main(): net = AutoModelForSequenceClassification.from_pretrained( CHECKPOINT, num_labels=2 ).to(DEVICE) trainloader, testloader = load_data() # Flower client class IMDBClient(fl.client.NumPyClient): def get_parameters(self, config): return [val.cpu().numpy() for _, val in net.state_dict().items()] def set_parameters(self, parameters): params_dict = zip(net.state_dict().keys(), parameters) state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict}) net.load_state_dict(state_dict, strict=True) def fit(self, parameters, config): 
self.set_parameters(parameters) print("Training Started...") train(net, trainloader, epochs=1) print("Training Finished.") return self.get_parameters(config={}), len(trainloader), {} def evaluate(self, parameters, config): self.set_parameters(parameters) loss, accuracy = test(net, testloader) return float(loss), len(testloader), {"accuracy": float(accuracy)} # Start client fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient()) if __name__ == "__main__": main() ``` Error: ``` Traceback (most recent call last): File "client_2.py", line 136, in <module> main() File "client_2.py", line 132, in main fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient()) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client start_client( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client client_message, sleep_duration, keep_going = handle( File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 68, in handle return _fit(client, server_msg.fit_ins), 0, True File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 157, in _fit fit_res = client.fit(fit_ins) File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 252, in _fit results = self.numpy_client.fit(parameters, ins.config) # type: ignore File "client_2.py", line 122, in fit train(net, trainloader, epochs=1) File "client_2.py", line 76, in train for batch in trainloader: File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 652, in __next__ data = self._next_data() File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 692, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch return self.collate_fn(data) File "/home/saurav/.local/lib/python3.8/site-packages/transformers/data/data_collator.py", line 221, in __call__ batch = self.tokenizer.pad( File "/home/saurav/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2713, in pad raise ValueError( ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text'] ``` ### Steps to reproduce the bug Run the above code. ### Expected behavior Don't know, doing it for the first time. ### Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 11.0.0
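For reference, a condensed, untested sketch of the preprocessing fix suggested in the comments above (keep the tokenizer outputs, drop only the raw string columns before building the `DataLoader`); the `rename_column("label", "labels")` step follows the tutorial-style code quoted later in the thread:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, DataCollatorWithPadding

raw_datasets = load_dataset("yhavinga/imdb_dutch")
del raw_datasets["unsupervised"]  # as in the report: drop the unused split

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
tokenized = raw_datasets.map(lambda ex: tokenizer(ex["text"], truncation=True), batched=True)

# Keep input_ids/attention_mask and the label; drop only the raw string columns.
tokenized = tokenized.remove_columns(["text", "text_en"])
tokenized = tokenized.rename_column("label", "labels")

collator = DataCollatorWithPadding(tokenizer=tokenizer)
trainloader = DataLoader(tokenized["train"], batch_size=32, shuffle=True, collate_fn=collator)
```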
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5765/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5764/comments
https://api.github.com/repos/huggingface/datasets/issues/5764/events
https://github.com/huggingface/datasets/issues/5764
1,670,740,198
I_kwDODunzps5jlXjm
5,764
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
{ "login": "sauravtii", "id": 109907638, "node_id": "U_kgDOBo0Otg", "avatar_url": "https://avatars.githubusercontent.com/u/109907638?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sauravtii", "html_url": "https://github.com/sauravtii", "followers_url": "https://api.github.com/users/sauravtii/followers", "following_url": "https://api.github.com/users/sauravtii/following{/other_user}", "gists_url": "https://api.github.com/users/sauravtii/gists{/gist_id}", "starred_url": "https://api.github.com/users/sauravtii/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sauravtii/subscriptions", "organizations_url": "https://api.github.com/users/sauravtii/orgs", "repos_url": "https://api.github.com/users/sauravtii/repos", "events_url": "https://api.github.com/users/sauravtii/events{/privacy}", "received_events_url": "https://api.github.com/users/sauravtii/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting, @sauravtii.\r\n\r\nUnfortunately, I'm not able to reproduce the issue:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset(\"josianem/imdb\")\r\n\r\nIn [2]: ds\r\nOut[2]: \r\nDatasetDict({\r\n train: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25799\r\n })\r\n test: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 25000\r\n })\r\n unsupervised: Dataset({\r\n features: ['text', 'label'],\r\n num_rows: 50000\r\n })\r\n})\r\n```\r\n\r\nCould you please retry to load the dataset? Maybe there was a temporary connection issue to Dropbox.", "Thanks @albertvillanova. I am facing another issue now\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 738, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]\r\n```\r\n\r\nThis is my code\r\n\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\")\r\n```", "Your connection didn't work and you got an empty dataset (`num_bytes=0, num_examples=0`):\r\n```\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: \r\n[\r\n {\r\n 'expected': SplitInfo(name='train', num_bytes=34501348, num_examples=25799, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }, \r\n {\r\n 'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), \r\n 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')\r\n }\r\n]\r\n```\r\n\r\nCould you please try the link in your browser and see if it works? 
https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n- If it does not work, you should contact the author of the dataset in their Community tab (https://huggingface.co/datasets/josianem/imdb/discussions) and inform them, so that they can host their data elsewhere, for example on the Hugging Face Hub itself\r\n\r\nIf the link works, you should try to load the dataset but forcing the re-download of the data files (so that the cache is refreshed with the actual data file), by passing `download_mode=\"force_redownload\"`:\r\n```python\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "After pasting the link in the browser, it did start the download so it seems that the link is working. But even after including the `download_mode` in my code I am facing the same issue:\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"sample.py\", line 4, in <module>\r\n dataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py\", line 1112, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 636, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py\", line 704, in _download_and_prepare\r\n split_generators = self._split_generators(dl_manager, **split_generators_kwargs)\r\n File \"/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py\", line 79, in _split_generators\r\n archive = dl_manager.download(_DOWNLOAD_URL)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 196, in download\r\n downloaded_path_or_paths = map_nested(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py\", line 197, in map_nested\r\n return function(data_struct)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py\", line 217, in _download\r\n return cached_path(url_or_filename, download_config=download_config)\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 289, in cached_path\r\n output_path = get_from_cache(\r\n File \"/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py\", line 606, in get_from_cache\r\n raise ConnectionError(\"Couldn't reach {}\".format(url))\r\nConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1\r\n```\r\n\r\nMy code:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n```", "I have tried again to reproduce your issue without success: the dataset loads perfectly, both in my local machine and in a Colab notebook.\r\n- See: https://colab.research.google.com/drive/1dky3T0XGFuldggy22NNQQN-UqOFqvnuY?usp=sharing\r\n\r\nI think the cause maight be that you are using a very old version of `datasets`. Please, could you update it and retry?\r\n```\r\npip install -U datasets\r\n```", "That worked!! 
Thanks @albertvillanova : )\r\n\r\n```\r\nDownloading builder script: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 4.20k/4.20k [00:00<00:00, 6.69MB/s]\r\nDownloading metadata: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 2.60k/2.60k [00:00<00:00, 3.41MB/s]\r\nDownloading readme: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 7.52k/7.52k [00:00<00:00, 12.6MB/s]\r\nDownloading and preparing dataset imdb/plain_text to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f...\r\nDownloading data: 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 301M/301M [01:32<00:00, 3.25MB/s]\r\nDataset imdb downloaded and prepared to /home/saurav/.cache/huggingface/datasets/josianem___imdb/plain_text/1.0.0/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f. Subsequent calls will reuse this data.\r\n100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 3/3 [00:00<00:00, 794.83it/s]\r\n```\r\n\r\nThe code I used:\r\n```\r\nfrom datasets import load_dataset, load_metric\r\n\r\ndataset = load_dataset(\"josianem/imdb\", download_mode=\"force_redownload\")\r\n\r\n```\r\n\r\nBut when I remove `download_mode=\"force_redownload\"` I get the same error. Any guess on that?", "That is because the cache got the \"empty\" download file the first time you tried and got the connection error.\r\n\r\nThen, once you no longer get the connection error, you need to refresh the cache by passing `download_mode=\"force_redownload\"`." ]
1,681,722,498,000
1,681,802,300,000
1,681,802,300,000
NONE
null
null
### Describe the bug

I want to use this (https://huggingface.co/datasets/josianem/imdb) dataset, therefore I am trying to load it using the following code:

```
dataset = load_dataset("josianem/imdb")
```

The dataset is not getting loaded and gives the following error message:

```
Traceback (most recent call last):
  File "sample.py", line 3, in <module>
    dataset = load_dataset("josianem/imdb")
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare
    self._download_and_prepare(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/builder.py", line 704, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/saurav/.cache/huggingface/modules/datasets_modules/datasets/imdb/cc6ab4acab2799be15d5d217c24548b856156dafdc850165fdc4f2031f27ff2f/imdb.py", line 79, in _split_generators
    archive = dl_manager.download(_DOWNLOAD_URL)
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download
    downloaded_path_or_paths = map_nested(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
    return function(data_struct)
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 289, in cached_path
    output_path = get_from_cache(
  File "/home/saurav/.local/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 606, in get_from_cache
    raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://www.dropbox.com/s/zts98j4vkqtsns6/aclImdb_v2.tar?dl=1
```

### Steps to reproduce the bug

You can reproduce the error by using the following code:

```
from datasets import load_dataset, load_metric

dataset = load_dataset("josianem/imdb")
```

### Expected behavior

The dataset should get loaded (I am using this dataset for the first time so not much aware of the exact behavior).

### Environment info

- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0
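The thread above resolves this by updating `datasets` and forcing a re-download of the cached (empty) file. On recent `datasets` versions the same re-download can also be requested with the `DownloadMode` enum; a small hedged sketch, assuming a 2.x release where `DownloadMode` is exposed:

```python
from datasets import DownloadMode, load_dataset

# Same effect as download_mode="force_redownload": refresh a cache entry
# left behind by a previously failed download.
dataset = load_dataset("josianem/imdb", download_mode=DownloadMode.FORCE_REDOWNLOAD)
```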
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5764/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5762/comments
https://api.github.com/repos/huggingface/datasets/issues/5762/events
https://github.com/huggingface/datasets/issues/5762
1,670,326,470
I_kwDODunzps5jjyjG
5,762
Not able to load the pile
{ "login": "surya-narayanan", "id": 17240858, "node_id": "MDQ6VXNlcjE3MjQwODU4", "avatar_url": "https://avatars.githubusercontent.com/u/17240858?v=4", "gravatar_id": "", "url": "https://api.github.com/users/surya-narayanan", "html_url": "https://github.com/surya-narayanan", "followers_url": "https://api.github.com/users/surya-narayanan/followers", "following_url": "https://api.github.com/users/surya-narayanan/following{/other_user}", "gists_url": "https://api.github.com/users/surya-narayanan/gists{/gist_id}", "starred_url": "https://api.github.com/users/surya-narayanan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surya-narayanan/subscriptions", "organizations_url": "https://api.github.com/users/surya-narayanan/orgs", "repos_url": "https://api.github.com/users/surya-narayanan/repos", "events_url": "https://api.github.com/users/surya-narayanan/events{/privacy}", "received_events_url": "https://api.github.com/users/surya-narayanan/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting, @surya-narayanan.\r\n\r\nI see you already started a discussion about this on the Community tab of the corresponding dataset: https://huggingface.co/datasets/EleutherAI/the_pile/discussions/10\r\nLet's continue the discussion there!" ]
1,681,700,950,000
1,681,724,247,000
1,681,724,247,000
NONE
null
null
### Describe the bug

Got this error when I am trying to load the pile dataset

```
TypeError: Couldn't cast array of type struct<file: string, id: string> to {'id': Value(dtype='string', id=None)}
```

### Steps to reproduce the bug

Please visit the following sample notebook
https://colab.research.google.com/drive/1JHcjawcHL6QHhi5VcqYd07W2QCEj2nWK#scrollTo=ulJP3eJCI-tB

### Expected behavior

The pile should work

### Environment info

- `datasets` version: 2.11.0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.9.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5762/timeline
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5761/comments
https://api.github.com/repos/huggingface/datasets/issues/5761/events
https://github.com/huggingface/datasets/issues/5761
1,670,034,582
I_kwDODunzps5jirSW
5,761
One or several metadata.jsonl were found, but not in the same directory or in a parent directory
{ "login": "blghtr", "id": 69686152, "node_id": "MDQ6VXNlcjY5Njg2MTUy", "avatar_url": "https://avatars.githubusercontent.com/u/69686152?v=4", "gravatar_id": "", "url": "https://api.github.com/users/blghtr", "html_url": "https://github.com/blghtr", "followers_url": "https://api.github.com/users/blghtr/followers", "following_url": "https://api.github.com/users/blghtr/following{/other_user}", "gists_url": "https://api.github.com/users/blghtr/gists{/gist_id}", "starred_url": "https://api.github.com/users/blghtr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/blghtr/subscriptions", "organizations_url": "https://api.github.com/users/blghtr/orgs", "repos_url": "https://api.github.com/users/blghtr/repos", "events_url": "https://api.github.com/users/blghtr/events{/privacy}", "received_events_url": "https://api.github.com/users/blghtr/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Also, when generated from a zip archive, the dataset contains only a few images. In my case, 20 versus 2000+ contained in the archive. The generation from folders works as expected.", "Thanks for reporting, @blghtr.\r\n\r\nYou should include the `metadata.jsonl` in your ZIP archives, at the root level directory.\r\n\r\nI agree that our documentation is not clear enough. Maybe we could improve it.", "You can find a dummy dataset example here: https://huggingface.co/datasets/albertvillanova/tmp-imagefolder-metadata\r\n\r\n```\r\ntmp-imagefolder-metadata/\r\nโ””โ”€โ”€ data/\r\n โ”œโ”€โ”€ train.zip\r\n โ””โ”€โ”€ valid.zip\r\n```\r\nwhere, the directory structure within the `train.zip` archive is:\r\n```\r\nmetadata.jsonl\r\ntrain/\r\n โ”œโ”€โ”€ bharatanatyam/\r\n โ””โ”€โ”€ bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\r\n โ””โ”€โ”€ kathak/\r\n โ””โ”€โ”€ kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\r\n```\r\nand the metadata file contains:\r\n```\r\n{\"file_name\": \"train/bharatanatyam/bharatanatyam_original_113.jpg_70c297a2-e2f2-4ed8-b93c-0c03d0809fe2.jpg\", \"text\": \"first\"}\r\n{\"file_name\": \"train/kathak/kathak_original_10.jpg_2c4a2c3d-47fc-4b33-9c09-38b542826632.jpg\", \"text\": \"second\"}\r\n```" ]
1,681,662,115,000
1,681,905,204,000
null
NONE
null
null
### Describe the bug An attempt to generate a dataset from a zip archive using imagefolder and metadata.jsonl does not lead to the expected result. Tried all possible locations of the json file: the file in the archive is ignored (generated dataset contains only images), the file next to the archive like [here](https://huggingface.co/docs/datasets/image_dataset#imagefolder) leads to an error: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1610, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1609 _time = time.time() -> 1610 for key, record in generator: 1611 if max_shard_size is not None and writer._num_bytes > max_shard_size: File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\packaged_modules\folder_based_builder\folder_based_builder.py:370, in FolderBasedBuilder._generate_examples(self, files, metadata_files, split_name, add_metadata, add_labels) 369 else: --> 370 raise ValueError( 371 f"One or several metadata.{metadata_ext} were found, but not in the same directory or in a parent directory of {downloaded_dir_file}." 372 ) 373 if metadata_dir is not None and downloaded_metadata_file is not None: ValueError: One or several metadata.jsonl were found, but not in the same directory or in a parent directory of C:\Users\User\.cache\huggingface\datasets\downloads\extracted\f7fb7de25fb28ae63089974524f2d271a39d83888bc456d04aa3b3d45f33e6a6\ff0745a0-a741-4d9e-b228-a93b851adf61.png. The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset = load_dataset("imagefolder", data_dir=r'C:\Users\User\data') File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\load.py:1791, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 1788 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1790 # Download and prepare data -> 1791 builder_instance.download_and_prepare( 1792 download_config=download_config, 1793 download_mode=download_mode, 1794 verification_mode=verification_mode, 1795 try_from_hf_gcs=try_from_hf_gcs, 1796 num_proc=num_proc, 1797 storage_options=storage_options, 1798 ) 1800 # Build dataset for splits 1801 keep_in_memory = ( 1802 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1803 ) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:891, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 889 if num_proc is not None: 890 prepare_split_kwargs["num_proc"] = num_proc --> 891 self._download_and_prepare( 892 dl_manager=dl_manager, 893 verification_mode=verification_mode, 894 **prepare_split_kwargs, 895 **download_and_prepare_kwargs, 896 ) 897 # Sync info 898 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1651, in 
GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs) 1650 def _download_and_prepare(self, dl_manager, verification_mode, **prepare_splits_kwargs): -> 1651 super()._download_and_prepare( 1652 dl_manager, 1653 verification_mode, 1654 check_duplicate_keys=verification_mode == VerificationMode.BASIC_CHECKS 1655 or verification_mode == VerificationMode.ALL_CHECKS, 1656 **prepare_splits_kwargs, 1657 ) File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:986, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 982 split_dict.add(split_generator.split_info) 984 try: 985 # Prepare split will record examples associated to the split --> 986 self._prepare_split(split_generator, **prepare_split_kwargs) 987 except OSError as e: 988 raise OSError( 989 "Cannot find data file. " 990 + (self.manual_download_instructions or "") 991 + "\nOriginal error:\n" 992 + str(e) 993 ) from None File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1490, in GeneratorBasedBuilder._prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1488 gen_kwargs = split_generator.gen_kwargs 1489 job_id = 0 -> 1490 for job_id, done, content in self._prepare_split_single( 1491 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1492 ): 1493 if done: 1494 result = content File ~\PycharmProjects\testproj\venv\lib\site-packages\datasets\builder.py:1646, in GeneratorBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, split_info, check_duplicate_keys, job_id) 1644 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1645 e = e.__context__ -> 1646 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1648 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug 1. Organize directory structure like in the docs: folder/metadata.jsonl folder/train.zip 2. Run load_dataset("imagefolder", data_dir='folder/metadata.jsonl', split='train') ### Expected behavior Dataset generated with all additional features from metadata.jsonl ### Environment info - `datasets` version: 2.11.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.0 - Huggingface_hub version: 0.13.4 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
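Following the maintainer's dummy-dataset example in the comments above, the `metadata.jsonl` has to live inside the ZIP archive at its root rather than next to it. A rough sketch of the expected layout and the corresponding load call; all paths and file names below are placeholders:

```python
from datasets import load_dataset

# Placeholder layout, with metadata.jsonl at the root of the archive:
#
# folder/
# โ””โ”€โ”€ train.zip
#     โ”œโ”€โ”€ metadata.jsonl        # one {"file_name": "train/xxx.png", ...} object per line
#     โ””โ”€โ”€ train/
#         โ”œโ”€โ”€ xxx.png
#         โ””โ”€โ”€ yyy.png
dataset = load_dataset("imagefolder", data_dir="folder", split="train")
```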
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5761/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5760/comments
https://api.github.com/repos/huggingface/datasets/issues/5760/events
https://github.com/huggingface/datasets/issues/5760
1,670,028,072
I_kwDODunzps5jipso
5,760
Multi-image loading in Imagefolder dataset
{ "login": "vvvm23", "id": 44398246, "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vvvm23", "html_url": "https://github.com/vvvm23", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "repos_url": "https://api.github.com/users/vvvm23/repos", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
[]
1,681,660,865,000
1,681,660,865,000
null
NONE
null
null
### Feature request

Extend the `imagefolder` dataloading script to support loading multiple images per dataset entry. This only really makes sense if a metadata file is present. Currently you can use the following format (example `metadata.jsonl`):

```
{'file_name': 'path_to_image.png', 'metadata': ...}
...
```

which will return a batch with key `image` and any other metadata. I would propose extending `file_name` to also accept a list of files, which would return a batch with key `images` and any other metadata.

### Motivation

This is useful, for example, in segmentation tasks in computer vision models, or in text-to-image models that also accept conditioning signals such as another image, feature map, or similar. Currently, if I want to do this, I would need to write a custom dataset rather than just use `imagefolder`.

### Your contribution

Would be open to doing a PR, but also happy for someone else to take it as I am not familiar with the datasets library.
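Until such support exists, one possible interim approach — not part of `imagefolder` — is to build the dataset directly with multiple `Image` columns. A hedged sketch; the column names and file paths are placeholders, and the image files would have to exist for decoding to succeed on access:

```python
from datasets import Dataset, Features, Image, Value

# Placeholder records; in practice these paths would point at real image files.
records = {
    "image": ["a_rgb.png", "b_rgb.png"],
    "conditioning_image": ["a_mask.png", "b_mask.png"],
    "caption": ["first example", "second example"],
}
features = Features({
    "image": Image(),
    "conditioning_image": Image(),
    "caption": Value("string"),
})
dataset = Dataset.from_dict(records, features=features)  # images are decoded lazily on access
```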
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5760/timeline
null
false
https://api.github.com/repos/huggingface/datasets/issues/5759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5759/comments
https://api.github.com/repos/huggingface/datasets/issues/5759/events
https://github.com/huggingface/datasets/issues/5759
1,669,977,848
I_kwDODunzps5jidb4
5,759
Can I load in list of list of dict format?
{ "login": "LZY-the-boys", "id": 72137647, "node_id": "MDQ6VXNlcjcyMTM3NjQ3", "avatar_url": "https://avatars.githubusercontent.com/u/72137647?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LZY-the-boys", "html_url": "https://github.com/LZY-the-boys", "followers_url": "https://api.github.com/users/LZY-the-boys/followers", "following_url": "https://api.github.com/users/LZY-the-boys/following{/other_user}", "gists_url": "https://api.github.com/users/LZY-the-boys/gists{/gist_id}", "starred_url": "https://api.github.com/users/LZY-the-boys/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZY-the-boys/subscriptions", "organizations_url": "https://api.github.com/users/LZY-the-boys/orgs", "repos_url": "https://api.github.com/users/LZY-the-boys/repos", "events_url": "https://api.github.com/users/LZY-the-boys/events{/privacy}", "received_events_url": "https://api.github.com/users/LZY-the-boys/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
[ "Thanks for reporting, @LZY-the-boys.\r\n\r\nCould you please give more details about what is your intended dataset structure? What are the names of the columns and the value of each row?\r\n\r\nCurrently, the JSON-Lines format is supported:\r\n- Each line correspond to one row of the dataset\r\n- Each line is composed of one JSON object, where the names are the names of the columns, and the values are the values for the row-column pair." ]
1,681,653,014,000
1,681,905,876,000
null
NONE
null
null
### Feature request my jsonl dataset has following format: ``` [{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...] [{'input':xxx, 'output':xxx},{'input:xxx,'output':xxx},...] ``` I try to use `datasets.load_dataset('json', data_files=path)` or `datasets.Dataset.from_json`, it raises ``` File "site-packages/datasets/arrow_dataset.py", line 1078, in from_json ).read() File "site-packages/datasets/io/json.py", line 59, in read self.builder.download_and_prepare( File "site-packages/datasets/builder.py", line 872, in download_and_prepare self._download_and_prepare( File "site-packages/datasets/builder.py", line 967, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "site-packages/datasets/builder.py", line 1749, in _prepare_split for job_id, done, content in self._prepare_split_single( File "site-packages/datasets/builder.py", line 1892, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Motivation I wanna use features like `Datasets.map` or `Datasets.shuffle`, so i need the dataset in memory to be `arrow_dataset.Datasets` format ### Your contribution PR
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5759/timeline
null
false
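
The issue record above (5759) describes a JSON Lines file where each line is a *list* of `{'input': ..., 'output': ...}` objects rather than a single JSON object, which the `json` builder rejects. As a minimal sketch — the file path, field names, and flattening strategy are assumptions taken from the issue body, not a documented `datasets` feature — one workaround is to flatten the nested lists in plain Python first and build an in-memory dataset with `Dataset.from_list`:

```python
import json

from datasets import Dataset

# Hypothetical path to a file where every line is a JSON *array* of
# {"input": ..., "output": ...} objects, as described in issue 5759.
path = "data.jsonl"

rows = []
with open(path, encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        # Each line parses to a list of dicts; extend the flat row list with it.
        rows.extend(json.loads(line))

# Build an Arrow-backed Dataset, which then supports .map(), .shuffle(), etc.
ds = Dataset.from_list(rows)
print(ds)
```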
https://api.github.com/repos/huggingface/datasets/issues/5757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5757/comments
https://api.github.com/repos/huggingface/datasets/issues/5757/events
https://github.com/huggingface/datasets/issues/5757
1,669,910,503
I_kwDODunzps5jiM_n
5,757
Tilde (~) is not supported
{ "login": "eli-osherovich", "id": 2437102, "node_id": "MDQ6VXNlcjI0MzcxMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2437102?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eli-osherovich", "html_url": "https://github.com/eli-osherovich", "followers_url": "https://api.github.com/users/eli-osherovich/followers", "following_url": "https://api.github.com/users/eli-osherovich/following{/other_user}", "gists_url": "https://api.github.com/users/eli-osherovich/gists{/gist_id}", "starred_url": "https://api.github.com/users/eli-osherovich/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eli-osherovich/subscriptions", "organizations_url": "https://api.github.com/users/eli-osherovich/orgs", "repos_url": "https://api.github.com/users/eli-osherovich/repos", "events_url": "https://api.github.com/users/eli-osherovich/events{/privacy}", "received_events_url": "https://api.github.com/users/eli-osherovich/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
[]
1,681,645,690,000
1,682,004,651,000
1,682,004,651,000
CONTRIBUTOR
null
null
### Describe the bug It seems that `~` is not recognized correctly in local paths. Whenever I try to use it I get an exception ### Steps to reproduce the bug ```python load_dataset("imagefolder", data_dir="~/data/my_dataset") ``` Will generate the following error: ``` EmptyDatasetError: The directory at /path/to/cwd/~/data/datasets/clementine_tagged_per_cam doesn't contain any data files ``` ### Expected behavior Load the dataset. ### Environment info datasets==2.11.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5757/timeline
completed
false
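
Issue 5757 above reports that a leading `~` in `data_dir` is passed through literally instead of being expanded to the user's home directory. A minimal workaround sketch — the directory path is the hypothetical one from the issue body — is to expand the tilde explicitly before calling `load_dataset`:

```python
import os

from datasets import load_dataset

# Expand "~" manually so the imagefolder builder receives an absolute path.
data_dir = os.path.expanduser("~/data/my_dataset")  # hypothetical path from the issue
ds = load_dataset("imagefolder", data_dir=data_dir)
```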

Dataset Card for "github-issues"

More Information needed
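
As a minimal usage sketch — the Hub repository id below is a placeholder assumption, since this card does not state where the dataset is hosted — the rows shown above can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# Replace the placeholder repo id with the actual Hub path of this dataset.
issues = load_dataset("your-username/github-issues", split="train")
print(issues)
```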
