| Column | Type | Stats |
|---|---|---|
| url | stringlengths | 61 to 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75 to 75 |
| comments_url | stringlengths | 70 to 70 |
| events_url | stringlengths | 68 to 68 |
| html_url | stringlengths | 51 to 51 |
| id | int64 | 1.29B to 1.57B |
| node_id | stringlengths | 18 to 18 |
| number | int64 | 4.59k to 5.51k |
| title | stringlengths | 10 to 165 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | int64 | 0 to 48 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 51 to 33.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70 to 70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 3 values |
| draft | bool | 0 classes |
| pull_request | dict | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/5508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5508/comments
https://api.github.com/repos/huggingface/datasets/issues/5508/events
https://github.com/huggingface/datasets/issues/5508
1,573,290,359
I_kwDODunzps5dxoF3
5,508
Saving a dataset after setting format to torch doesn't work, but only if filtering
{ "login": "joebhakim", "id": 13984157, "node_id": "MDQ6VXNlcjEzOTg0MTU3", "avatar_url": "https://avatars.githubusercontent.com/u/13984157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joebhakim", "html_url": "https://github.com/joebhakim", "followers_url": "https://api.github.com/users/joebhakim/followers", "following_url": "https://api.github.com/users/joebhakim/following{/other_user}", "gists_url": "https://api.github.com/users/joebhakim/gists{/gist_id}", "starred_url": "https://api.github.com/users/joebhakim/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joebhakim/subscriptions", "organizations_url": "https://api.github.com/users/joebhakim/orgs", "repos_url": "https://api.github.com/users/joebhakim/repos", "events_url": "https://api.github.com/users/joebhakim/events{/privacy}", "received_events_url": "https://api.github.com/users/joebhakim/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-02-06T21:08:58"
"2023-02-06T21:08:58"
null
NONE
null
### Describe the bug

Saving a dataset after setting the format to torch doesn't work, but only when filtering.

### Steps to reproduce the bug

```python
from datasets import Dataset

a = Dataset.from_dict({"b": [1, 2]})
a.set_format('torch')
a.save_to_disk("test_save")  # saves successfully
a.filter(None).save_to_disk("test_save_filter")  # does not
>> [...]
TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types [<class 'torch.Tensor'>]. When using `batched=True`, make sure provided `function` returns a `dict` of types like `(<class 'list'>, <class 'numpy.ndarray'>)`.

# note: skipping the format change to torch lets this work.
```

### Expected behavior

Saving to work.

### Environment info

- `datasets` version: 2.4.0
- Platform: Linux-6.1.9-arch1-1-x86_64-with-glibc2.36
- Python version: 3.10.9
- PyArrow version: 9.0.0
- Pandas version: 1.4.4
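A possible workaround, sketched here as an addition to the report (not part of the original issue), assuming the failure comes from the torch format being active while `filter` runs: keep the dataset in the default format for filtering and saving, and only apply the torch format when reading.

```python
from datasets import Dataset

a = Dataset.from_dict({"b": [1, 2]})

# Filter and save while still in the default (Arrow/Python) format...
filtered = a.filter(lambda example: example["b"] > 1)
filtered.save_to_disk("test_save_filter")

# ...and only switch to torch tensors for consumption.
torch_view = filtered.with_format("torch")
print(torch_view[0])  # {'b': tensor(2)}
```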
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5508/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5507/comments
https://api.github.com/repos/huggingface/datasets/issues/5507/events
https://github.com/huggingface/datasets/issues/5507
1,572,667,036
I_kwDODunzps5dvP6c
5,507
Optimise behaviour in respect to indices mapping
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-02-06T14:25:55"
"2023-02-06T14:25:55"
null
CONTRIBUTOR
null
_Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_

Considering all this, perhaps for Datasets 3.0 we can do the following:

* have `contiguous=True` by default in `.shard` (requested in the survey, and it makes more sense for us since it doesn't create an indices mapping)
* allow calling `save_to_disk` on "unflattened" datasets
* remove "hidden" expensive calls in `save_to_disk`, `unique`, `concatenate_datasets`, etc. For instance, instead of silently calling `flatten_indices` where it's needed, it's probably better to be explicit (considering how expensive these ops can be) and raise an error instead.
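For illustration only (not part of the original proposal), a rough sketch of what the explicit flow could look like with the current API, where `select`/`filter`/`shard` may attach an indices mapping and `flatten_indices` materializes it before saving:

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})

# select/filter/shard can attach an indices mapping instead of rewriting the data
subset = ds.select(range(0, 1000, 2))

# being explicit about the expensive step, as proposed above:
subset = subset.flatten_indices()  # materializes the mapping (can be costly on big datasets)
subset.save_to_disk("subset")      # now saves a contiguous table
```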
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5507/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5506/comments
https://api.github.com/repos/huggingface/datasets/issues/5506/events
https://github.com/huggingface/datasets/issues/5506
1,571,838,641
I_kwDODunzps5dsFqx
5,506
IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
{ "login": "kheyer", "id": 38166299, "node_id": "MDQ6VXNlcjM4MTY2Mjk5", "avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kheyer", "html_url": "https://github.com/kheyer", "followers_url": "https://api.github.com/users/kheyer/followers", "following_url": "https://api.github.com/users/kheyer/following{/other_user}", "gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kheyer/subscriptions", "organizations_url": "https://api.github.com/users/kheyer/orgs", "repos_url": "https://api.github.com/users/kheyer/repos", "events_url": "https://api.github.com/users/kheyer/events{/privacy}", "received_events_url": "https://api.github.com/users/kheyer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-02-06T03:26:03"
"2023-02-06T03:26:03"
null
NONE
null
### Describe the bug

I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256. Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous shards and passing those to an `IterableDataset`. I observed an unexpected drop in GPU memory utilization, and found the batch size returned from the model had been cut in half.

When using `Trainer` with 2 GPUs and a batch size of 256, `Dataset` returns a batch of size 512 (256 per GPU), while `IterableDataset` returns a batch size of 256 (256 total). My guess is `IterableDataset` isn't accounting for multiple cards.

### Steps to reproduce the bug

```python
import datasets
from datasets import IterableDataset
from transformers import RobertaConfig
from transformers import RobertaTokenizerFast
from transformers import RobertaForMaskedLM
from transformers import DataCollatorForLanguageModeling
from transformers import Trainer, TrainingArguments

use_iterable_dataset = True

def gen_from_shards(shards):
    for shard in shards:
        for example in shard:
            yield example

dataset = datasets.load_from_disk('my_dataset.hf')

if use_iterable_dataset:
    n_shards = 100
    shards = [dataset.shard(num_shards=n_shards, index=i) for i in range(n_shards)]
    dataset = IterableDataset.from_generator(gen_from_shards, gen_kwargs={"shards": shards})

tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer", max_len=160, use_fast=True)

config = RobertaConfig(
    vocab_size=8248,
    max_position_embeddings=256,
    num_attention_heads=8,
    num_hidden_layers=6,
    type_vocab_size=1)

model = RobertaForMaskedLM(config=config)

data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(
    per_device_train_batch_size=256
    # other args removed for brevity
)

trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
)

trainer.train()
```

### Expected behavior

Expected `Dataset` and `IterableDataset` to have the same batch size behavior. If the current behavior is intentional, the batch size printout at the start of training should be updated. Currently, both dataset classes result in `Trainer` printing the same total batch size, even though the batch size sent to the GPUs is different.

### Environment info

datasets 2.7.1
transformers 4.25.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5506/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5505/comments
https://api.github.com/repos/huggingface/datasets/issues/5505/events
https://github.com/huggingface/datasets/issues/5505
1,571,720,814
I_kwDODunzps5dro5u
5,505
PyTorch BatchSampler still loads from Dataset one-by-one
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-02-06T01:14:55"
"2023-02-06T01:14:55"
null
NONE
null
### Describe the bug

In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue. I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indexes by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object, and since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one.

### Steps to reproduce the bug

You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs:

```py
from torch.utils.data import DataLoader  # import added here for completeness
from torch.utils.data.sampler import BatchSampler, RandomSampler

batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False)
dataloader = DataLoader(ds, batch_sampler=batch_sampler)
```

### Expected behavior

The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one.

To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line:

```py
ds.__getitems__ = ds.__getitem__
```

...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win.

### Environment info

- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
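As a small added illustration (not from the original report) of why the shim above helps: a Hugging Face `Dataset` already accepts a list of indices in a single `__getitem__` call, returning the whole batch at once as a dict of lists.

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100))})

print(ds[0])          # {'x': 0}           <- the one-by-one path PyTorch falls back to
print(ds[[0, 5, 7]])  # {'x': [0, 5, 7]}   <- one call returns a whole batch
```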
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5505/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5500/comments
https://api.github.com/repos/huggingface/datasets/issues/5500/events
https://github.com/huggingface/datasets/issues/5500
1,569,257,240
I_kwDODunzps5diPcY
5,500
WMT19 custom download checksum error
{ "login": "Hannibal046", "id": 38466901, "node_id": "MDQ6VXNlcjM4NDY2OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hannibal046", "html_url": "https://github.com/Hannibal046", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "repos_url": "https://api.github.com/users/Hannibal046/repos", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2023-02-03T05:45:37"
"2023-02-03T05:52:56"
"2023-02-03T05:52:56"
NONE
null
### Describe the bug

I use the following script to download data from WMT19:

```python
import datasets
from datasets import inspect_dataset, load_dataset_builder
from wmt19.wmt_utils import _TRAIN_SUBSETS, _DEV_SUBSETS

## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3
if __name__ == '__main__':

    dev_subsets, train_subsets = [], []
    for subset in _TRAIN_SUBSETS:
        if subset.target == 'en' and 'de' in subset.sources:
            train_subsets.append(subset.name)
    for subset in _DEV_SUBSETS:
        if subset.target == 'en' and 'de' in subset.sources:
            dev_subsets.append(subset.name)

    inspect_dataset("wmt19", "./wmt19")
    builder = load_dataset_builder(
        "./wmt19/wmt_utils.py",
        language_pair=("de", "en"),
        subsets={
            datasets.Split.TRAIN: train_subsets,
            datasets.Split.VALIDATION: dev_subsets,
        },
    )

    builder.download_and_prepare()
    ds = builder.as_dataset()
    ds.to_json("../data/wmt19/ende/data.json")
```

And I got the following error:

```
Traceback (most recent call last):
  File "draft.py", line 26, in <module>
    builder.download_and_prepare()
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare
    self._download_and_prepare(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare
    verify_checksums(
  File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums
    raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'}
```

### Steps to reproduce the bug

see above

### Expected behavior

download data successfully

### Environment info

datasets==2.1.0
python==3.8
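A possible workaround, added here as a sketch (not part of the original report): skip the checksum/expected-files verification so the mismatch between recorded and expected downloads doesn't abort the build. Older `datasets` releases such as 2.1.0 expose this as an `ignore_verifications` flag; newer releases use `verification_mode` instead, so treat the exact keyword as an assumption to check against the installed version.

```python
# continuing from the script above, with the same train_subsets / dev_subsets
builder = load_dataset_builder(
    "./wmt19/wmt_utils.py",
    language_pair=("de", "en"),
    subsets={
        datasets.Split.TRAIN: train_subsets,
        datasets.Split.VALIDATION: dev_subsets,
    },
)
# assumption: this keyword is available in the installed datasets version
builder.download_and_prepare(ignore_verifications=True)
ds = builder.as_dataset()
```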
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5500/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5499/comments
https://api.github.com/repos/huggingface/datasets/issues/5499/events
https://github.com/huggingface/datasets/issues/5499
1,568,937,026
I_kwDODunzps5dhBRC
5,499
`load_dataset` has ~4 seconds of overhead for cached data
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2023-02-02T23:34:50"
"2023-02-02T23:34:50"
null
NONE
null
### Feature request

When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should to fetch the dataset from disk (or memory). This is particularly noticeable for smaller datasets. For example, for wikitext-2, comparing `load_dataset` (once cached) and `load_from_disk`, the `load_dataset` method takes 40 times longer.

⏱ 4.84s ⮜ load_dataset
⏱ 119ms ⮜ load_from_disk

### Motivation

I assume this is doing something like checking for a newer version. If so, that's an age old problem: do you make the user wait _every single time they load from cache_, or do you do something like load from cache always, _then_ check for a newer version and alert if they have stale data. The decision usually revolves around what percentage of the time the data will have been updated, and how dangerous old data is. For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time. Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement.

### Your contribution

.
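Two interim ways to avoid the repeated overhead, sketched here for illustration (not part of the original request): materialize the prepared dataset once and reload it with `load_from_disk`, or set `HF_DATASETS_OFFLINE=1` so `load_dataset` skips network checks and relies on the local cache.

```python
import time
from datasets import load_dataset, load_from_disk

# One-time preparation
ds = load_dataset("wikitext", "wikitext-2-raw-v1")
ds.save_to_disk("wikitext2_local")

# Fast path on subsequent runs
start = time.perf_counter()
ds = load_from_disk("wikitext2_local")
print(f"load_from_disk: {time.perf_counter() - start:.3f}s")

# Alternatively: export HF_DATASETS_OFFLINE=1 in the environment, then call
# load_dataset("wikitext", "wikitext-2-raw-v1") as usual against the cache.
```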
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5499/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5498/comments
https://api.github.com/repos/huggingface/datasets/issues/5498/events
https://github.com/huggingface/datasets/issues/5498
1,568,190,529
I_kwDODunzps5deLBB
5,498
TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset
{ "login": "vmuel", "id": 91255010, "node_id": "MDQ6VXNlcjkxMjU1MDEw", "avatar_url": "https://avatars.githubusercontent.com/u/91255010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vmuel", "html_url": "https://github.com/vmuel", "followers_url": "https://api.github.com/users/vmuel/followers", "following_url": "https://api.github.com/users/vmuel/following{/other_user}", "gists_url": "https://api.github.com/users/vmuel/gists{/gist_id}", "starred_url": "https://api.github.com/users/vmuel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vmuel/subscriptions", "organizations_url": "https://api.github.com/users/vmuel/orgs", "repos_url": "https://api.github.com/users/vmuel/repos", "events_url": "https://api.github.com/users/vmuel/events{/privacy}", "received_events_url": "https://api.github.com/users/vmuel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-02-02T14:46:49"
"2023-02-04T17:19:37"
"2023-02-04T17:19:36"
NONE
null
### Describe the bug

Hi,

Thanks for the amazing work on the library!

**Describe the bug**

I think I might have noticed a small bug in the filter method. Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError.

### Steps to reproduce the bug

```
train_dataset = train_dataset.filter(
    function=lambda example: example["image"] is not None,
    batched=True,
    batch_size=10)
```

Error message:

```
File .../lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs)
    476     validate_fingerprint(kwargs[fingerprint_name])
    478 # Call actual function
--> 480 out = func(self, *args, **kwargs)

...

-> 5666 indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
   5667 if indices_mapping is not None:
   5668     indices_array = pa.array(indices_array, type=pa.uint64())

TypeError: 'bool' object is not iterable
```

**Removing batched=True allows to bypass the issue.**

### Expected behavior

According to the doc, "[batch_size corresponds to the] number of examples per batch provided to function if batched = True", so we shouldn't need to remove the `batched=True` arg?

source: https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.Dataset.filter

### Environment info

- `datasets` version: 2.9.0
- Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31
- Python version: 3.9.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
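For reference, a small added example (not from the original report) of what `filter` expects when `batched=True`: the function receives a batch as a dict of lists and must return one boolean per example rather than a single boolean, which is what triggers the `'bool' object is not iterable` error above.

```python
from datasets import Dataset

ds = Dataset.from_dict({"image": ["a.png", None, "b.png"]})

filtered = ds.filter(
    lambda batch: [img is not None for img in batch["image"]],  # list of bools, one per example
    batched=True,
    batch_size=10,
)
print(filtered["image"])  # ['a.png', 'b.png']
```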
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5498/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5496/comments
https://api.github.com/repos/huggingface/datasets/issues/5496/events
https://github.com/huggingface/datasets/issues/5496
1,567,301,765
I_kwDODunzps5dayCF
5,496
Add a `reduce` method
{ "login": "zhangir-azerbayev", "id": 59542043, "node_id": "MDQ6VXNlcjU5NTQyMDQz", "avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangir-azerbayev", "html_url": "https://github.com/zhangir-azerbayev", "followers_url": "https://api.github.com/users/zhangir-azerbayev/followers", "following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}", "gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions", "organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs", "repos_url": "https://api.github.com/users/zhangir-azerbayev/repos", "events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}", "received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2023-02-02T04:30:22"
"2023-02-03T14:11:32"
null
NONE
null
### Feature request

Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.

### Motivation

A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average line length of a code dataset.

### Your contribution

I haven't contributed to `datasets` before, but I don't expect this will be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack over the weekend.
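For illustration (added here, not part of the original request), the kind of statistic a `reduce` method would cover can already be computed with a plain fold over the dataset:

```python
from functools import reduce
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo bar", "baz", "foo foo qux"]})

# total token count across the dataset, via a manual fold over the examples
total_tokens = reduce(lambda acc, example: acc + len(example["text"].split()), ds, 0)
print(total_tokens)  # 6
```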
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5496/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5495/comments
https://api.github.com/repos/huggingface/datasets/issues/5495/events
https://github.com/huggingface/datasets/issues/5495
1,566,803,452
I_kwDODunzps5dY4X8
5,495
to_tf_dataset fails with datetime UTC columns even if not included in columns argument
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
2
"2023-02-01T20:47:33"
"2023-02-04T01:56:55"
null
NONE
null
### Describe the bug

There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset even if they aren't included in the columns argument. This is problematic with datetime UTC columns due to them not working with zero copy. If I don't have UTC information in my datetime column, then everything works as expected.

### Steps to reproduce the bug

```python
import numpy as np
import pandas as pd
from datasets import Dataset

df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
# df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"])  # works fine
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"])
df.to_parquet("test.pq")

ds = Dataset.from_parquet("test.pq")
tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```

```
ArrowInvalid                              Traceback (most recent call last)
Cell In[1], line 12
      8 df.to_parquet("test.pq")
     11 ds = Dataset.from_parquet("test.pq")
---> 12 tf_ds = ds.to_tf_dataset(columns=["r"], batch_size=2, shuffle=True)

File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers)
    407     dataset = self
    409 # TODO(Matt, QL): deprecate the retention of label_ids and label
--> 411 output_signature, columns_to_np_types = dataset._get_output_signature(
    412     dataset,
    413     collate_fn=collate_fn,
    414     collate_fn_args=collate_fn_args,
    415     cols_to_retain=cols_to_retain,
    416     batch_size=batch_size if drop_remainder else None,
    417 )
    419 if "labels" in output_signature:
    420     if ("label_ids" in columns or "label" in columns) and "labels" not in columns:

File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches)
    252 for _ in range(num_test_batches):
    253     indices = sample(range(len(dataset)), test_batch_size)
--> 254     test_batch = dataset[indices]
    255 if cols_to_retain is not None:
    256     test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain}

File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key)
   2588 def __getitem__(self, key):  # noqa: F811
   2589     """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2590     return self._getitem(
   2591         key,
   2592     )

File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs)
   2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs)
   2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
-> 2575 formatted_output = format_table(
   2576     pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns
   2577 )
   2578 return formatted_output

File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns)
    632 python_formatter = PythonFormatter(features=None)
    633 if format_columns is None:
--> 634     return formatter(pa_table, query_type=query_type)
    635 elif query_type == "column":
    636     if key in format_columns:

File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type)
    408     return self.format_column(pa_table)
    409 elif query_type == "batch":
--> 410     return self.format_batch(pa_table)

File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table)
     77 def format_batch(self, pa_table: pa.Table) -> Mapping:
---> 78     batch = self.numpy_arrow_extractor().extract_batch(pa_table)
     79     batch = self.python_features_decoder.decode_batch(batch)
     80     batch = self.recursive_tensorize(batch)

File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table)
    163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164     return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}

File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0)
    163 def extract_batch(self, pa_table: pa.Table) -> dict:
--> 164     return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names}

File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array)
    181 else:
    182     zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
    183         not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
    184     )
--> 185     array: List = [
    186         row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
    187     ]
    188 else:
    189     if isinstance(pa_array.type, _ArrayXDExtensionType):
    190         # don't call to_pylist() to preserve dtype of the fixed-size array

File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0)
    181 else:
    182     zero_copy_only = _is_zero_copy_only(pa_array.type) and all(
    183         not _is_array_with_nulls(chunk) for chunk in pa_array.chunks
    184     )
    185 array: List = [
--> 186     row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only)
    187 ]
    188 else:
    189     if isinstance(pa_array.type, _ArrayXDExtensionType):
    190         # don't call to_pylist() to preserve dtype of the fixed-size array

File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy()

File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status()

ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True
```

### Expected behavior

I think there are two potential issues/fixes:

1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here)
2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable)

### Environment info

- `datasets` version: 2.9.0
- Platform: macOS-13.2-x86_64-i386-64bit
- Python version: 3.8.12
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
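A possible interim workaround for the report above, sketched here (not part of the original issue), assuming the problem only appears because the eager signature probe touches the tz-aware column: drop that column before calling `to_tf_dataset`.

```python
import numpy as np
import pandas as pd
from datasets import Dataset

df = pd.DataFrame(np.random.rand(2, 1), columns=["x"])
df["dt"] = pd.to_datetime(["2023-01-01 00:00:00+00:00", "2023-01-01 00:00:00+00:00"])
df.to_parquet("test.pq")

ds = Dataset.from_parquet("test.pq")

# Drop the tz-aware datetime column up front so only the requested columns are probed
tf_ds = ds.remove_columns(["dt"]).to_tf_dataset(columns=["x"], batch_size=2, shuffle=True)
```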
https://api.github.com/repos/huggingface/datasets/issues/5495/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5494/comments
https://api.github.com/repos/huggingface/datasets/issues/5494/events
https://github.com/huggingface/datasets/issues/5494
1,566,655,348
I_kwDODunzps5dYUN0
5,494
Update audio installation doc page
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
3
"2023-02-01T19:07:50"
"2023-02-02T13:11:58"
null
CONTRIBUTOR
null
Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too, but it requires a specific version of ffmpeg which is not easily installed on all Linux versions. There is a custom Ubuntu repo for it, and we have instructions in the code: https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L327

So we should update the doc page. But first investigate [this issue](5488).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5494/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5492/comments
https://api.github.com/repos/huggingface/datasets/issues/5492/events
https://github.com/huggingface/datasets/issues/5492
1,566,604,216
I_kwDODunzps5dYHu4
5,492
Push_to_hub in a pull request
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-02-01T18:32:14"
"2023-02-01T18:40:46"
null
MEMBER
null
Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name.

cc @nateraw

It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR.
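For context, a rough sketch (added here, not from the original issue) of the kind of flow `push_to_hub` could wrap: `huggingface_hub` already supports opening a pull request via `create_pr=True` on its upload/commit calls. The repo id below is hypothetical.

```python
from datasets import Dataset
from huggingface_hub import HfApi

ds = Dataset.from_dict({"text": ["hello", "world"]})
ds.to_parquet("train.parquet")

api = HfApi()
api.upload_file(
    path_or_fileobj="train.parquet",
    path_in_repo="data/train.parquet",
    repo_id="username/my-dataset",  # hypothetical repo id
    repo_type="dataset",
    create_pr=True,                 # opens a pull request instead of committing to main
)
```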
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5492/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5488/comments
https://api.github.com/repos/huggingface/datasets/issues/5488/events
https://github.com/huggingface/datasets/issues/5488
1,565,025,262
I_kwDODunzps5dSGPu
5,488
Error loading MP3 files from CommonVoice
{ "login": "kradonneoh", "id": 110259722, "node_id": "U_kgDOBpJuCg", "avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kradonneoh", "html_url": "https://github.com/kradonneoh", "followers_url": "https://api.github.com/users/kradonneoh/followers", "following_url": "https://api.github.com/users/kradonneoh/following{/other_user}", "gists_url": "https://api.github.com/users/kradonneoh/gists{/gist_id}", "starred_url": "https://api.github.com/users/kradonneoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kradonneoh/subscriptions", "organizations_url": "https://api.github.com/users/kradonneoh/orgs", "repos_url": "https://api.github.com/users/kradonneoh/repos", "events_url": "https://api.github.com/users/kradonneoh/events{/privacy}", "received_events_url": "https://api.github.com/users/kradonneoh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
"2023-01-31T21:25:33"
"2023-02-01T15:28:56"
null
NONE
null
### Describe the bug

When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays:

```python
---------------------------------------------------------------------------
LibsndfileError                           Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file)
    310 try:  # try torchaudio anyway because sometimes it works (depending on the os and os packages installed)
--> 311     array, sampling_rate = self._decode_mp3_torchaudio(path_or_file)
    312 except RuntimeError:

~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file)
    351
--> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3")
    353 if self.sampling_rate and self.sampling_rate != sampling_rate:

~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
    204     """
--> 205     with soundfile.SoundFile(filepath, "r") as file_:
    206         if file_.format != "WAV" or normalize:

~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd)
    654                       format, subtype, endian)
--> 655     self._file = self._open(file, mode_int, closefd)
    656     if set(mode).issuperset('r+') and self.seekable():

~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd)
   1212     err = _snd.sf_error(file_ptr)
-> 1213     raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
   1214     if mode_int == _snd.SFM_WRITE:

LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format.
```

I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio` (at least according to #4889).

### Steps to reproduce the bug

```python
dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train")
dataset[0]
```

### Expected behavior

Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError`.

### Environment info

- `datasets` version: 2.9.0
- Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 10.0.1
- Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5488/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5487/comments
https://api.github.com/repos/huggingface/datasets/issues/5487/events
https://github.com/huggingface/datasets/issues/5487
1,564,480,121
I_kwDODunzps5dQBJ5
5,487
Incorrect filepath for dill module
{ "login": "avivbrokman", "id": 35349273, "node_id": "MDQ6VXNlcjM1MzQ5Mjcz", "avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avivbrokman", "html_url": "https://github.com/avivbrokman", "followers_url": "https://api.github.com/users/avivbrokman/followers", "following_url": "https://api.github.com/users/avivbrokman/following{/other_user}", "gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}", "starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions", "organizations_url": "https://api.github.com/users/avivbrokman/orgs", "repos_url": "https://api.github.com/users/avivbrokman/repos", "events_url": "https://api.github.com/users/avivbrokman/events{/privacy}", "received_events_url": "https://api.github.com/users/avivbrokman/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
5
"2023-01-31T15:01:08"
"2023-02-02T07:07:55"
null
NONE
null
### Describe the bug

I installed the `datasets` package and when I try to `import` it, I get the following error:

```
Traceback (most recent call last):
  File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module>
    import datasets
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
    from .arrow_dataset import Dataset
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 66, in <module>
    from .arrow_writer import ArrowWriter, OptimizedTypedSequence
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 27, in <module>
    from .features import Features, Image, Value
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/__init__.py", line 17, in <module>
    from .audio import Audio
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/audio.py", line 12, in <module>
    from ..download.streaming_download_manager import xopen
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/__init__.py", line 9, in <module>
    from .download_manager import DownloadManager, DownloadMode
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/download_manager.py", line 36, in <module>
    from ..utils.py_utils import NestedDataStructure, map_nested, size_str
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 602, in <module>
    class Pickler(dill.Pickler):
  File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 605, in Pickler
    dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy())
AttributeError: module 'dill' has no attribute '_dill'
```

Looking at the github source code for dill, it appears that `datasets` has a bug or is not compatible with the latest `dill`. Specifically, rather than `dill._dill.XXXX` it should be `dill.dill._dill.XXXX`. But given the popularity of `datasets` I feel confused about me being the first person to have this issue, so it makes me wonder if I'm misdiagnosing the issue.

### Steps to reproduce the bug

Install `dill` and `datasets` packages and then `import datasets`

### Expected behavior

I expect `datasets` to import.

### Environment info

- `datasets` version: 2.9.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.13
- PyArrow version: 11.0.0
- Pandas version: 1.4.4
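A quick diagnostic sketch (added here, not part of the original report): in a healthy install, `_dill` is a real submodule of `dill`, so the `dill._dill.…` references in `datasets` are valid; if the import below fails, the `dill` installation itself is likely broken or shadowed by another module named `dill`.

```python
import dill

print(dill.__version__)
print(dill.__file__)    # check that this points into site-packages, not a local dill.py

from dill import _dill  # should succeed on a healthy installation
print(_dill)
```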
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5487/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5486/comments
https://api.github.com/repos/huggingface/datasets/issues/5486/events
https://github.com/huggingface/datasets/issues/5486
1,564,059,749
I_kwDODunzps5dOahl
5,486
Adding `sep` to TextConfig
{ "login": "omar-araboghli", "id": 29576434, "node_id": "MDQ6VXNlcjI5NTc2NDM0", "avatar_url": "https://avatars.githubusercontent.com/u/29576434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omar-araboghli", "html_url": "https://github.com/omar-araboghli", "followers_url": "https://api.github.com/users/omar-araboghli/followers", "following_url": "https://api.github.com/users/omar-araboghli/following{/other_user}", "gists_url": "https://api.github.com/users/omar-araboghli/gists{/gist_id}", "starred_url": "https://api.github.com/users/omar-araboghli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omar-araboghli/subscriptions", "organizations_url": "https://api.github.com/users/omar-araboghli/orgs", "repos_url": "https://api.github.com/users/omar-araboghli/repos", "events_url": "https://api.github.com/users/omar-araboghli/events{/privacy}", "received_events_url": "https://api.github.com/users/omar-araboghli/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-01-31T10:39:53"
"2023-01-31T14:50:18"
null
NONE
null
I have a local `.txt` file that follows the `CONLL2003` format, which I need to load using `load_dataset`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` to parse a paragraph into an array for each column? If so, I am happy to contribute!

## Environment
* `python 3.8.10`
* `datasets 2.9.0`

## Snippet of `train.txt`

```txt
Distribution NN O O
and NN O O
dynamics NN O O
of NN O O
electron NN O B-RP
complexes NN O I-RP
in NN O O
cyanobacterial NN O B-R
membranes NN O I-R

The NN O O
occurrence NN O O
of NN O O
prostaglandin NN O B-R
F2α NN O I-R
in NN O O
Pharbitis NN O B-R
seedlings NN O I-R
grown NN O O
under NN O O
short NN O B-P
days NN O I-P
or NN O I-P
days NN O I-P
```

## Current Behaviour

```python
# defining 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] here would fail with
# `ValueError: Length of names (4) does not match length of arrays (1)`
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='line')

dataset['train']['tokens'][0]
>>> 'Distribution\tNN\tO\tO'
```

## Expected Behaviour / Suggestion

```python
# suppose we defined 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags']
dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='paragraph', sep='\t')

dataset['train']['tokens'][0]
>>> ['Distribution', 'and', 'dynamics', ... ]

dataset['train']['ner_tags'][0]
>>> ['O', 'O', 'O', ... ]
```
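Until something like `sep` exists, one workaround sketch (added here, not from the original issue) is to load with `sample_by='paragraph'` and split the columns in a `map` call; the tab separator and the `split_columns` helper below are assumptions based on the snippet above.

```python
import datasets

def split_columns(example):
    # one paragraph = one sentence; each line is "token<TAB>pos<TAB>chunk<TAB>ner"
    rows = [line.split("\t") for line in example["text"].splitlines() if line.strip()]
    tokens, pos_tags, chunk_tags, ner_tags = (list(col) for col in zip(*rows))
    return {"tokens": tokens, "pos_tags": pos_tags, "chunk_tags": chunk_tags, "ner_tags": ner_tags}

raw = datasets.load_dataset("text", data_files={"train": "train.txt"}, sample_by="paragraph")
dataset = raw["train"].map(split_columns, remove_columns=["text"])

print(dataset["tokens"][0])    # ['Distribution', 'and', 'dynamics', ...]
print(dataset["ner_tags"][0])  # ['O', 'O', 'O', ...]
```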
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5486/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5483/comments
https://api.github.com/repos/huggingface/datasets/issues/5483/events
https://github.com/huggingface/datasets/issues/5483
1,560,894,690
I_kwDODunzps5dCVzi
5,483
Unable to upload dataset
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2023-01-28T15:18:26"
"2023-01-29T08:09:49"
"2023-01-29T08:09:49"
NONE
null
### Describe the bug Uploading a simple dataset ends with an exception ### Steps to reproduce the bug I created a new conda env with python 3.10, pip installed datasets and: ```python >>> from datasets import load_dataset, load_from_disk, Dataset >>> d = Dataset.from_dict({"text": ["hello"] * 2}) >>> d.push_to_hub("ttt111") /home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_hf_folder.py:92: UserWarning: A token has been found in `/a/home/cc/students/cs/kirstain/.huggingface/token`. This is the old path where tokens were stored. The new location is `/home/olab/kirstain/.cache/huggingface/token` which is configurable using `HF_HOME` environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`. warnings.warn( Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 279.94ba/s] Upload 1 LFS files: 0%| | 0/1 [00:02<?, ?it/s] Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:04<?, ?it/s] Traceback (most recent call last): File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status response.raise_for_status() File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 334, in _inner_upload_lfs_object return _upload_lfs_object( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 391, in _upload_lfs_object lfs_upload( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 273, in lfs_upload _upload_single_part( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 305, in _upload_single_part hf_raise_for_status(upload_res) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 318, in hf_raise_for_status raise HfHubHTTPError(str(e), response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: 
https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4909, in push_to_hub repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4804, in _push_parquet_shards_to_hub _retry( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 281, in _retry return func(*func_args, **func_kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn return fn(*args, **kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2537, in upload_file commit_info = self.create_commit( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn return fn(*args, **kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2346, in create_commit upload_lfs_files( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn return fn(*args, **kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 346, in upload_lfs_files thread_map( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs)) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator yield _result_or_cancel(fs.pop()) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel return fut.result(timeout) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 458, in result return self.__get_result() File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File 
"/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 338, in _inner_upload_lfs_object raise RuntimeError( RuntimeError: Error while uploading 'data/train-00000-of-00001-6df93048e66df326.parquet' to the Hub. ``` ### Expected behavior The dataset should be uploaded without any exceptions ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.27 - Python version: 3.10.9 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5483/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5482/comments
https://api.github.com/repos/huggingface/datasets/issues/5482/events
https://github.com/huggingface/datasets/issues/5482
1,560,853,137
I_kwDODunzps5dCLqR
5,482
Reload features from Parquet metadata
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
2
"2023-01-28T13:12:31"
"2023-02-05T18:09:54"
null
MEMBER
null
The idea would be to allow this: ```python ds.to_parquet("my_dataset/ds.parquet") reloaded = load_dataset("my_dataset") assert ds.features == reloaded.features ``` It should also work with Image and Audio types (right now they're reloaded as a dict type). This can be implemented by storing and reading the feature types in the Parquet metadata, as we already do for Arrow files.
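A rough sketch of the mechanism with plain `pyarrow`, where the `b"my_features"` key and the JSON payload are illustrative placeholders rather than the format `datasets` would actually standardize on:

```python
import json
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({"label": [0, 1, 2]})
payload = json.dumps({"label": {"dtype": "int64", "_type": "Value"}}).encode("utf-8")

# attach the serialized feature types to the Arrow schema metadata before writing
table = table.replace_schema_metadata({**(table.schema.metadata or {}), b"my_features": payload})
pq.write_table(table, "ds.parquet")

# on reload, the feature types can be recovered from the stored metadata
stored = pq.read_schema("ds.parquet").metadata[b"my_features"]
print(json.loads(stored))
```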
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5482/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5482/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5481/comments
https://api.github.com/repos/huggingface/datasets/issues/5481/events
https://github.com/huggingface/datasets/issues/5481
1,560,468,195
I_kwDODunzps5dAtrj
5,481
Load a cached dataset as iterable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
10
"2023-01-27T21:43:51"
"2023-02-01T16:28:48"
null
MEMBER
null
The idea would be to allow something like ```python ds = load_dataset("c4", "en", as_iterable=True) ``` to be used to train models. It would load an `IterableDataset` from the cached Arrow files. Cc @stas00 Edit: from the discussions, we may load from the cache when `streaming=True`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5481/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5481/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5479/comments
https://api.github.com/repos/huggingface/datasets/issues/5479/events
https://github.com/huggingface/datasets/issues/5479
1,560,357,590
I_kwDODunzps5dASrW
5,479
audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated
{ "login": "jcho19", "id": 107211437, "node_id": "U_kgDOBmPqrQ", "avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcho19", "html_url": "https://github.com/jcho19", "followers_url": "https://api.github.com/users/jcho19/followers", "following_url": "https://api.github.com/users/jcho19/following{/other_user}", "gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcho19/subscriptions", "organizations_url": "https://api.github.com/users/jcho19/orgs", "repos_url": "https://api.github.com/users/jcho19/repos", "events_url": "https://api.github.com/users/jcho19/events{/privacy}", "received_events_url": "https://api.github.com/users/jcho19/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2023-01-27T20:01:22"
"2023-01-29T05:23:14"
"2023-01-29T05:23:14"
NONE
null
### Describe the bug I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libndfile installed on both computers, what could be missing/need to be updated in the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1. from datasets import load_dataset ds = load_dataset("audiofolder", data_dir="...") Here is the output (should be generating 400+ rows): Downloading and preparing dataset audiofolder/default to ... Downloading data files: 0%| | 0/2 [00:00<?, ?it/s] Downloading data files: 0it [00:00, ?it/s] Extracting data files: 0it [00:00, ?it/s] Generating train split: 0 examples [00:00, ? examples/s] Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data. 0%| | 0/1 [00:00<?, ?it/s] DatasetDict({ train: Dataset({ features: ['audio', 'transcription'], num_rows: 1 }) }) Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env): Package Version ------------------- ------------------- aiofiles 22.1.0 aiohttp 3.8.3 aiosignal 1.3.1 altair 4.2.1 anyio 3.6.2 appdirs 1.4.4 argcomplete 2.0.0 argon2-cffi 20.1.0 astunparse 1.6.3 async-timeout 4.0.2 attrs 21.2.0 audioread 3.0.0 backcall 0.2.0 bleach 4.0.0 certifi 2021.10.8 cffi 1.14.6 charset-normalizer 2.0.12 click 8.1.3 contourpy 1.0.7 cycler 0.11.0 datasets 2.9.0 debugpy 1.4.1 decorator 5.0.9 defusedxml 0.7.1 dill 0.3.6 distlib 0.3.4 entrypoints 0.3 evaluate 0.4.0 expecttest 0.1.3 fastapi 0.89.1 ffmpy 0.3.0 filelock 3.6.0 fonttools 4.38.0 frozenlist 1.3.3 fsspec 2023.1.0 future 0.18.2 gradio 3.16.2 h11 0.14.0 httpcore 0.16.3 httpx 0.23.3 huggingface-hub 0.12.0 idna 3.3 ipykernel 6.2.0 ipython 7.26.0 ipython-genutils 0.2.0 ipywidgets 7.6.3 jedi 0.18.0 Jinja2 3.0.1 jiwer 2.5.1 joblib 1.2.0 jsonschema 3.2.0 jupyter 1.0.0 jupyter-client 6.1.12 jupyter-console 6.4.0 jupyter-core 4.7.1 jupyterlab-pygments 0.1.2 jupyterlab-widgets 1.0.0 kiwisolver 1.4.4 Levenshtein 0.20.2 librosa 0.9.2 linkify-it-py 1.0.3 llvmlite 0.39.1 markdown-it-py 2.1.0 MarkupSafe 2.0.1 matplotlib 3.6.3 matplotlib-inline 0.1.2 mdit-py-plugins 0.3.3 mdurl 0.1.2 mistune 0.8.4 multidict 6.0.4 multiprocess 0.70.14 nbclient 0.5.4 nbconvert 6.1.0 nbformat 5.1.3 nest-asyncio 1.5.1 notebook 6.4.3 numba 0.56.4 numpy 1.20.3 orjson 3.8.5 packaging 21.0 pandas 1.5.3 pandocfilters 1.4.3 parso 0.8.2 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pip 22.3.1 pipx 1.1.0 platformdirs 2.5.2 pooch 1.6.0 prometheus-client 0.11.0 prompt-toolkit 3.0.19 psutil 5.9.0 ptyprocess 0.7.0 pyarrow 10.0.1 pycparser 2.20 pycryptodome 3.16.0 pydantic 1.10.4 pydub 0.25.1 Pygments 2.10.0 pyparsing 2.4.7 pyrsistent 0.18.0 python-dateutil 2.8.2 python-multipart 0.0.5 pytz 2022.7.1 PyYAML 6.0 pyzmq 22.2.1 qtconsole 5.1.1 QtPy 1.10.0 rapidfuzz 2.13.7 regex 2022.10.31 requests 2.27.1 resampy 0.4.2 responses 0.18.0 rfc3986 1.5.0 scikit-learn 1.2.1 scipy 1.6.3 Send2Trash 1.8.0 setuptools 65.5.1 shiboken6 6.3.1 shiboken6-generator 6.3.1 six 1.16.0 sniffio 1.3.0 soundfile 0.11.0 starlette 0.22.0 terminado 0.11.0 testpath 0.5.0 threadpoolctl 3.1.0 tokenizers 0.13.2 toolz 0.12.0 torch 1.11.0a0+gitunknown tornado 6.1 tqdm 4.64.1 traitlets 5.0.5 transformers 4.27.0.dev0 types-dataclasses 0.6.4 typing_extensions 4.1.1 uc-micro-py 1.0.1 urllib3 1.26.9 userpath 1.8.0 uvicorn 0.20.0 virtualenv 20.14.1 wcwidth 0.2.5 webencodings 0.5.1 websockets 10.4 
wheel 0.37.1 widgetsnbextension 3.5.1 xxhash 3.2.0 yarl 1.8.2 ### Steps to reproduce the bug Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile is installed with same versions listed above). Create a custom audio dataset and load it in with load_dataset("audiofolder", ...) ### Expected behavior load_dataset should create a dataset with 400+ rows. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.0 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5479/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5477/comments
https://api.github.com/repos/huggingface/datasets/issues/5477/events
https://github.com/huggingface/datasets/issues/5477
1,559,909,892
I_kwDODunzps5c-lYE
5,477
Unpin sqlalchemy once issue is fixed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-01-27T15:01:55"
"2023-01-27T15:01:55"
null
MEMBER
null
Once the source issue is fixed: - pandas-dev/pandas#51015 we should revert the pin introduced in: - #5476
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5477/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5475/comments
https://api.github.com/repos/huggingface/datasets/issues/5475/events
https://github.com/huggingface/datasets/issues/5475
1,559,030,149
I_kwDODunzps5c7OmF
5,475
Dataset scan time is much slower than using native arrow
{ "login": "jonny-cyberhaven", "id": 121845112, "node_id": "U_kgDOB0M1eA", "avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonny-cyberhaven", "html_url": "https://github.com/jonny-cyberhaven", "followers_url": "https://api.github.com/users/jonny-cyberhaven/followers", "following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}", "gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions", "organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs", "repos_url": "https://api.github.com/users/jonny-cyberhaven/repos", "events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}", "received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-01-27T01:32:25"
"2023-01-30T16:17:11"
"2023-01-30T16:17:11"
CONTRIBUTOR
null
### Describe the bug I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version. I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that explains this phenomenon? ### Steps to reproduce the bug https://colab.research.google.com/drive/11EtHDaGAf1DKCpvYnAPJUW-LFfAcDzHY?usp=sharing ### Expected behavior I expect scan times to be on par with using pyarrow directly. ### Environment info standard colab environment
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5475/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5474/comments
https://api.github.com/repos/huggingface/datasets/issues/5474/events
https://github.com/huggingface/datasets/issues/5474
1,558,827,155
I_kwDODunzps5c6dCT
5,474
Column project operation on `datasets.Dataset`
{ "login": "daskol", "id": 9336514, "node_id": "MDQ6VXNlcjkzMzY1MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daskol", "html_url": "https://github.com/daskol", "followers_url": "https://api.github.com/users/daskol/followers", "following_url": "https://api.github.com/users/daskol/following{/other_user}", "gists_url": "https://api.github.com/users/daskol/gists{/gist_id}", "starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daskol/subscriptions", "organizations_url": "https://api.github.com/users/daskol/orgs", "repos_url": "https://api.github.com/users/daskol/repos", "events_url": "https://api.github.com/users/daskol/events{/privacy}", "received_events_url": "https://api.github.com/users/daskol/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2023-01-26T21:47:53"
"2023-02-01T16:44:09"
null
NONE
null
### Feature request There is no operation to select a subset of columns of the original dataset. The expected API follows. ```python a = Dataset.from_dict({ 'int': [0, 1, 2], 'char': ['a', 'b', 'c'], 'none': [None] * 3, }) b = a.project('int', 'char') # usually, .select() print(a.column_names) # stdout: ['int', 'char', 'none'] print(b.column_names) # stdout: ['int', 'char'] ``` The `project` method could accept not only column names (as `str`) but also, for example, a univariate function applied to the corresponding column. Alternatively, keyword arguments could be used to rename columns on the fly (see `pandas`, `pyspark`, `pyarrow`, and SQL). ### Motivation Projection is a typical operation in every data processing library, and it is a basic building block of well-known data manipulation languages like SQL. Without this operation the `datasets.Dataset` interface is not complete. ### Your contribution Not sure. Some of my PRs are still open and some do not have any discussions.
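Until such a projection operation exists, a minimal workaround sketch (reusing the `a` dataset from the snippet above) is to invert the selection with the existing `remove_columns`:

```python
# keep only the requested columns by dropping everything else
keep = {"int", "char"}
b = a.remove_columns([name for name in a.column_names if name not in keep])
print(b.column_names)  # ['int', 'char']
```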
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5474/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5468/comments
https://api.github.com/repos/huggingface/datasets/issues/5468/events
https://github.com/huggingface/datasets/issues/5468
1,558,066,625
I_kwDODunzps5c3jXB
5,468
Allow opposite of remove_columns on Dataset and DatasetDict
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
9
"2023-01-26T12:28:09"
"2023-01-31T10:48:56"
null
NONE
null
### Feature request In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code: ```python COLUMNS_TO_KEEP = ["text", "audio"] all_columns = gigaspeech["train"].column_names columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP) gigaspeech = gigaspeech.remove_columns(columns_to_remove) ``` This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error prone) if you could just write: ```python gigaspeech = gigaspeech.keep_columns(["text", "audio"]) ``` Internally, `keep_columns` could still call `remove_columns`, but it expresses more clearly what the user's intent is. ### Motivation Less code to write for the user of the dataset. ### Your contribution -
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5468/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5465/comments
https://api.github.com/repos/huggingface/datasets/issues/5465/events
https://github.com/huggingface/datasets/issues/5465
1,557,510,618
I_kwDODunzps5c1bna
5,465
audiofolder creates empty dataset even though the dataset passed in follows the correct structure
{ "login": "jcho19", "id": 107211437, "node_id": "U_kgDOBmPqrQ", "avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcho19", "html_url": "https://github.com/jcho19", "followers_url": "https://api.github.com/users/jcho19/followers", "following_url": "https://api.github.com/users/jcho19/following{/other_user}", "gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcho19/subscriptions", "organizations_url": "https://api.github.com/users/jcho19/orgs", "repos_url": "https://api.github.com/users/jcho19/repos", "events_url": "https://api.github.com/users/jcho19/events{/privacy}", "received_events_url": "https://api.github.com/users/jcho19/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2023-01-26T01:45:45"
"2023-01-26T08:48:45"
"2023-01-26T08:48:45"
NONE
null
### Describe the bug The structure of my dataset folder called "my_dataset" is : data metadata.csv The data folder consists of all mp3 files and metadata.csv consist of file locations like 'data/...mp3 and transcriptions. There's 400+ mp3 files and corresponding transcriptions for my dataset. When I run the following: ds = load_dataset("audiofolder", data_dir="my_dataset") I get: Using custom data configuration default-... Downloading and preparing dataset audiofolder/default to /... Downloading data files: 0%| | 0/2 [00:00<?, ?it/s] Downloading data files: 0it [00:00, ?it/s] Extracting data files: 0it [00:00, ?it/s] Generating train split: 0 examples [00:00, ? examples/s] Dataset audiofolder downloaded and prepared to /.... Subsequent calls will reuse this data. 0%| | 0/1 [00:00<?, ?it/s] DatasetDict({ train: Dataset({ features: ['audio', 'transcription'], num_rows: 1 }) }) ### Steps to reproduce the bug Create a dataset folder called 'my_dataset' with a subfolder called 'data' that has mp3 files. Also, create metadata.csv that has file locations like 'data/...mp3' and their corresponding transcription. Run: ds = load_dataset("audiofolder", data_dir="my_dataset") ### Expected behavior It should generate a dataset with numerous rows. ### Environment info Run on Jupyter notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5465/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5464/comments
https://api.github.com/repos/huggingface/datasets/issues/5464/events
https://github.com/huggingface/datasets/issues/5464
1,557,462,104
I_kwDODunzps5c1PxY
5,464
NonMatchingChecksumError for hendrycks_test
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-01-26T00:43:23"
"2023-01-27T05:44:31"
"2023-01-26T07:41:58"
NONE
null
### Describe the bug The checksum of the file has likely changed on the remote host. ### Steps to reproduce the bug `dataset = nlp.load_dataset("hendrycks_test", "anatomy")` ### Expected behavior no error thrown ### Environment info - `datasets` version: 2.2.1 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5464/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5461/comments
https://api.github.com/repos/huggingface/datasets/issues/5461/events
https://github.com/huggingface/datasets/issues/5461
1,555,532,719
I_kwDODunzps5ct4uv
5,461
Discrepancy in `nyu_depth_v2` dataset
{ "login": "awsaf49", "id": 36858976, "node_id": "MDQ6VXNlcjM2ODU4OTc2", "avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awsaf49", "html_url": "https://github.com/awsaf49", "followers_url": "https://api.github.com/users/awsaf49/followers", "following_url": "https://api.github.com/users/awsaf49/following{/other_user}", "gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}", "starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions", "organizations_url": "https://api.github.com/users/awsaf49/orgs", "repos_url": "https://api.github.com/users/awsaf49/repos", "events_url": "https://api.github.com/users/awsaf49/events{/privacy}", "received_events_url": "https://api.github.com/users/awsaf49/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
37
"2023-01-24T19:15:46"
"2023-02-06T20:52:00"
null
CONTRIBUTOR
null
### Describe the bug I think there is a discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-side comparison, ![image](https://user-images.githubusercontent.com/36858976/214381162-1d9582c2-6750-4114-a01a-61ca1cd5f872.png) I tried to find the origin of this issue but sadly as I mentioned in tensorflow/datasets/issues/4674, the download link from `fast-depth` doesn't work anymore hence couldn't verify if the error originated there or during porting data from there to HF. Hi @sayakpaul, as you worked on huggingface/datasets/issues/5255, if you still have access to that data could you please share the data or perhaps checkout this issue? ### Steps to reproduce the bug This [notebook](https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing#scrollTo=UEW7QSh0jf0i) from @sayakpaul could be used to generate depth maps and actual ground truths could be checked from this [dataset](https://www.kaggle.com/datasets/awsaf49/nyuv2-bts-dataset) from BTS repo. > Note: BTS dataset has only 36K data compared to the train-test 50K. They sampled the data as adjacent frames look quite the same ### Expected behavior Expected depth maps should be smooth rather than discrete/clipped. ### Environment info - `datasets` version: 2.8.1.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5461/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5458/comments
https://api.github.com/repos/huggingface/datasets/issues/5458/events
https://github.com/huggingface/datasets/issues/5458
1,555,054,737
I_kwDODunzps5csECR
5,458
slice split while streaming
{ "login": "SvenDS9", "id": 122370631, "node_id": "U_kgDOB0s6Rw", "avatar_url": "https://avatars.githubusercontent.com/u/122370631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SvenDS9", "html_url": "https://github.com/SvenDS9", "followers_url": "https://api.github.com/users/SvenDS9/followers", "following_url": "https://api.github.com/users/SvenDS9/following{/other_user}", "gists_url": "https://api.github.com/users/SvenDS9/gists{/gist_id}", "starred_url": "https://api.github.com/users/SvenDS9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SvenDS9/subscriptions", "organizations_url": "https://api.github.com/users/SvenDS9/orgs", "repos_url": "https://api.github.com/users/SvenDS9/repos", "events_url": "https://api.github.com/users/SvenDS9/events{/privacy}", "received_events_url": "https://api.github.com/users/SvenDS9/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-01-24T14:08:17"
"2023-01-24T15:11:47"
"2023-01-24T15:11:47"
NONE
null
### Describe the bug When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported. Did I miss this in the documentation? ### Steps to reproduce the bug `load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")` causes ValueError: Bad split: train[:3]. Available splits: ['train', 'test'] in builder.py, line 1213, in as_streaming_dataset ### Expected behavior The first 3 entries of the dataset as a stream ### Environment info - `datasets` version: 2.8.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.9 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
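As a sketch of the usual workaround, stream the full split and truncate it lazily with `take`:

```python
from datasets import load_dataset

ds = load_dataset("lhoestq/demo1", streaming=True, split="train")
first_three = list(ds.take(3))  # lazily yields only the first 3 examples
```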
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5458/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5457/comments
https://api.github.com/repos/huggingface/datasets/issues/5457/events
https://github.com/huggingface/datasets/issues/5457
1,554,171,264
I_kwDODunzps5cosWA
5,457
prebuilt dataset relies on `downloads/extracted`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-01-24T02:09:32"
"2023-01-24T18:14:10"
null
MEMBER
null
### Describe the bug I pre-built the dataset: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` and it can be used just fine. now I wipe out `downloads/extracted` and it no longer works. ``` rm -r ~/.cache/huggingface/datasets/downloads ``` That is I can still load it: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing No config specified, defaulting to: general-pmd-synthetic-testing/100.unique Found cached dataset general-pmd-synthetic-testing (/home/stas/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing/100.unique/1.1.1/86bc445e3e48cb5ef79de109eb4e54ff85b318cd55c3835c4ee8f86eae33d9d2) ``` but if I try to use it: ``` E stderr: Traceback (most recent call last): E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/main.py", line 116, in <module> E stderr: train_loader, val_loader = get_dataloaders( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 170, in get_dataloaders E stderr: train_loader = get_dataloader_from_config( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 443, in get_dataloader_from_config E stderr: dataloader = get_dataloader( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 264, in get_dataloader E stderr: is_pmd = "meta" in hf_dataset[0] and "source" in hf_dataset[0] E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2601, in __getitem__ E stderr: return self._getitem( E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2586, in _getitem E stderr: formatted_output = format_table( E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 634, in format_table E stderr: return formatter(pa_table, query_type=query_type) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 406, in __call__ E stderr: return self.format_row(pa_table) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 442, in format_row E stderr: row = self.python_features_decoder.decode_row(row) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 225, in decode_row E stderr: return self.features.decode_example(row) if self.features else row E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1846, in decode_example E stderr: return { E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1847, in <dictcomp> E stderr: column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1304, in decode_nested_example E stderr: return decode_nested_example([schema.feature], obj) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1296, in decode_nested_example E stderr: if decode_nested_example(sub_schema, first_elmt) != first_elmt: E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1309, in decode_nested_example E stderr: return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) E 
stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/image.py", line 144, in decode_example E stderr: image = PIL.Image.open(path) E stderr: File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/PIL/Image.py", line 3092, in open E stderr: fp = builtins.open(filename, "rb") E stderr: FileNotFoundError: [Errno 2] No such file or directory: '/mnt/nvme0/code/data/cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data/101/images_01.jpg' ``` Only if I wipe out the cached dir and rebuild then it starts working as `download/extracted` is back again with extracted files. ``` rm -r ~/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` I think there are 2 issues here: 1. why does it still rely on extracted files after `arrow` files were printed - did I do something incorrectly when creating this dataset? 2. why doesn't the dataset know that it has been gutted and loads just fine? If it has a dependency on `download/extracted` then `load_dataset` should check if it's there and fail or force rebuilding. I am sure this could be a very expensive operation, so probably really solving #1 will not require this check. and this second item is probably an overkill. Other than perhaps if it had an optional `check_consistency` flag to do that. ### Environment info datasets@main
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5457/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5454/comments
https://api.github.com/repos/huggingface/datasets/issues/5454/events
https://github.com/huggingface/datasets/issues/5454
1,552,890,419
I_kwDODunzps5cjzoz
5,454
Save and resume the state of a DataLoader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
2
"2023-01-23T10:58:54"
"2023-01-24T01:45:48"
null
MEMBER
null
It would be nice, when using `datasets` with a PyTorch DataLoader, to be able to resume a training run from a DataLoader state (e.g. to resume a training that crashed). What I have in mind (but lmk if you have other ideas or comments): For map-style datasets, this requires having a PyTorch Sampler state that can be saved and reloaded per node and worker. For iterable datasets, this requires saving the state of the dataset iterator, which includes: - the current shard idx and row position in the current shard - the epoch number - the rng state - the shuffle buffer Right now you can already resume the data loading of an iterable dataset by using `IterableDataset.skip`, but it takes a lot of time because it re-iterates over all the past data until it reaches the resuming point. cc @stas00 @sgugger
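For reference, a minimal sketch of today's slow resume path for an iterable dataset; `num_examples_seen` is a placeholder for a value restored from a training checkpoint, and the dataset and loader names are illustrative only:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

iterable_ds = load_dataset("c4", "en", split="train", streaming=True)
num_examples_seen = 10_000  # hypothetical value restored from a checkpoint

# fast-forward past what was already consumed; skip() re-reads every skipped
# example, which is exactly the slowness described above
resumed_loader = DataLoader(iterable_ds.skip(num_examples_seen), batch_size=8)
```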
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5454/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5454/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5451/comments
https://api.github.com/repos/huggingface/datasets/issues/5451/events
https://github.com/huggingface/datasets/issues/5451
1,552,336,300
I_kwDODunzps5chsWs
5,451
ImageFolder BadZipFile: Bad offset for central directory
{ "login": "hmartiro", "id": 1524208, "node_id": "MDQ6VXNlcjE1MjQyMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1524208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hmartiro", "html_url": "https://github.com/hmartiro", "followers_url": "https://api.github.com/users/hmartiro/followers", "following_url": "https://api.github.com/users/hmartiro/following{/other_user}", "gists_url": "https://api.github.com/users/hmartiro/gists{/gist_id}", "starred_url": "https://api.github.com/users/hmartiro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hmartiro/subscriptions", "organizations_url": "https://api.github.com/users/hmartiro/orgs", "repos_url": "https://api.github.com/users/hmartiro/repos", "events_url": "https://api.github.com/users/hmartiro/events{/privacy}", "received_events_url": "https://api.github.com/users/hmartiro/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-01-22T23:50:12"
"2023-01-23T17:25:56"
null
NONE
null
### Describe the bug I'm getting the following exception: ``` lib/python3.10/zipfile.py:1353 in _RealGetContents │ │ │ │ 1350 │ │ # self.start_dir: Position of start of central directory │ │ 1351 │ │ self.start_dir = offset_cd + concat │ │ 1352 │ │ if self.start_dir < 0: │ │ ❱ 1353 │ │ │ raise BadZipFile("Bad offset for central directory") │ │ 1354 │ │ fp.seek(self.start_dir, 0) │ │ 1355 │ │ data = fp.read(size_cd) │ │ 1356 │ │ fp = io.BytesIO(data) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ BadZipFile: Bad offset for central directory Extracting data files: 35%|█████████████████▊ | 38572/110812 [00:10<00:20, 3576.26it/s] ``` ### Steps to reproduce the bug ``` load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, ), ``` ### Expected behavior loads the dataset ### Environment info datasets==2.8.0 Python 3.10.8 Linux 129-146-3-202 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5451/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5450/comments
https://api.github.com/repos/huggingface/datasets/issues/5450/events
https://github.com/huggingface/datasets/issues/5450
1,551,109,365
I_kwDODunzps5cdAz1
5,450
to_tf_dataset with a TF collator causes bizarrely persistent slowdown
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
"2023-01-20T16:08:37"
"2023-01-23T18:54:09"
null
MEMBER
null
### Describe the bug This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing) Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data collator that returns `tf` tensors, become very slow. We haven't been able to figure this one out - it can be intermittent, and we have no idea what could possibly cause it. The weirdest thing is that **the slowdown affects other attempts to access the underlying dataset**. If you try to iterate over the `tf.data.Dataset`, then interrupt execution, and then try to iterate over the original dataset, the original dataset is now also very slow! This is true even if the dataset format is not set to `tf` - the iteration is slow even though it's not calling TF at all! There is a simple workaround for this - we can simply get our data collators to return `np` tensors. When we do this, the bug is never triggered and everything is fine. In general, `np` is preferred for this kind of preprocessing work anyway, when the preprocessing is not going to be compiled into a pure `tf.data` pipeline! However, the issue is fascinating, and the TF team were wondering if anyone in datasets (cc @lhoestq @mariosasko) might have an idea of what could cause this. ### Steps to reproduce the bug Run the attached Colab. ### Expected behavior The slowdown should go away, or at least not persist after we stop iterating over the `tf.data.Dataset` ### Environment info The issue occurs on multiple versions of Python and TF, both on local machines and on Colab. All testing was done using the latest versions of `transformers` and `datasets` from `main`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5450/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5450/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5448/comments
https://api.github.com/repos/huggingface/datasets/issues/5448/events
https://github.com/huggingface/datasets/issues/5448
1,550,618,514
I_kwDODunzps5cbI-S
5,448
Support fsspec 2023.1.0 in CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-20T10:26:31"
"2023-01-20T13:26:05"
"2023-01-20T13:26:05"
MEMBER
null
Once we find out the root cause of: - #5445 we should revert the temporary pin on fsspec introduced by: - #5447
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5448/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5445/comments
https://api.github.com/repos/huggingface/datasets/issues/5445/events
https://github.com/huggingface/datasets/issues/5445
1,550,588,703
I_kwDODunzps5cbBsf
5,445
CI tests are broken: AttributeError: 'mappingproxy' object has no attribute 'target'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-20T10:03:10"
"2023-01-20T10:28:44"
"2023-01-20T10:28:44"
MEMBER
null
CI tests are broken, raising `AttributeError: 'mappingproxy' object has no attribute 'target'`. See: https://github.com/huggingface/datasets/actions/runs/3966497597/jobs/6797384185 ``` ... ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - AttributeError: 'mappingproxy' object has no attribute 'target' ===== 2076 passed, 19 skipped, 15 warnings, 47 errors in 115.54s (0:01:55) ===== ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5445/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5444/comments
https://api.github.com/repos/huggingface/datasets/issues/5444/events
https://github.com/huggingface/datasets/issues/5444
1,550,185,071
I_kwDODunzps5cZfJv
5,444
info messages logged as warnings
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
"2023-01-20T01:19:18"
"2023-01-25T15:43:22"
null
NONE
null
### Describe the bug Code in `datasets` is using `logger.warning` when it should be using `logger.info`. Some of these are probably a matter of opinion, but I think anything starting with `logger.warning(f"Loading cached` clearly falls into the info category. Definitions from the Python docs for reference: * INFO: Confirmation that things are working as expected. * WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected. In theory, a user should be able to resolve things such that there are no warnings. ### Steps to reproduce the bug Load any dataset that's already cached. ### Expected behavior No output when the log level is at the default WARNING level. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 9.0.0 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5444/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5442/comments
https://api.github.com/repos/huggingface/datasets/issues/5442/events
https://github.com/huggingface/datasets/issues/5442
1,550,084,450
I_kwDODunzps5cZGli
5,442
OneDrive Integrations with HF Datasets
{ "login": "Mohammed20201991", "id": 59222637, "node_id": "MDQ6VXNlcjU5MjIyNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mohammed20201991", "html_url": "https://github.com/Mohammed20201991", "followers_url": "https://api.github.com/users/Mohammed20201991/followers", "following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}", "gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions", "organizations_url": "https://api.github.com/users/Mohammed20201991/orgs", "repos_url": "https://api.github.com/users/Mohammed20201991/repos", "events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}", "received_events_url": "https://api.github.com/users/Mohammed20201991/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2023-01-19T23:12:08"
"2023-01-20T18:05:52"
null
NONE
null
### Feature request First of all, I would like to thank the whole community that developed the Datasets storage and made it freely available. How can we integrate our OneDrive account, or any other cloud storage (like Google Drive, ...), with the **HF** datasets section? For example, if I have **50GB** on my **OneDrive** account, I want to be able to move data between the drive and a Hugging Face repo, or vice versa. ### Motivation Make the dataset section more flexible with other possible storage options, similar to the integration between Google Colab and Google Drive. ### Your contribution Can be done using the Hugging Face CLI
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5442/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5439/comments
https://api.github.com/repos/huggingface/datasets/issues/5439/events
https://github.com/huggingface/datasets/issues/5439
1,537,973,564
I_kwDODunzps5bq508
5,439
[dataset request] Add Common Voice 12.0
{ "login": "MohammedRakib", "id": 31034499, "node_id": "MDQ6VXNlcjMxMDM0NDk5", "avatar_url": "https://avatars.githubusercontent.com/u/31034499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MohammedRakib", "html_url": "https://github.com/MohammedRakib", "followers_url": "https://api.github.com/users/MohammedRakib/followers", "following_url": "https://api.github.com/users/MohammedRakib/following{/other_user}", "gists_url": "https://api.github.com/users/MohammedRakib/gists{/gist_id}", "starred_url": "https://api.github.com/users/MohammedRakib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohammedRakib/subscriptions", "organizations_url": "https://api.github.com/users/MohammedRakib/orgs", "repos_url": "https://api.github.com/users/MohammedRakib/repos", "events_url": "https://api.github.com/users/MohammedRakib/events{/privacy}", "received_events_url": "https://api.github.com/users/MohammedRakib/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-18T13:07:05"
"2023-01-25T18:38:53"
null
NONE
null
### Feature request Please add the Common Voice 12.0 (`12_0`) dataset. Apart from English, a significant amount of audio data has been added to the other minor-language datasets. ### Motivation The dataset link: https://commonvoice.mozilla.org/en/datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5439/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5439/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5437/comments
https://api.github.com/repos/huggingface/datasets/issues/5437/events
https://github.com/huggingface/datasets/issues/5437
1,536,837,144
I_kwDODunzps5bmkYY
5,437
Can't load png dataset with 4 channel (RGBA)
{ "login": "WiNE-iNEFF", "id": 41611046, "node_id": "MDQ6VXNlcjQxNjExMDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WiNE-iNEFF", "html_url": "https://github.com/WiNE-iNEFF", "followers_url": "https://api.github.com/users/WiNE-iNEFF/followers", "following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}", "gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}", "starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions", "organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs", "repos_url": "https://api.github.com/users/WiNE-iNEFF/repos", "events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}", "received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-01-17T18:22:27"
"2023-01-18T20:20:15"
"2023-01-18T20:20:15"
NONE
null
I am trying to create a dataset which contains about 9000 PNG images, 64x64 in size, all of them 4-channel (RGBA). When trying to use load_dataset(), a dataset is created from only 2 images. I cannot understand what exactly is interfering. ![Screenshot_20230117_212213.jpg](https://user-images.githubusercontent.com/41611046/212980147-9aa68e30-76e9-4b61-a937-c2fdabd56564.jpg)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5437/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5435/comments
https://api.github.com/repos/huggingface/datasets/issues/5435/events
https://github.com/huggingface/datasets/issues/5435
1,536,099,300
I_kwDODunzps5bjwPk
5,435
Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage
{ "login": "HaoyuYang59", "id": 80093591, "node_id": "MDQ6VXNlcjgwMDkzNTkx", "avatar_url": "https://avatars.githubusercontent.com/u/80093591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HaoyuYang59", "html_url": "https://github.com/HaoyuYang59", "followers_url": "https://api.github.com/users/HaoyuYang59/followers", "following_url": "https://api.github.com/users/HaoyuYang59/following{/other_user}", "gists_url": "https://api.github.com/users/HaoyuYang59/gists{/gist_id}", "starred_url": "https://api.github.com/users/HaoyuYang59/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HaoyuYang59/subscriptions", "organizations_url": "https://api.github.com/users/HaoyuYang59/orgs", "repos_url": "https://api.github.com/users/HaoyuYang59/repos", "events_url": "https://api.github.com/users/HaoyuYang59/events{/privacy}", "received_events_url": "https://api.github.com/users/HaoyuYang59/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2023-01-17T10:04:16"
"2023-01-19T09:56:03"
"2023-01-19T09:56:03"
NONE
null
### Describe the bug In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip) section, it states: > Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer. Therefore it is advised to shuffle the dataset before splitting using take or skip. See more details in the [Shuffling the dataset: shuffle](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#iterable-dataset-shuffling) section. >> \# You can also create splits from a shuffled dataset >> train_dataset = shuffled_dataset.skip(1000) >> eval_dataset = shuffled_dataset.take(1000) Where the shuffled dataset comes from: `shuffled_dataset = dataset.shuffle(buffer_size=10_000, seed=42)` At least in TensorFlow 2.9/2.10/2.11, the [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) state that the `reshuffle_each_iteration` argument is `True` by default. This means the dataset would be reshuffled after each epoch, and as a result **the validation data would leak into the training set**. ### Steps to reproduce the bug N/A ### Expected behavior The `reshuffle_each_iteration` argument should be set to `False`. ### Environment info TensorFlow 2.9/2.10/2.11
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5435/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5434
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5434/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5434/comments
https://api.github.com/repos/huggingface/datasets/issues/5434/events
https://github.com/huggingface/datasets/issues/5434
1,536,090,042
I_kwDODunzps5bjt-6
5,434
sample_dataset module not found
{ "login": "nickums", "id": 15816213, "node_id": "MDQ6VXNlcjE1ODE2MjEz", "avatar_url": "https://avatars.githubusercontent.com/u/15816213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickums", "html_url": "https://github.com/nickums", "followers_url": "https://api.github.com/users/nickums/followers", "following_url": "https://api.github.com/users/nickums/following{/other_user}", "gists_url": "https://api.github.com/users/nickums/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickums/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickums/subscriptions", "organizations_url": "https://api.github.com/users/nickums/orgs", "repos_url": "https://api.github.com/users/nickums/repos", "events_url": "https://api.github.com/users/nickums/events{/privacy}", "received_events_url": "https://api.github.com/users/nickums/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-01-17T09:57:54"
"2023-01-19T13:52:12"
"2023-01-19T07:55:11"
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5434/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5433/comments
https://api.github.com/repos/huggingface/datasets/issues/5433/events
https://github.com/huggingface/datasets/issues/5433
1,536,017,901
I_kwDODunzps5bjcXt
5,433
Support latest Docker image in CI benchmarks
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
3
"2023-01-17T09:06:08"
"2023-01-18T06:29:08"
"2023-01-18T06:29:08"
MEMBER
null
Once we find out the root cause of: - #5431 we should revert the temporary pin on the Docker image version introduced by: - #5432
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5433/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5433/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5431
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5431/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5431/comments
https://api.github.com/repos/huggingface/datasets/issues/5431/events
https://github.com/huggingface/datasets/issues/5431
1,535,862,621
I_kwDODunzps5bi2dd
5,431
CI benchmarks are broken: Unknown arguments: runnerPath, path
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-17T06:49:57"
"2023-01-18T06:33:24"
"2023-01-17T08:51:18"
MEMBER
null
Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161 ``` Unknown arguments: runnerPath, path ``` Stack trace: ``` 100%|██████████| 500/500 [00:01<00:00, 338.98ba/s] Updating lock file 'dvc.lock' To track the changes with git, run: git add dvc.lock To enable auto staging, run: dvc config core.autostage true Use `dvc push` to send your updates to remote storage. cml send-comment <markdown file> Global Options: --log Logging verbosity [string] [choices: "error", "warn", "info", "debug"] [default: "info"] --driver Git provider where the repository is hosted [string] [choices: "github", "gitlab", "bitbucket"] [default: infer from the environment] --repo Repository URL or slug [string] [default: infer from the environment] --driver-token, --token CI driver personal/project access token (PAT) [string] [default: infer from the environment] --help Show help [boolean] Options: --target Comment type (`commit`, `pr`, `commit/f00bar`, `pr/42`, `issue/1337`),default is automatic (`pr` but fallback to `commit`). [string] --watch Watch for changes and automatically update the comment [boolean] --publish Upload any local images found in the Markdown report [boolean] [default: true] --publish-url Self-hosted image server URL [string] [default: "https://asset.cml.dev/"] --publish-native, --native Uses driver's native capabilities to upload assets instead of CML's storage; not available on GitHub [boolean] --watermark-title Hidden comment marker (used for targeting in subsequent `cml comment update`); "{workflow}" & "{run}" are auto-replaced [string] [default: ""] Unknown arguments: runnerPath, path Error: Process completed with exit code 1. ``` Issue reported to iterative/cml: - iterative/cml#1319
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5431/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5431/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5430/comments
https://api.github.com/repos/huggingface/datasets/issues/5430/events
https://github.com/huggingface/datasets/issues/5430
1,535,856,503
I_kwDODunzps5bi093
5,430
Support Apache Beam >= 2.44.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-17T06:42:12"
"2023-01-17T16:12:18"
null
MEMBER
null
Once we find out the root cause of: - #5426 we should revert the temporary pin on apache-beam introduced by: - #5429
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5430/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5428/comments
https://api.github.com/repos/huggingface/datasets/issues/5428/events
https://github.com/huggingface/datasets/issues/5428
1,535,166,139
I_kwDODunzps5bgMa7
5,428
Load/Save FAISS index using fsspec
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
"2023-01-16T16:08:12"
"2023-01-19T16:34:34"
null
CONTRIBUTOR
null
### Feature request From what I understand, `faiss` already supports this ([link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)). I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`. ### Motivation In my case, I'm saving FAISS indexes in cloud storage and using `fsspec` to load them. It would be ideal if I could pass the stream directly instead of copying the file locally (or mounting the bucket) and then loading the index. ### Your contribution I can submit the PR
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5428/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5427
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5427/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5427/comments
https://api.github.com/repos/huggingface/datasets/issues/5427/events
https://github.com/huggingface/datasets/issues/5427
1,535,162,889
I_kwDODunzps5bgLoJ
5,427
Unable to download dataset id_clickbait
{ "login": "ilos-vigil", "id": 45941585, "node_id": "MDQ6VXNlcjQ1OTQxNTg1", "avatar_url": "https://avatars.githubusercontent.com/u/45941585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ilos-vigil", "html_url": "https://github.com/ilos-vigil", "followers_url": "https://api.github.com/users/ilos-vigil/followers", "following_url": "https://api.github.com/users/ilos-vigil/following{/other_user}", "gists_url": "https://api.github.com/users/ilos-vigil/gists{/gist_id}", "starred_url": "https://api.github.com/users/ilos-vigil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ilos-vigil/subscriptions", "organizations_url": "https://api.github.com/users/ilos-vigil/orgs", "repos_url": "https://api.github.com/users/ilos-vigil/repos", "events_url": "https://api.github.com/users/ilos-vigil/events{/privacy}", "received_events_url": "https://api.github.com/users/ilos-vigil/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-16T16:05:36"
"2023-01-18T09:51:28"
"2023-01-18T09:25:19"
NONE
null
### Describe the bug I tried to download the dataset `id_clickbait`, but received this error message. ``` FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip ``` When I open the link in a browser, I get this XML data. ```xml <?xml version="1.0" encoding="UTF-8"?> <Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>md-datasets-cache-zipfiles-prod</BucketName><RequestId>NVRM6VEEQD69SD00</RequestId><HostId>W/SPDxLGvlCGi0OD6d7mSDvfOAUqLAfvs9nTX50BkJrjMny+X9Jnqp/Li2lG9eTUuT4MUkAA2jjTfCrCiUmu7A==</HostId></Error> ``` ### Steps to reproduce the bug Code snippet: ``` from datasets import load_dataset load_dataset('id_clickbait', 'annotated') load_dataset('id_clickbait', 'raw') ``` Link to Kaggle notebook: https://www.kaggle.com/code/ilosvigil/bug-check-on-id-clickbait-dataset ### Expected behavior Successfully download and load the `id_clickbait` dataset. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5427/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5426/comments
https://api.github.com/repos/huggingface/datasets/issues/5426/events
https://github.com/huggingface/datasets/issues/5426
1,535,158,555
I_kwDODunzps5bgKkb
5,426
CI tests are broken: SchemaInferenceError
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-16T16:02:07"
"2023-01-17T07:17:12"
"2023-01-16T16:49:04"
MEMBER
null
CI is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004 ``` FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data ``` Stack trace: ``` ______________ BeamBuilderTest.test_download_and_prepare_sharded _______________ [gw1] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python self = <tests.test_beam.BeamBuilderTest testMethod=test_download_and_prepare_sharded> @require_beam def test_download_and_prepare_sharded(self): import apache_beam as beam original_write_parquet = beam.io.parquetio.WriteToParquet expected_num_examples = len(get_test_dummy_examples()) with tempfile.TemporaryDirectory() as tmp_cache_dir: builder = DummyBeamDataset(cache_dir=tmp_cache_dir, beam_runner="DirectRunner") with patch("apache_beam.io.parquetio.WriteToParquet") as write_parquet_mock: write_parquet_mock.side_effect = partial(original_write_parquet, num_shards=2) > builder.download_and_prepare() tests/test_beam.py:97: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:864: in download_and_prepare **download_and_prepare_kwargs, /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:1976: in _download_and_prepare num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter)) /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:694: in finalize shard_num_bytes, _ = parquet_to_arrow(source, destination) /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:740: in parquet_to_arrow num_bytes, num_examples = writer.finalize() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <datasets.arrow_writer.ArrowWriter object at 0x7f6dcbb3e810> close_stream = True def finalize(self, close_stream=True): self.write_rows_on_file() # In case current_examples < writer_batch_size, but user uses finalize() if self._check_duplicates: self.check_duplicate_keys() # Re-intializing to empty list for next batch self.hkey_record = [] self.write_examples_on_file() # If schema is known, infer features even if no examples were written if self.pa_writer is None and self.schema: self._build_writer(self.schema) if self.pa_writer is not None: self.pa_writer.close() self.pa_writer = None if close_stream: self.stream.close() else: if close_stream: self.stream.close() > raise SchemaInferenceError("Please pass `features` or at least one example when writing data") E datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:593: SchemaInferenceError ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5426/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5425/comments
https://api.github.com/repos/huggingface/datasets/issues/5425/events
https://github.com/huggingface/datasets/issues/5425
1,534,581,850
I_kwDODunzps5bd9xa
5,425
Sort on multiple keys with datasets.Dataset.sort()
{ "login": "rocco-fortuna", "id": 101344863, "node_id": "U_kgDOBgpmXw", "avatar_url": "https://avatars.githubusercontent.com/u/101344863?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rocco-fortuna", "html_url": "https://github.com/rocco-fortuna", "followers_url": "https://api.github.com/users/rocco-fortuna/followers", "following_url": "https://api.github.com/users/rocco-fortuna/following{/other_user}", "gists_url": "https://api.github.com/users/rocco-fortuna/gists{/gist_id}", "starred_url": "https://api.github.com/users/rocco-fortuna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rocco-fortuna/subscriptions", "organizations_url": "https://api.github.com/users/rocco-fortuna/orgs", "repos_url": "https://api.github.com/users/rocco-fortuna/repos", "events_url": "https://api.github.com/users/rocco-fortuna/events{/privacy}", "received_events_url": "https://api.github.com/users/rocco-fortuna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
9
"2023-01-16T09:22:26"
"2023-02-03T13:42:23"
null
NONE
null
### Feature request From the discussion on the forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1 `sort()` does not preserve ordering, and it supports neither sorting on multiple columns nor a key function. The suggested solution: > ... having something similar to pandas and be able to specify multiple columns for sorting. We’re already using pandas under the hood to do the sorting in datasets. The suggested workaround: > convert your dataset to pandas and use `df.sort_values()` ### Motivation Order-preserving sorting is very handy when one needs to sort on multiple columns, A and B, so that whenever A is equal for two or more rows, B stays sorted. Having a parameter to do this in 🤗datasets would be cleaner than going through pandas and back, and it wouldn't add much complexity to the library. Alternatives: - the possibility to specify multiple keys to sort by with decreasing priority (the suggested solution), - the ability to provide a key function for sorting, so that one can manually specify the sorting criteria. ### Your contribution I'll be happy to contribute by submitting a PR, following the process documented in `CONTRIBUTING.MD`. Would love to get thoughts on this, if anyone has anything to add.
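A minimal sketch of the suggested pandas round-trip workaround, assuming the dataset fits in memory; the column names `A` and `B` are invented for illustration, and `kind="mergesort"` is chosen because it is a stable sort:

```python
from datasets import Dataset

ds = Dataset.from_dict({"A": [2, 1, 2, 1], "B": [1, 3, 0, 2]})

# Round-trip through pandas to sort on several keys at once
df = ds.to_pandas()
df = df.sort_values(by=["A", "B"], kind="mergesort")  # stable multi-key sort
ds_sorted = Dataset.from_pandas(df, preserve_index=False)

print(ds_sorted["A"])  # [1, 1, 2, 2]
print(ds_sorted["B"])  # [2, 3, 0, 1]
```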
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5425/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5425/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5424
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5424/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5424/comments
https://api.github.com/repos/huggingface/datasets/issues/5424/events
https://github.com/huggingface/datasets/issues/5424
1,534,394,756
I_kwDODunzps5bdQGE
5,424
When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset?
{ "login": "macabdul9", "id": 25720695, "node_id": "MDQ6VXNlcjI1NzIwNjk1", "avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4", "gravatar_id": "", "url": "https://api.github.com/users/macabdul9", "html_url": "https://github.com/macabdul9", "followers_url": "https://api.github.com/users/macabdul9/followers", "following_url": "https://api.github.com/users/macabdul9/following{/other_user}", "gists_url": "https://api.github.com/users/macabdul9/gists{/gist_id}", "starred_url": "https://api.github.com/users/macabdul9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/macabdul9/subscriptions", "organizations_url": "https://api.github.com/users/macabdul9/orgs", "repos_url": "https://api.github.com/users/macabdul9/repos", "events_url": "https://api.github.com/users/macabdul9/events{/privacy}", "received_events_url": "https://api.github.com/users/macabdul9/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-01-16T06:54:28"
"2023-01-19T15:09:14"
null
NONE
null
### Describe the bug I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. The `ReadInstruction` objects are being applied correctly, but I was expecting the result to be a `DatasetDict`; instead it is a list of `Dataset` objects. ### Steps to reproduce the bug 1. Import `from datasets import load_dataset, ReadInstruction` 2. Build the instructions to load the dataset ``` instructions = [ ReadInstruction(split_name="train", from_=0, to=10, unit='%', rounding='closest'), ReadInstruction(split_name="dev", from_=0, to=10, unit='%', rounding='closest'), ReadInstruction(split_name="test", from_=0, to=5, unit='%', rounding='closest') ] ``` 3. Load `dataset = load_dataset('csv', data_dir="data/", data_files={"train":"train.tsv", "dev":"dev.tsv", "test":"test.tsv"}, delimiter="\t", split=instructions)` ### Expected behavior **Current behaviour** ![Screenshot from 2023-01-16 10-45-27](https://user-images.githubusercontent.com/25720695/212614754-306898d8-8c27-4475-9bb8-0321bd939561.png) **Expected behaviour** ![Screenshot from 2023-01-16 10-45-42](https://user-images.githubusercontent.com/25720695/212614813-0d336bf7-5266-482e-bb96-ef51f64de204.png) ### Environment info `datasets==2.8.0`, `Python==3.8.5`, Platform: Ubuntu 20.04.4 LTS
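For reference, a hedged workaround sketch: when `split` is a list of instructions, `load_dataset` returns one `Dataset` per instruction in the same order, so the result can be zipped back into a `DatasetDict` manually (the file names and percentages mirror the example above):

```python
from datasets import DatasetDict, ReadInstruction, load_dataset

split_names = ["train", "dev", "test"]
instructions = [
    ReadInstruction("train", from_=0, to=10, unit="%", rounding="closest"),
    ReadInstruction("dev", from_=0, to=10, unit="%", rounding="closest"),
    ReadInstruction("test", from_=0, to=5, unit="%", rounding="closest"),
]

# Returns a list of Dataset objects, one per ReadInstruction, in order
splits = load_dataset(
    "csv",
    data_dir="data/",
    data_files={"train": "train.tsv", "dev": "dev.tsv", "test": "test.tsv"},
    delimiter="\t",
    split=instructions,
)

dataset = DatasetDict(dict(zip(split_names, splits)))
print(dataset)
```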
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5424/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5424/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5422
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5422/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5422/comments
https://api.github.com/repos/huggingface/datasets/issues/5422/events
https://github.com/huggingface/datasets/issues/5422
1,533,385,239
I_kwDODunzps5bZZoX
5,422
Datasets load error for saved github issues
{ "login": "folterj", "id": 7360564, "node_id": "MDQ6VXNlcjczNjA1NjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7360564?v=4", "gravatar_id": "", "url": "https://api.github.com/users/folterj", "html_url": "https://github.com/folterj", "followers_url": "https://api.github.com/users/folterj/followers", "following_url": "https://api.github.com/users/folterj/following{/other_user}", "gists_url": "https://api.github.com/users/folterj/gists{/gist_id}", "starred_url": "https://api.github.com/users/folterj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/folterj/subscriptions", "organizations_url": "https://api.github.com/users/folterj/orgs", "repos_url": "https://api.github.com/users/folterj/repos", "events_url": "https://api.github.com/users/folterj/events{/privacy}", "received_events_url": "https://api.github.com/users/folterj/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-01-14T17:29:38"
"2023-01-16T13:10:30"
null
NONE
null
### Describe the bug Loading a previously downloaded & saved dataset as described in the HuggingFace course: issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") Gives this error: datasets.builder.DatasetGenerationError: An error occurred while generating the dataset A work-around I found was to use streaming. ### Steps to reproduce the bug Reproduce by executing the code provided: https://huggingface.co/course/chapter5/5?fw=pt From the heading: 'let’s create a function that can download all the issues from a GitHub repository' ### Expected behavior No error ### Environment info Datasets version 2.8.0. Note that version 2.6.1 gives the same error (related to null timestamp). **[EDIT]** This is the complete error trace confirming the issue is related to the timestamp (`Couldn't cast array of type timestamp[s] to null`) ``` Using custom data configuration default-950028611d2860c8 Downloading and preparing dataset json/default to [...]/.cache/huggingface/datasets/json/default-950028611d2860c8/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100%|██████████| 1/1 [00:00<?, ?it/s] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 500.63it/s] Generating train split: 2619 examples [00:00, 7155.72 examples/s]Traceback (most recent call last): File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1831, in _prepare_split_single writer.write_table(table) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\arrow_writer.py", line 567, in write_table pa_table = table_cast(pa_table, self._schema) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2282, in table_cast return cast_table_to_schema(table, schema) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2241, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1807, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in cast_array_to_feature arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2035, in <listcomp> arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()] File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper return func(array, *args, **kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 2101, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1809, in wrapper return func(array, *args, **kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\table.py", line 1990, in array_cast raise TypeError(f"Couldn't cast 
array of type {array.type} to {pa_type}") TypeError: Couldn't cast array of type timestamp[s] to null The above exception was the direct cause of the following exception: Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\pydevconsole.py", line 364, in runcode coro = func() File "<input>", line 1, in <module> File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile pydev_imports.execfile(filename, global_vars, local_vars) # execute the script File "C:\Program Files\JetBrains\PyCharm 2022.1.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "[...]\PycharmProjects\TransformersTesting\dataset_issues.py", line 20, in <module> issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train") File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\load.py", line 1757, in load_dataset builder_instance.download_and_prepare( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 860, in download_and_prepare self._download_and_prepare( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 953, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1706, in _prepare_split for job_id, done, content in self._prepare_split_single( File "[...]\miniconda3\envs\HuggingFace\lib\site-packages\datasets\builder.py", line 1849, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset Generating train split: 2619 examples [00:19, 7155.72 examples/s] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5422/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5422/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5421
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5421/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5421/comments
https://api.github.com/repos/huggingface/datasets/issues/5421/events
https://github.com/huggingface/datasets/issues/5421
1,532,278,307
I_kwDODunzps5bVLYj
5,421
Support case-insensitive Hub dataset name in load_dataset
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
1
"2023-01-13T13:07:07"
"2023-01-13T20:12:32"
"2023-01-13T20:12:32"
CONTRIBUTOR
null
### Feature request The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue. Ideally, we could load the glue dataset using the following: ``` from datasets import load_dataset load_dataset('GLUE', 'cola') ``` It breaks because the loading script `GLUE.py` does not exist (`glue.py` should be selected instead). Minor additional comment: in other cases without a loading script, we can load the dataset, but the automatically generated config name depends on the casing: - `load_dataset('severo/danish-wit')` generates the config name `severo--danish-wit-e6fda5b070deb133`, while - `load_dataset('severo/danish-WIT')` generates the config name `severo--danish-WIT-e6fda5b070deb133` ### Motivation To follow the same UX on the Hub and in the datasets library. ### Your contribution ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5421/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5421/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5419
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5419/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5419/comments
https://api.github.com/repos/huggingface/datasets/issues/5419/events
https://github.com/huggingface/datasets/issues/5419
1,531,999,850
I_kwDODunzps5bUHZq
5,419
label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataCollator
{ "login": "CreatixEA", "id": 172385, "node_id": "MDQ6VXNlcjE3MjM4NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/172385?v=4", "gravatar_id": "", "url": "https://api.github.com/users/CreatixEA", "html_url": "https://github.com/CreatixEA", "followers_url": "https://api.github.com/users/CreatixEA/followers", "following_url": "https://api.github.com/users/CreatixEA/following{/other_user}", "gists_url": "https://api.github.com/users/CreatixEA/gists{/gist_id}", "starred_url": "https://api.github.com/users/CreatixEA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CreatixEA/subscriptions", "organizations_url": "https://api.github.com/users/CreatixEA/orgs", "repos_url": "https://api.github.com/users/CreatixEA/repos", "events_url": "https://api.github.com/users/CreatixEA/events{/privacy}", "received_events_url": "https://api.github.com/users/CreatixEA/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-01-13T09:40:07"
"2023-01-19T15:46:51"
null
NONE
null
### Describe the bug When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using the `transformers.DataCollator`, the expected column name is `label` for a binary problem or `label_ids` for a multi-class problem. The column therefore has to be renamed to the expected name: `label` or `label_ids`. ### Steps to reproduce the bug ```python from datasets.tasks import TextClassification from transformers import AutoTokenizer, DataCollatorWithPadding ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0')) print(ds_prepared) tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") ds_tokenized = ds_prepared.map(lambda x: tokenizer(x['text'], truncation=True), batched=True) print(ds_tokenized) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") tf_data = model.prepare_tf_dataset(ds_tokenized, shuffle=True, batch_size=16, collate_fn=data_collator) print(tf_data) ``` ### Expected behavior Without renaming the column, the target column is not in the final tf_data, since its name does not match the one expected by the data_collator. To correct this, we have to rename the column: ```python ds_prepared = my_dataset.prepare_for_task(TextClassification(text_column='TEXT', label_column='MY_LABEL_COLUMN_1_OR_0')).rename_column('labels', 'label') ``` ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - PyArrow version: 10.0.1 - Pandas version: 1.5.2 - `transformers` version: 4.26.0.dev0 - Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.11.1 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.11.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in>
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5419/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5419/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5418
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5418/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5418/comments
https://api.github.com/repos/huggingface/datasets/issues/5418/events
https://github.com/huggingface/datasets/issues/5418
1,530,111,184
I_kwDODunzps5bM6TQ
5,418
Add ProgressBar for `to_parquet`
{ "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false }
[ { "login": "zanussbaum", "id": 33707069, "node_id": "MDQ6VXNlcjMzNzA3MDY5", "avatar_url": "https://avatars.githubusercontent.com/u/33707069?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zanussbaum", "html_url": "https://github.com/zanussbaum", "followers_url": "https://api.github.com/users/zanussbaum/followers", "following_url": "https://api.github.com/users/zanussbaum/following{/other_user}", "gists_url": "https://api.github.com/users/zanussbaum/gists{/gist_id}", "starred_url": "https://api.github.com/users/zanussbaum/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zanussbaum/subscriptions", "organizations_url": "https://api.github.com/users/zanussbaum/orgs", "repos_url": "https://api.github.com/users/zanussbaum/repos", "events_url": "https://api.github.com/users/zanussbaum/events{/privacy}", "received_events_url": "https://api.github.com/users/zanussbaum/received_events", "type": "User", "site_admin": false } ]
null
4
"2023-01-12T05:06:20"
"2023-01-24T18:18:24"
"2023-01-24T18:18:24"
CONTRIBUTOR
null
### Feature request Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works. ### Motivation It's a bit frustrating not to know how long a dataset will take to write to a file, or whether it's stuck, without a progress bar. ### Your contribution Sure, I can help if needed
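Until such a built-in progress bar exists, a rough user-side sketch (not the requested feature itself) is to write the Parquet file batch by batch behind a manual `tqdm` bar; the toy dataset, output path and batch size below are arbitrary:

```python
import pyarrow as pa
import pyarrow.parquet as pq
from tqdm import tqdm
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})
batch_size = 10_000

writer = None
for start in tqdm(range(0, len(ds), batch_size), desc="to_parquet"):
    # Slicing a Dataset returns a dict of lists, which Arrow can ingest directly
    batch = pa.Table.from_pydict(ds[start : start + batch_size])
    if writer is None:
        writer = pq.ParquetWriter("out.parquet", batch.schema)
    writer.write_table(batch)
if writer is not None:
    writer.close()
```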
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5418/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5415
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5415/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5415/comments
https://api.github.com/repos/huggingface/datasets/issues/5415/events
https://github.com/huggingface/datasets/issues/5415
1,526,904,861
I_kwDODunzps5bArgd
5,415
RuntimeError: Sharding is ambiguous for this dataset
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-10T07:36:11"
"2023-01-18T14:09:04"
"2023-01-18T14:09:03"
MEMBER
null
### Describe the bug When loading some datasets, a RuntimeError is raised. For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3 ``` .../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size) 1415 fpath = path_join(self._output_dir, fname) 1416 -> 1417 num_input_shards = _number_of_shards_in_gen_kwargs(split_generator.gen_kwargs) 1418 if num_input_shards <= 1 and num_proc is not None: 1419 logger.warning( .../huggingface/datasets/src/datasets/utils/sharding.py in _number_of_shards_in_gen_kwargs(gen_kwargs) 10 lists_lengths = {key: len(value) for key, value in gen_kwargs.items() if isinstance(value, list)} 11 if len(set(lists_lengths.values())) > 1: ---> 12 raise RuntimeError( 13 ( 14 "Sharding is ambiguous for this dataset: " RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key samples_paths has length 6 - key ids has length 7 - key verification_ids has length 6 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length. ``` This behavior was introduced when implementing multiprocessing by PR: - #5107 ### Steps to reproduce the bug ```python ds = load_dataset("ami", "microphone-single", split="train", revision="2d7620bb7c3f1aab9f329615c3bdb598069d907a") ``` ### Expected behavior No error raised. ### Environment info Since datasets 2.7.0
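A hedged illustration of what the error message asks dataset-script authors to do: keep exactly one list in `gen_kwargs` (the data sources to parallelize over) and pass the other sequences as tuples, which the sharding helper ignores. The keys come from the error message above; the values are invented:

```python
# Inside a dataset script's _split_generators(), roughly:
gen_kwargs = {
    "samples_paths": ["shard_0.tar", "shard_1.tar"],  # the only list -> used for sharding
    "ids": ("ids_a.txt", "ids_b.txt", "ids_c.txt"),   # tuple -> ignored by the shard count
    "verification_ids": ("ver_a.txt", "ver_b.txt"),   # tuple -> ignored by the shard count
}
```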
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5415/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5415/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5414/comments
https://api.github.com/repos/huggingface/datasets/issues/5414/events
https://github.com/huggingface/datasets/issues/5414
1,525,733,818
I_kwDODunzps5a8Nm6
5,414
Sharding error with Multilingual LibriSpeech
{ "login": "Nithin-Holla", "id": 19574344, "node_id": "MDQ6VXNlcjE5NTc0MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/19574344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Nithin-Holla", "html_url": "https://github.com/Nithin-Holla", "followers_url": "https://api.github.com/users/Nithin-Holla/followers", "following_url": "https://api.github.com/users/Nithin-Holla/following{/other_user}", "gists_url": "https://api.github.com/users/Nithin-Holla/gists{/gist_id}", "starred_url": "https://api.github.com/users/Nithin-Holla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Nithin-Holla/subscriptions", "organizations_url": "https://api.github.com/users/Nithin-Holla/orgs", "repos_url": "https://api.github.com/users/Nithin-Holla/repos", "events_url": "https://api.github.com/users/Nithin-Holla/events{/privacy}", "received_events_url": "https://api.github.com/users/Nithin-Holla/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
4
"2023-01-09T14:45:31"
"2023-01-18T14:09:04"
"2023-01-18T14:09:04"
NONE
null
### Describe the bug Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace: ``` Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/2.1.0/1904af50f57a5c370c9364cc337699cfe496d4e9edcae6648a96be23086362d0... Downloading data files: 100% 3/3 [00:00<00:00, 107.23it/s] Downloading data files: 100% 1/1 [00:00<00:00, 35.08it/s] Downloading data files: 100% 6/6 [00:00<00:00, 303.36it/s] Downloading data files: 100% 3/3 [00:00<00:00, 130.37it/s] Downloading data files: 100% 1049/1049 [00:00<00:00, 4491.40it/s] Downloading data files: 100% 37/37 [00:00<00:00, 1096.78it/s] Downloading data files: 100% 40/40 [00:00<00:00, 1003.93it/s] Extracting data files: 100% 3/3 [00:11<00:00, 2.62s/it] Generating train split: 469942/0 [34:13<00:00, 273.21 examples/s] Output exceeds the size limit. Open the full output data in a text editor --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-14-74fa6d092bdc> in <module> ----> 1 mls = load_dataset(MLS_DATASET, 2 LANGUAGE, 3 cache_dir="~/datadrive/cache/huggingface/datasets", 4 ignore_verifications=True) /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1755 1756 # Download and prepare data -> 1757 builder_instance.download_and_prepare( 1758 download_config=download_config, 1759 download_mode=download_mode, /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 858 if num_proc is not None: 859 prepare_split_kwargs["num_proc"] = num_proc --> 860 self._download_and_prepare( 861 dl_manager=dl_manager, 862 verify_infos=verify_infos, /anaconda/envs/py38_default/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs) 1609 1610 def _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs): ... RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_archives has length 1049 - key local_extracted_archive has length 1049 - key limited_ids_paths has length 1 To fix this, check the 'gen_kwargs' and make sure to use lists only for data sources, and use tuples otherwise. In the end there should only be one single list, or several lists with the same length. ``` ### Steps to reproduce the bug Here is the code to reproduce it: ```python from datasets import load_dataset MLS_DATASET = "facebook/multilingual_librispeech" LANGUAGE = "german" mls = load_dataset(MLS_DATASET, LANGUAGE, cache_dir="~/datadrive/cache/huggingface/datasets", ignore_verifications=True) ``` ### Expected behavior The expected behaviour is that the dataset is successfully loaded. 
### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-1094-azure-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 10.0.1 - Pandas version: 1.2.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5414/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5414/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5413/comments
https://api.github.com/repos/huggingface/datasets/issues/5413/events
https://github.com/huggingface/datasets/issues/5413
1,524,591,837
I_kwDODunzps5a32zd
5,413
concatenate_datasets fails when two datasets with shards > 1 have unequal shard numbers
{ "login": "ZeguanXiao", "id": 38279341, "node_id": "MDQ6VXNlcjM4Mjc5MzQx", "avatar_url": "https://avatars.githubusercontent.com/u/38279341?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ZeguanXiao", "html_url": "https://github.com/ZeguanXiao", "followers_url": "https://api.github.com/users/ZeguanXiao/followers", "following_url": "https://api.github.com/users/ZeguanXiao/following{/other_user}", "gists_url": "https://api.github.com/users/ZeguanXiao/gists{/gist_id}", "starred_url": "https://api.github.com/users/ZeguanXiao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ZeguanXiao/subscriptions", "organizations_url": "https://api.github.com/users/ZeguanXiao/orgs", "repos_url": "https://api.github.com/users/ZeguanXiao/repos", "events_url": "https://api.github.com/users/ZeguanXiao/events{/privacy}", "received_events_url": "https://api.github.com/users/ZeguanXiao/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-08T17:01:52"
"2023-01-26T09:27:21"
"2023-01-26T09:27:21"
NONE
null
### Describe the bug When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails: ``` File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 5499, in _concatenate_map_style_datasets table = concat_tables([dset._data for dset in dsets], axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1778, in concat_tables return ConcatenationTable.from_tables(tables, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1483, in from_tables blocks = _extend_blocks(blocks, table_blocks, axis=axis) File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/table.py", line 1477, in _extend_blocks result[i].extend(row_blocks) IndexError: list index out of range ``` ### Steps to reproduce the bug dataset = concatenate_datasets([dataset1, dataset2], axis = 1) ### Expected behavior The datasets are correctly concatenated. ### Environment info datasets==2.8.0
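A possible workaround sketch while this is being fixed, assuming both datasets have the same length and distinct column names: copy the columns over with `add_column` instead of concatenating along `axis=1`. The toy datasets below are only for illustration and do not reproduce the sharded setup:

```python
from datasets import Dataset

dataset1 = Dataset.from_dict({"a": list(range(10))})
dataset2 = Dataset.from_dict({"b": list(range(10, 20))})

# Append each column of dataset2 to dataset1 one by one
merged = dataset1
for name in dataset2.column_names:
    merged = merged.add_column(name, dataset2[name])

print(merged)
```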
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5413/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5412
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5412/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5412/comments
https://api.github.com/repos/huggingface/datasets/issues/5412/events
https://github.com/huggingface/datasets/issues/5412
1,524,250,269
I_kwDODunzps5a2jad
5,412
load_dataset() cannot find dataset_info.json with multiple training runs in parallel
{ "login": "destigres", "id": 7139344, "node_id": "MDQ6VXNlcjcxMzkzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/7139344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/destigres", "html_url": "https://github.com/destigres", "followers_url": "https://api.github.com/users/destigres/followers", "following_url": "https://api.github.com/users/destigres/following{/other_user}", "gists_url": "https://api.github.com/users/destigres/gists{/gist_id}", "starred_url": "https://api.github.com/users/destigres/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/destigres/subscriptions", "organizations_url": "https://api.github.com/users/destigres/orgs", "repos_url": "https://api.github.com/users/destigres/repos", "events_url": "https://api.github.com/users/destigres/events{/privacy}", "received_events_url": "https://api.github.com/users/destigres/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2023-01-08T00:44:32"
"2023-01-19T20:28:43"
"2023-01-19T20:28:43"
NONE
null
### Describe the bug I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error. If there is a workaround to ignore the cache I think that would solve my problem too. I am using datasets version 2.8.0. ### Steps to reproduce the bug 1. Start training run of GPU 0 loading dataset from ``` load_dataset( "json", data_files=tr_dataset_path, split=f"train", download_mode="force_redownload", ) ``` 2. While GPU 0 is training, start an identical run on GPU 1. GPU 1 will produce the following error: ``` Traceback (most recent call last): File "/local-scratch1/data/mt/code/qq/train.py", line 198, in <module> main() File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1130, in __call__ return self.main(*args, **kwargs) File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1055, in main rv = self.invoke(ctx) File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/username/.local/lib/python3.8/site-packages/click/core.py", line 760, in invoke return __callback(*args, **kwargs) File "/local-scratch1/data/mt/code/qq/train.py", line 113, in main load_dataset( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1734, in load_dataset builder_instance = load_dataset_builder( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/load.py", line 1518, in load_dataset_builder builder_instance: DatasetBuilder = builder_cls( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/builder.py", line 366, in __init__ self.info = DatasetInfo.from_directory(self._cache_dir) File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/datasets/info.py", line 313, in from_directory with fs.open(path_join(dataset_info_dir, config.DATASET_INFO_FILENAME), "r", encoding="utf-8") as f: File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1094, in open self.open( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/spec.py", line 1106, in open f = self._open( File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 175, in _open return LocalFileOpener(path, mode, fs=self, **kwargs) File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 273, in __init__ self._open() File "/home/username/miniconda3/envs/qq3/lib/python3.8/site-packages/fsspec/implementations/local.py", line 278, in _open self.f = open(self.path, mode=self.mode) FileNotFoundError: [Errno 2] No such file or directory: '/home/username/.cache/huggingface/datasets/json/default-43d06a4aedb25e6d/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51/dataset_info.json' ``` ### Expected behavior Expected behavior: 2nd GPU training run should run the same as 1st GPU training run. ### Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 2.8.0 - Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.10 - Python version: 3.8.15 - PyArrow version: 9.0.0 - Pandas version: 1.5.2
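One workaround sketch, under the assumption that the failure comes from the two runs racing on the same cache directory: give each run its own `cache_dir`. The run-id source and the placeholder path below are invented:

```python
import os
from datasets import load_dataset

tr_dataset_path = "data/train.jsonl"  # placeholder for the real path
run_id = os.environ.get("CUDA_VISIBLE_DEVICES", "0")  # anything unique per run works

train_ds = load_dataset(
    "json",
    data_files=tr_dataset_path,
    split="train",
    cache_dir=f"./hf_cache/run_{run_id}",  # separate cache per training run
    download_mode="force_redownload",
)
```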
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5412/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5408
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5408/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5408/comments
https://api.github.com/repos/huggingface/datasets/issues/5408/events
https://github.com/huggingface/datasets/issues/5408
1,519,890,752
I_kwDODunzps5al7FA
5,408
dataset map function could not be hashed properly
{ "login": "Tungway1990", "id": 68179274, "node_id": "MDQ6VXNlcjY4MTc5Mjc0", "avatar_url": "https://avatars.githubusercontent.com/u/68179274?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tungway1990", "html_url": "https://github.com/Tungway1990", "followers_url": "https://api.github.com/users/Tungway1990/followers", "following_url": "https://api.github.com/users/Tungway1990/following{/other_user}", "gists_url": "https://api.github.com/users/Tungway1990/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tungway1990/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tungway1990/subscriptions", "organizations_url": "https://api.github.com/users/Tungway1990/orgs", "repos_url": "https://api.github.com/users/Tungway1990/repos", "events_url": "https://api.github.com/users/Tungway1990/events{/privacy}", "received_events_url": "https://api.github.com/users/Tungway1990/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-01-05T01:59:59"
"2023-01-06T13:22:19"
"2023-01-06T13:22:18"
NONE
null
### Describe the bug I follow the [blog post](https://huggingface.co/blog/fine-tune-whisper#building-a-demo) to finetune a Cantonese transcribe model. When using map function to prepare dataset, following warning pop out: `common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)` > Parameter 'function'=<function prepare_dataset at 0x000001D1D9D79A60> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. I read https://github.com/huggingface/datasets/issues/4521 and https://github.com/huggingface/datasets/issues/3178 but cannot solve the issue. ### Steps to reproduce the bug ```python from datasets import load_dataset, DatasetDict common_voice = DatasetDict() common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "zh-HK", split="train+validation") common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "zh-HK", split="test") common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"]) from transformers import WhisperFeatureExtractor, WhisperTokenizer, WhisperProcessor feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small") tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="chinese", task="transcribe") processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="chinese", task="transcribe") from datasets import Audio common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000)) def prepare_dataset(batch): # load and resample audio data from 48 to 16kHz audio = batch["audio"] # compute log-Mel input features from input audio array batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0] # encode target text to label ids batch["labels"] = tokenizer(batch["sentence"]).input_ids return batch common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1) ``` ### Expected behavior Should be no warning shown. ### Environment info - `datasets` version: 2.7.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.9.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5 - dill version: 0.3.4 - multiprocess version: 0.70.12.2
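A small diagnostic sketch, assuming the warning comes from `datasets`' dill-based fingerprinting: hashing the mapped function directly with `datasets.fingerprint.Hasher` shows whether it can be hashed at all, and if this raises an error the warning above is expected. The trivial function below stands in for the real `prepare_dataset`:

```python
from datasets.fingerprint import Hasher

def prepare_dataset(batch):
    # stand-in for the real preprocessing function (which closes over the
    # feature extractor and tokenizer)
    return batch

print(Hasher.hash(prepare_dataset))
```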
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5408/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5408/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5407/comments
https://api.github.com/repos/huggingface/datasets/issues/5407/events
https://github.com/huggingface/datasets/issues/5407
1,519,797,345
I_kwDODunzps5alkRh
5,407
Datasets.from_sql() generates deprecation warning
{ "login": "msummerfield", "id": 21002157, "node_id": "MDQ6VXNlcjIxMDAyMTU3", "avatar_url": "https://avatars.githubusercontent.com/u/21002157?v=4", "gravatar_id": "", "url": "https://api.github.com/users/msummerfield", "html_url": "https://github.com/msummerfield", "followers_url": "https://api.github.com/users/msummerfield/followers", "following_url": "https://api.github.com/users/msummerfield/following{/other_user}", "gists_url": "https://api.github.com/users/msummerfield/gists{/gist_id}", "starred_url": "https://api.github.com/users/msummerfield/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/msummerfield/subscriptions", "organizations_url": "https://api.github.com/users/msummerfield/orgs", "repos_url": "https://api.github.com/users/msummerfield/repos", "events_url": "https://api.github.com/users/msummerfield/events{/privacy}", "received_events_url": "https://api.github.com/users/msummerfield/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-05T00:43:17"
"2023-01-06T10:59:14"
"2023-01-06T10:59:14"
NONE
null
### Describe the bug Calling `Datasets.from_sql()` generates a warning: `.../site-packages/datasets/builder.py:712: FutureWarning: 'use_auth_token' was deprecated in version 2.7.1 and will be removed in 3.0.0. Pass 'use_auth_token' to the initializer/'load_dataset_builder' instead.` ### Steps to reproduce the bug Any valid call to `Datasets.from_sql()` will produce the deprecation warning. ### Expected behavior No warning. The fix should be simply to remove the parameter `use_auth_token` from the call to `builder.download_and_prepare()` at line 43 of `io/sql.py` (it is set to `None` anyway, and is not needed). ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-4.15.0-169-generic-x86_64-with-glibc2.27 - Python version: 3.9.15 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5407/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5406
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5406/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5406/comments
https://api.github.com/repos/huggingface/datasets/issues/5406/events
https://github.com/huggingface/datasets/issues/5406
1,519,140,544
I_kwDODunzps5ajD7A
5,406
[2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str`
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
9
"2023-01-04T15:10:04"
"2023-02-02T13:03:14"
null
MEMBER
null
`datasets` 2.6.1 and 2.7.0 stopped supporting datasets like IMDB, CoNLL or MNIST. When loading a dataset using 2.6.1 or 2.7.0, you may see this error when loading certain datasets: ```python TypeError: can only concatenate str (not "int") to str ``` This is because we started to update the metadata of those datasets to a format that is not supported in 2.6.1 and 2.7.0. This change is required or those datasets won't be supported by the Hugging Face Hub. Therefore, if you encounter this error or if you're using `datasets` 2.6.1 or 2.7.0, we encourage you to update to a newer version. For example, versions 2.6.2 and 2.7.1 patch this issue: ```bash pip install -U datasets ``` All the affected datasets are the ones with a ClassLabel feature type and YAML "dataset_info" metadata. More info [here](https://github.com/huggingface/datasets/issues/5275). We apologize for the inconvenience.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5406/reactions", "total_count": 5, "+1": 5, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5406/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5507/comments
https://api.github.com/repos/huggingface/datasets/issues/5507/events
https://github.com/huggingface/datasets/issues/5507
1,572,667,036
I_kwDODunzps5dvP6c
5,507
Optimise behaviour in respect to indices mapping
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-02-06T14:25:55"
"2023-02-06T14:25:55"
null
CONTRIBUTOR
null
_Originally [posted](https://huggingface.slack.com/archives/C02V51Q3800/p1675443873878489?thread_ts=1675418893.373479&cid=C02V51Q3800) on Slack_ Considering all this, perhaps for Datasets 3.0, we can do the following: * have `continuous=True` by default in `.shard` (requested in the survey and makes more sense for us since it doesn't create an indices mapping) * allow calling `save_to_disk` on "unflattened" datasets * remove "hidden" expensive calls in `save_to_disk`, `unique`, `concatenate_datasets`, etc. For instance, instead of silently calling `flatten_indices` where it's needed, it's probably better to be explicit (considering how expensive these ops can be) and raise an error instead
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5507/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5506/comments
https://api.github.com/repos/huggingface/datasets/issues/5506/events
https://github.com/huggingface/datasets/issues/5506
1,571,838,641
I_kwDODunzps5dsFqx
5,506
IterableDataset and Dataset return different batch sizes when using Trainer with multiple GPUs
{ "login": "kheyer", "id": 38166299, "node_id": "MDQ6VXNlcjM4MTY2Mjk5", "avatar_url": "https://avatars.githubusercontent.com/u/38166299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kheyer", "html_url": "https://github.com/kheyer", "followers_url": "https://api.github.com/users/kheyer/followers", "following_url": "https://api.github.com/users/kheyer/following{/other_user}", "gists_url": "https://api.github.com/users/kheyer/gists{/gist_id}", "starred_url": "https://api.github.com/users/kheyer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kheyer/subscriptions", "organizations_url": "https://api.github.com/users/kheyer/orgs", "repos_url": "https://api.github.com/users/kheyer/repos", "events_url": "https://api.github.com/users/kheyer/events{/privacy}", "received_events_url": "https://api.github.com/users/kheyer/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-02-06T03:26:03"
"2023-02-06T03:26:03"
null
NONE
null
### Describe the bug I am training a Roberta model using 2 GPUs and the `Trainer` API with a batch size of 256. Initially I used a standard `Dataset`, but had issues with slow data loading. After reading [this issue](https://github.com/huggingface/datasets/issues/2252), I swapped to loading my dataset as contiguous shards and passing those to an `IterableDataset`. I observed an unexpected drop in GPU memory utilization, and found the batch size returned from the model had been cut in half. When using `Trainer` with 2 GPUs and a batch size of 256, `Dataset` returns a batch of size 512 (256 per GPU), while `IterableDataset` returns a batch size of 256 (256 total). My guess is `IterableDataset` isn't accounting for multiple cards. ### Steps to reproduce the bug ```python import datasets from datasets import IterableDataset from transformers import RobertaConfig from transformers import RobertaTokenizerFast from transformers import RobertaForMaskedLM from transformers import DataCollatorForLanguageModeling from transformers import Trainer, TrainingArguments use_iterable_dataset = True def gen_from_shards(shards): for shard in shards: for example in shard: yield example dataset = datasets.load_from_disk('my_dataset.hf') if use_iterable_dataset: n_shards = 100 shards = [dataset.shard(num_shards=n_shards, index=i) for i in range(n_shards)] dataset = IterableDataset.from_generator(gen_from_shards, gen_kwargs={"shards": shards}) tokenizer = RobertaTokenizerFast.from_pretrained("./my_tokenizer", max_len=160, use_fast=True) config = RobertaConfig( vocab_size=8248, max_position_embeddings=256, num_attention_heads=8, num_hidden_layers=6, type_vocab_size=1) model = RobertaForMaskedLM(config=config) data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments( per_device_train_batch_size=256 # other args removed for brevity ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, ) trainer.train() ``` ### Expected behavior Expected `Dataset` and `IterableDataset` to have the same batch size behavior. If the current behavior is intentional, the batch size printout at the start of training should be updated. Currently, both dataset classes result in `Trainer` printing the same total batch size, even though the batch size sent to the GPUs are different. ### Environment info datasets 2.7.1 transformers 4.25.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5506/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5506/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5505/comments
https://api.github.com/repos/huggingface/datasets/issues/5505/events
https://github.com/huggingface/datasets/issues/5505
1,571,720,814
I_kwDODunzps5dro5u
5,505
PyTorch BatchSampler still loads from Dataset one-by-one
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-02-06T01:14:55"
"2023-02-06T01:14:55"
null
NONE
null
### Describe the bug In [the docs here](https://huggingface.co/docs/datasets/use_with_pytorch#use-a-batchsampler), it mentions the issue of the Dataset being read one-by-one, then states that using a BatchSampler resolves the issue. I'm not sure if this is a mistake in the docs or the code, but it seems that the only way for a Dataset to be passed a list of indices by PyTorch (instead of one index at a time) is to define a `__getitems__` method (note the plural) on the Dataset object, and since the HF Dataset doesn't have this, PyTorch executes [this line of code](https://github.com/pytorch/pytorch/blob/master/torch/utils/data/_utils/fetch.py#L58), reverting to fetching one-by-one. ### Steps to reproduce the bug You can put a breakpoint in `Dataset.__getitem__()` or just print the args from there and see that it's called multiple times for a single `next(iter(dataloader))`, even when using the code from the docs: ```py from torch.utils.data import DataLoader from torch.utils.data.sampler import BatchSampler, RandomSampler batch_sampler = BatchSampler(RandomSampler(ds), batch_size=32, drop_last=False) dataloader = DataLoader(ds, batch_sampler=batch_sampler) ``` ### Expected behavior The expected behaviour would be for it to fetch batches from the dataset, rather than one-by-one. To demonstrate that there is room for improvement: once I have a HF dataset `ds`, if I just add this line: ```py ds.__getitems__ = ds.__getitem__ ``` ...then the time taken to loop over the dataset improves considerably (for wikitext-103, from one minute to 13 seconds with batch size 32). Probably not a big deal in the grand scheme of things, but seems like an easy win. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5505/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5505/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5500/comments
https://api.github.com/repos/huggingface/datasets/issues/5500/events
https://github.com/huggingface/datasets/issues/5500
1,569,257,240
I_kwDODunzps5diPcY
5,500
WMT19 custom download checksum error
{ "login": "Hannibal046", "id": 38466901, "node_id": "MDQ6VXNlcjM4NDY2OTAx", "avatar_url": "https://avatars.githubusercontent.com/u/38466901?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Hannibal046", "html_url": "https://github.com/Hannibal046", "followers_url": "https://api.github.com/users/Hannibal046/followers", "following_url": "https://api.github.com/users/Hannibal046/following{/other_user}", "gists_url": "https://api.github.com/users/Hannibal046/gists{/gist_id}", "starred_url": "https://api.github.com/users/Hannibal046/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Hannibal046/subscriptions", "organizations_url": "https://api.github.com/users/Hannibal046/orgs", "repos_url": "https://api.github.com/users/Hannibal046/repos", "events_url": "https://api.github.com/users/Hannibal046/events{/privacy}", "received_events_url": "https://api.github.com/users/Hannibal046/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2023-02-03T05:45:37"
"2023-02-03T05:52:56"
"2023-02-03T05:52:56"
NONE
null
### Describe the bug I use the following scripts to download data from WMT19: ```python import datasets from datasets import inspect_dataset, load_dataset_builder from wmt19.wmt_utils import _TRAIN_SUBSETS,_DEV_SUBSETS ## this is a must due to: https://discuss.huggingface.co/t/load-dataset-hangs-with-local-files/28034/3 if __name__ == '__main__': dev_subsets,train_subsets = [],[] for subset in _TRAIN_SUBSETS: if subset.target=='en' and 'de' in subset.sources: train_subsets.append(subset.name) for subset in _DEV_SUBSETS: if subset.target=='en' and 'de' in subset.sources: dev_subsets.append(subset.name) inspect_dataset("wmt19", "./wmt19") builder = load_dataset_builder( "./wmt19/wmt_utils.py", language_pair=("de", "en"), subsets={ datasets.Split.TRAIN: train_subsets, datasets.Split.VALIDATION: dev_subsets, }, ) builder.download_and_prepare() ds = builder.as_dataset() ds.to_json("../data/wmt19/ende/data.json") ``` And I got the following error: ``` Traceback (most recent call last): | 0/2 [00:00<?, ?obj/s] File "draft.py", line 26, in <module> builder.download_and_prepare() | 0/1 [00:00<?, ?obj/s] File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 605, in download_and_prepare self._download_and_prepare(%| | 0/1 [00:00<?, ?obj/s] File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 1104, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) | 0/1 [00:00<?, ?obj/s] File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/builder.py", line 676, in _download_and_prepare verify_checksums(s #13: 0%| | 0/1 [00:00<?, ?obj/s] File "/Users/hannibal046/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 35, in verify_checksums raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums))) | 0/1 [00:00<?, ?obj/s] datasets.utils.info_utils.UnexpectedDownloadedFile: {'https://s3.amazonaws.com/web-language-models/paracrawl/release1/paracrawl-release1.en-de.zipporah0-dedup-clean.tgz', 'https://huggingface.co/datasets/wmt/wmt13/resolve/main-zip/training-parallel-europarl-v7.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/rapid2016.zip', 'https://huggingface.co/datasets/wmt/wmt18/resolve/main-zip/translation-task/training-parallel-nc-v13.zip', 'https://huggingface.co/datasets/wmt/wmt17/resolve/main-zip/translation-task/training-parallel-nc-v12.zip', 'https://huggingface.co/datasets/wmt/wmt14/resolve/main-zip/training-parallel-nc-v9.zip', 'https://huggingface.co/datasets/wmt/wmt15/resolve/main-zip/training-parallel-nc-v10.zip', 'https://huggingface.co/datasets/wmt/wmt16/resolve/main-zip/translation-task/training-parallel-nc-v11.zip'} ``` ### Steps to reproduce the bug see above ### Expected behavior download data successfully ### Environment info datasets==2.1.0 python==3.8
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5500/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5500/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5499/comments
https://api.github.com/repos/huggingface/datasets/issues/5499/events
https://github.com/huggingface/datasets/issues/5499
1,568,937,026
I_kwDODunzps5dhBRC
5,499
`load_dataset` has ~4 seconds of overhead for cached data
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2023-02-02T23:34:50"
"2023-02-02T23:34:50"
null
NONE
null
### Feature request When loading a dataset that has been cached locally, the `load_dataset` function takes a lot longer than it should to fetch the dataset from disk (or memory). This is particularly noticeable for smaller datasets. For example, for wikitext-2, comparing `load_dataset` (once cached) and `load_from_disk`, the `load_dataset` method takes 40 times longer. ⏱ 4.84s ⮜ load_dataset ⏱ 119ms ⮜ load_from_disk ### Motivation I assume this is doing something like checking for a newer version. If so, that's an age-old problem: do you make the user wait _every single time they load from cache_, or do you do something like load from cache always, _then_ check for a newer version and alert if they have stale data. The decision usually revolves around what percentage of the time the data will have been updated, and how dangerous old data is. For most datasets it's extremely unlikely that there will be a newer version on any given run, so 99% of the time this is just wasted time. Maybe you don't want to make that decision for all users, but at least having the _option_ to not wait for checks would be an improvement. ### Your contribution .
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5499/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5499/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5498/comments
https://api.github.com/repos/huggingface/datasets/issues/5498/events
https://github.com/huggingface/datasets/issues/5498
1,568,190,529
I_kwDODunzps5deLBB
5,498
TypeError: 'bool' object is not iterable when filtering a datasets.arrow_dataset.Dataset
{ "login": "vmuel", "id": 91255010, "node_id": "MDQ6VXNlcjkxMjU1MDEw", "avatar_url": "https://avatars.githubusercontent.com/u/91255010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vmuel", "html_url": "https://github.com/vmuel", "followers_url": "https://api.github.com/users/vmuel/followers", "following_url": "https://api.github.com/users/vmuel/following{/other_user}", "gists_url": "https://api.github.com/users/vmuel/gists{/gist_id}", "starred_url": "https://api.github.com/users/vmuel/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vmuel/subscriptions", "organizations_url": "https://api.github.com/users/vmuel/orgs", "repos_url": "https://api.github.com/users/vmuel/repos", "events_url": "https://api.github.com/users/vmuel/events{/privacy}", "received_events_url": "https://api.github.com/users/vmuel/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-02-02T14:46:49"
"2023-02-04T17:19:37"
"2023-02-04T17:19:36"
NONE
null
### Describe the bug Hi, Thanks for the amazing work on the library! **Describe the bug** I think I might have noticed a small bug in the filter method. Having loaded a dataset using `load_dataset`, when I try to filter out empty entries with `batched=True`, I get a TypeError. ### Steps to reproduce the bug ``` train_dataset = train_dataset.filter( function=lambda example: example["image"] is not None, batched=True, batch_size=10) ``` Error message: ``` File .../lib/python3.9/site-packages/datasets/fingerprint.py:480, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 476 validate_fingerprint(kwargs[fingerprint_name]) 478 # Call actual function --> 480 out = func(self, *args, **kwargs) ... -> 5666 indices_array = [i for i, to_keep in zip(indices, mask) if to_keep] 5667 if indices_mapping is not None: 5668 indices_array = pa.array(indices_array, type=pa.uint64()) TypeError: 'bool' object is not iterable ``` **Removing batched=True allows bypassing the issue.** ### Expected behavior According to the doc, "[batch_size corresponds to the] number of examples per batch provided to function if batched = True", so we shouldn't need to remove the batched=True arg? source: https://huggingface.co/docs/datasets/v2.9.0/en/package_reference/main_classes#datasets.Dataset.filter ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.4.0-122-generic-x86_64-with-glibc2.31 - Python version: 3.9.10 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5498/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5498/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5496/comments
https://api.github.com/repos/huggingface/datasets/issues/5496/events
https://github.com/huggingface/datasets/issues/5496
1,567,301,765
I_kwDODunzps5dayCF
5,496
Add a `reduce` method
{ "login": "zhangir-azerbayev", "id": 59542043, "node_id": "MDQ6VXNlcjU5NTQyMDQz", "avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhangir-azerbayev", "html_url": "https://github.com/zhangir-azerbayev", "followers_url": "https://api.github.com/users/zhangir-azerbayev/followers", "following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}", "gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}", "starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions", "organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs", "repos_url": "https://api.github.com/users/zhangir-azerbayev/repos", "events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}", "received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2023-02-02T04:30:22"
"2023-02-03T14:11:32"
null
NONE
null
### Feature request Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`. ### Motivation A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average line length of a code dataset. ### Your contribution I haven't contributed to `datasets` before, but I don't expect this will be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack over the weekend.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5496/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5495/comments
https://api.github.com/repos/huggingface/datasets/issues/5495/events
https://github.com/huggingface/datasets/issues/5495
1,566,803,452
I_kwDODunzps5dY4X8
5,495
to_tf_dataset fails with datetime UTC columns even if not included in columns argument
{ "login": "dwyatte", "id": 2512762, "node_id": "MDQ6VXNlcjI1MTI3NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwyatte", "html_url": "https://github.com/dwyatte", "followers_url": "https://api.github.com/users/dwyatte/followers", "following_url": "https://api.github.com/users/dwyatte/following{/other_user}", "gists_url": "https://api.github.com/users/dwyatte/gists{/gist_id}", "starred_url": "https://api.github.com/users/dwyatte/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dwyatte/subscriptions", "organizations_url": "https://api.github.com/users/dwyatte/orgs", "repos_url": "https://api.github.com/users/dwyatte/repos", "events_url": "https://api.github.com/users/dwyatte/events{/privacy}", "received_events_url": "https://api.github.com/users/dwyatte/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
2
"2023-02-01T20:47:33"
"2023-02-04T01:56:55"
null
NONE
null
### Describe the bug There appears to be some eager behavior in `to_tf_dataset` that runs against every column in a dataset even if they aren't included in the columns argument. This is problematic with datetime UTC columns due to them not working with zero copy. If I don't have UTC information in my datetime column, then everything works as expected. ### Steps to reproduce the bug ```python import numpy as np import pandas as pd from datasets import Dataset df = pd.DataFrame(np.random.rand(2, 1), columns=["x"]) # df["dt"] = pd.to_datetime(["2023-01-01", "2023-01-01"]) # works fine df["dt"] = pd.to_datetime(["2023-01-01 00:00:00.00000+00:00", "2023-01-01 00:00:00.00000+00:00"]) df.to_parquet("test.pq") ds = Dataset.from_parquet("test.pq") tf_ds = ds.to_tf_dataset(columns=["x"], batch_size=2, shuffle=True) ``` ``` ArrowInvalid Traceback (most recent call last) Cell In[1], line 12 8 df.to_parquet("test.pq") 11 ds = Dataset.from_parquet("test.pq") ---> 12 tf_ds = ds.to_tf_dataset(columns=["r"], batch_size=2, shuffle=True) File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:411, in TensorflowDatasetMixin.to_tf_dataset(self, batch_size, columns, shuffle, collate_fn, drop_remainder, collate_fn_args, label_cols, prefetch, num_workers) 407 dataset = self 409 # TODO(Matt, QL): deprecate the retention of label_ids and label --> 411 output_signature, columns_to_np_types = dataset._get_output_signature( 412 dataset, 413 collate_fn=collate_fn, 414 collate_fn_args=collate_fn_args, 415 cols_to_retain=cols_to_retain, 416 batch_size=batch_size if drop_remainder else None, 417 ) 419 if "labels" in output_signature: 420 if ("label_ids" in columns or "label" in columns) and "labels" not in columns: File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:254, in TensorflowDatasetMixin._get_output_signature(dataset, collate_fn, collate_fn_args, cols_to_retain, batch_size, num_test_batches) 252 for _ in range(num_test_batches): 253 indices = sample(range(len(dataset)), test_batch_size) --> 254 test_batch = dataset[indices] 255 if cols_to_retain is not None: 256 test_batch = {key: value for key, value in test_batch.items() if key in cols_to_retain} File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2590, in Dataset.__getitem__(self, key) 2588 def __getitem__(self, key): # noqa: F811 2589 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2590 return self._getitem( 2591 key, 2592 ) File ~/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2575, in Dataset._getitem(self, key, **kwargs) 2573 formatter = get_formatter(format_type, features=self.features, **format_kwargs) 2574 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2575 formatted_output = format_table( 2576 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2577 ) 2578 return formatted_output File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:634, in format_table(table, key, formatter, format_columns, output_all_columns) 632 python_formatter = PythonFormatter(features=None) 633 if format_columns is None: --> 634 return formatter(pa_table, query_type=query_type) 635 elif query_type == "column": 636 if key in format_columns: File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:410, in Formatter.__call__(self, pa_table, query_type) 408 return self.format_column(pa_table) 409 elif query_type == "batch": --> 410 return self.format_batch(pa_table) File ~/venv/lib/python3.8/site-packages/datasets/formatting/np_formatter.py:78, in NumpyFormatter.format_batch(self, pa_table) 77 def format_batch(self, pa_table: pa.Table) -> Mapping: ---> 78 batch = self.numpy_arrow_extractor().extract_batch(pa_table) 79 batch = self.python_features_decoder.decode_batch(batch) 80 batch = self.recursive_tensorize(batch) File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in NumpyArrowExtractor.extract_batch(self, pa_table) 163 def extract_batch(self, pa_table: pa.Table) -> dict: --> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names} File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:164, in <dictcomp>(.0) 163 def extract_batch(self, pa_table: pa.Table) -> dict: --> 164 return {col: self._arrow_array_to_numpy(pa_table[col]) for col in pa_table.column_names} File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:185, in NumpyArrowExtractor._arrow_array_to_numpy(self, pa_array) 181 else: 182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all( 183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks 184 ) --> 185 array: List = [ 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only) 187 ] 188 else: 189 if isinstance(pa_array.type, _ArrayXDExtensionType): 190 # don't call to_pylist() to preserve dtype of the fixed-size array File ~/venv/lib/python3.8/site-packages/datasets/formatting/formatting.py:186, in <listcomp>(.0) 181 else: 182 zero_copy_only = _is_zero_copy_only(pa_array.type) and all( 183 not _is_array_with_nulls(chunk) for chunk in pa_array.chunks 184 ) 185 array: List = [ --> 186 row for chunk in pa_array.chunks for row in chunk.to_numpy(zero_copy_only=zero_copy_only) 187 ] 188 else: 189 if isinstance(pa_array.type, _ArrayXDExtensionType): 190 # don't call to_pylist() to preserve dtype of the fixed-size array File ~/venv/lib/python3.8/site-packages/pyarrow/array.pxi:1475, in pyarrow.lib.Array.to_numpy() File ~/venv/lib/python3.8/site-packages/pyarrow/error.pxi:100, in pyarrow.lib.check_status() ArrowInvalid: Needed to copy 1 chunks with 0 nulls, but zero_copy_only was True ``` ### Expected behavior I think there are two potential issues/fixes 1. Proper handling of datetime UTC columns (perhaps there is something incorrect with zero copy handling here) 2. Not eagerly running against every column in a dataset when the columns argument of `to_tf_dataset` specifies a subset of columns (although I'm not sure if this is unavoidable) ### Environment info - `datasets` version: 2.9.0 - Platform: macOS-13.2-x86_64-i386-64bit - Python version: 3.8.12 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5495/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5495/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5494/comments
https://api.github.com/repos/huggingface/datasets/issues/5494/events
https://github.com/huggingface/datasets/issues/5494
1,566,655,348
I_kwDODunzps5dYUN0
5,494
Update audio installation doc page
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
open
false
null
[]
null
3
"2023-02-01T19:07:50"
"2023-02-02T13:11:58"
null
CONTRIBUTOR
null
Our [installation documentation page](https://huggingface.co/docs/datasets/installation#audio) says that one can use Datasets for mp3 only with `torchaudio<0.12`. `torchaudio>0.12` is actually supported too, but it requires a specific version of ffmpeg which is not easily installed on all Linux versions; there is a custom Ubuntu repo for it, and we have instructions in the code: https://github.com/huggingface/datasets/blob/main/src/datasets/features/audio.py#L327 So we should update the doc page. But first investigate [this issue](5488).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5494/reactions", "total_count": 3, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/5494/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5492/comments
https://api.github.com/repos/huggingface/datasets/issues/5492/events
https://github.com/huggingface/datasets/issues/5492
1,566,604,216
I_kwDODunzps5dYHu4
5,492
Push_to_hub in a pull request
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-02-01T18:32:14"
"2023-02-01T18:40:46"
null
MEMBER
null
Right now `ds.push_to_hub()` can push a dataset on `main` or on a new branch with `branch=`, but there is no way to open a pull request. Even passing `branch=refs/pr/x` doesn't seem to work: it tries to create a branch with that name. cc @nateraw It should be possible to tweak the use of `huggingface_hub` in `push_to_hub` to make it open a PR or push to an existing PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5492/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5488/comments
https://api.github.com/repos/huggingface/datasets/issues/5488/events
https://github.com/huggingface/datasets/issues/5488
1,565,025,262
I_kwDODunzps5dSGPu
5,488
Error loading MP3 files from CommonVoice
{ "login": "kradonneoh", "id": 110259722, "node_id": "U_kgDOBpJuCg", "avatar_url": "https://avatars.githubusercontent.com/u/110259722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kradonneoh", "html_url": "https://github.com/kradonneoh", "followers_url": "https://api.github.com/users/kradonneoh/followers", "following_url": "https://api.github.com/users/kradonneoh/following{/other_user}", "gists_url": "https://api.github.com/users/kradonneoh/gists{/gist_id}", "starred_url": "https://api.github.com/users/kradonneoh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kradonneoh/subscriptions", "organizations_url": "https://api.github.com/users/kradonneoh/orgs", "repos_url": "https://api.github.com/users/kradonneoh/repos", "events_url": "https://api.github.com/users/kradonneoh/events{/privacy}", "received_events_url": "https://api.github.com/users/kradonneoh/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
"2023-01-31T21:25:33"
"2023-02-01T15:28:56"
null
NONE
null
### Describe the bug When loading a CommonVoice dataset with `datasets==2.9.0` and `torchaudio>=0.12.0`, I get an error reading the audio arrays: ```python --------------------------------------------------------------------------- LibsndfileError Traceback (most recent call last) ~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3(self, path_or_file) 310 try: # try torchaudio anyway because sometimes it works (depending on the os and os packages installed) --> 311 array, sampling_rate = self._decode_mp3_torchaudio(path_or_file) 312 except RuntimeError: ~/.local/lib/python3.8/site-packages/datasets/features/audio.py in _decode_mp3_torchaudio(self, path_or_file) 351 --> 352 array, sampling_rate = torchaudio.load(path_or_file, format="mp3") 353 if self.sampling_rate and self.sampling_rate != sampling_rate: ~/.local/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 204 """ --> 205 with soundfile.SoundFile(filepath, "r") as file_: 206 if file_.format != "WAV" or normalize: ~/.local/lib/python3.8/site-packages/soundfile.py in __init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 654 format, subtype, endian) --> 655 self._file = self._open(file, mode_int, closefd) 656 if set(mode).issuperset('r+') and self.seekable(): ~/.local/lib/python3.8/site-packages/soundfile.py in _open(self, file, mode_int, closefd) 1212 err = _snd.sf_error(file_ptr) -> 1213 raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name)) 1214 if mode_int == _snd.SFM_WRITE: LibsndfileError: Error opening <_io.BytesIO object at 0x7fa539462090>: File contains data in an unknown format. ``` I assume this is because there's some issue with the mp3 decoding process. I've verified that I have `ffmpeg>=4` (on a Linux distro), which appears to be the fallback backend for `torchaudio,` (at least according to #4889). ### Steps to reproduce the bug ```python dataset = load_dataset("mozilla-foundation/common_voice_11_0", "be", split="train") dataset[0] ``` ### Expected behavior Similar behavior to `torchaudio<0.12.0`, which doesn't result in a `LibsndfileError` ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-5.15.0-52-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 10.0.1 - Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5488/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5487/comments
https://api.github.com/repos/huggingface/datasets/issues/5487/events
https://github.com/huggingface/datasets/issues/5487
1,564,480,121
I_kwDODunzps5dQBJ5
5,487
Incorrect filepath for dill module
{ "login": "avivbrokman", "id": 35349273, "node_id": "MDQ6VXNlcjM1MzQ5Mjcz", "avatar_url": "https://avatars.githubusercontent.com/u/35349273?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avivbrokman", "html_url": "https://github.com/avivbrokman", "followers_url": "https://api.github.com/users/avivbrokman/followers", "following_url": "https://api.github.com/users/avivbrokman/following{/other_user}", "gists_url": "https://api.github.com/users/avivbrokman/gists{/gist_id}", "starred_url": "https://api.github.com/users/avivbrokman/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/avivbrokman/subscriptions", "organizations_url": "https://api.github.com/users/avivbrokman/orgs", "repos_url": "https://api.github.com/users/avivbrokman/repos", "events_url": "https://api.github.com/users/avivbrokman/events{/privacy}", "received_events_url": "https://api.github.com/users/avivbrokman/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
5
"2023-01-31T15:01:08"
"2023-02-02T07:07:55"
null
NONE
null
### Describe the bug I installed the `datasets` package and when I try to `import` it, I get the following error: ``` Traceback (most recent call last): File "/var/folders/jt/zw5g74ln6tqfdzsl8tx378j00000gn/T/ipykernel_3805/3458380017.py", line 1, in <module> import datasets File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module> from .arrow_dataset import Dataset File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 66, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/arrow_writer.py", line 27, in <module> from .features import Features, Image, Value File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/__init__.py", line 17, in <module> from .audio import Audio File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/features/audio.py", line 12, in <module> from ..download.streaming_download_manager import xopen File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/__init__.py", line 9, in <module> from .download_manager import DownloadManager, DownloadMode File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/download/download_manager.py", line 36, in <module> from ..utils.py_utils import NestedDataStructure, map_nested, size_str File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 602, in <module> class Pickler(dill.Pickler): File "/Users/avivbrokman/opt/anaconda3/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 605, in Pickler dispatch = dill._dill.MetaCatchingDict(dill.Pickler.dispatch.copy()) AttributeError: module 'dill' has no attribute '_dill' ``` Looking at the github source code for dill, it appears that `datasets` has a bug or is not compatible with the latest `dill`. Specifically, rather than `dill._dill.XXXX` it should be `dill.dill._dill.XXXX`. But given the popularity of `datasets` I feel confused about me being the first person to have this issue, so it makes me wonder if I'm misdiagnosing the issue. ### Steps to reproduce the bug Install `dill` and `datasets` packages and then `import datasets` ### Expected behavior I expect `datasets` to import. ### Environment info - `datasets` version: 2.9.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.13 - PyArrow version: 11.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5487/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5487/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5486/comments
https://api.github.com/repos/huggingface/datasets/issues/5486/events
https://github.com/huggingface/datasets/issues/5486
1,564,059,749
I_kwDODunzps5dOahl
5,486
Adding `sep` to TextConfig
{ "login": "omar-araboghli", "id": 29576434, "node_id": "MDQ6VXNlcjI5NTc2NDM0", "avatar_url": "https://avatars.githubusercontent.com/u/29576434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/omar-araboghli", "html_url": "https://github.com/omar-araboghli", "followers_url": "https://api.github.com/users/omar-araboghli/followers", "following_url": "https://api.github.com/users/omar-araboghli/following{/other_user}", "gists_url": "https://api.github.com/users/omar-araboghli/gists{/gist_id}", "starred_url": "https://api.github.com/users/omar-araboghli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/omar-araboghli/subscriptions", "organizations_url": "https://api.github.com/users/omar-araboghli/orgs", "repos_url": "https://api.github.com/users/omar-araboghli/repos", "events_url": "https://api.github.com/users/omar-araboghli/events{/privacy}", "received_events_url": "https://api.github.com/users/omar-araboghli/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-01-31T10:39:53"
"2023-01-31T14:50:18"
null
NONE
null
I have a local `.txt` file that follows the `CONLL2003` format which I need to load using `load_script`. However, by using `sample_by='line'`, one can only split the dataset into lines without splitting each line into columns. Would it be reasonable to add a `sep` argument in combination with `sample_by='paragraph'` to parse a paragraph into an array for each column? If so, I am happy to contribute! ## Environment * `python 3.8.10` * `datasets 2.9.0` ## Snippet of `train.txt` ```txt Distribution NN O O and NN O O dynamics NN O O of NN O O electron NN O B-RP complexes NN O I-RP in NN O O cyanobacterial NN O B-R membranes NN O I-R The NN O O occurrence NN O O of NN O O prostaglandin NN O B-R F2α NN O I-R in NN O O Pharbitis NN O B-R seedlings NN O I-R grown NN O O under NN O O short NN O B-P days NN O I-P or NN O I-P days NN O I-P ``` ## Current Behaviour ```python # defining 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] here would fail with `ValueError: Length of names (4) does not match length of arrays (1)` dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='line') dataset['train']['tokens'][0] >>> 'Distribution\tNN\tO\tO' ``` ## Expected Behaviour / Suggestion ```python # suppose we defined 4 features ['tokens', 'pos_tags', 'chunk_tags', 'ner_tags'] dataset = datasets.load_dataset(path='text', features=features, data_files={'train': 'train.txt'}, sample_by='paragraph', sep='\t') dataset['train']['tokens'][0] >>> ['Distribution', 'and', 'dynamics', ... ] dataset['train']['ner_tags'][0] >>> ['O', 'O', 'O', ... ] ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5486/timeline
null
null
null
null
false
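Until a `sep` argument exists on `TextConfig`, a workaround along these lines is possible with the current API. This is only a sketch: it assumes the tab-separated, four-column layout from the snippet in the issue above, and the feature names (`tokens`, `pos_tags`, `chunk_tags`, `ner_tags`) are taken from that example rather than from any real configuration.

```python
# Workaround sketch (not the proposed `sep` API): read paragraphs with the existing
# `text` builder, then split each paragraph into per-column lists with `map`.
from datasets import load_dataset

ds = load_dataset("text", data_files={"train": "train.txt"}, sample_by="paragraph")

def split_columns(example):
    # One entry per token line; columns are assumed to be tab-separated.
    rows = [line.split("\t") for line in example["text"].splitlines() if line.strip()]
    return {
        "tokens": [r[0] for r in rows],
        "pos_tags": [r[1] for r in rows],
        "chunk_tags": [r[2] for r in rows],
        "ner_tags": [r[3] for r in rows],
    }

ds = ds.map(split_columns, remove_columns=["text"])
print(ds["train"][0]["tokens"][:3])  # e.g. ['Distribution', 'and', 'dynamics']
```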
https://api.github.com/repos/huggingface/datasets/issues/5483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5483/comments
https://api.github.com/repos/huggingface/datasets/issues/5483/events
https://github.com/huggingface/datasets/issues/5483
1,560,894,690
I_kwDODunzps5dCVzi
5,483
Unable to upload dataset
{ "login": "yuvalkirstain", "id": 57996478, "node_id": "MDQ6VXNlcjU3OTk2NDc4", "avatar_url": "https://avatars.githubusercontent.com/u/57996478?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuvalkirstain", "html_url": "https://github.com/yuvalkirstain", "followers_url": "https://api.github.com/users/yuvalkirstain/followers", "following_url": "https://api.github.com/users/yuvalkirstain/following{/other_user}", "gists_url": "https://api.github.com/users/yuvalkirstain/gists{/gist_id}", "starred_url": "https://api.github.com/users/yuvalkirstain/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yuvalkirstain/subscriptions", "organizations_url": "https://api.github.com/users/yuvalkirstain/orgs", "repos_url": "https://api.github.com/users/yuvalkirstain/repos", "events_url": "https://api.github.com/users/yuvalkirstain/events{/privacy}", "received_events_url": "https://api.github.com/users/yuvalkirstain/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2023-01-28T15:18:26"
"2023-01-29T08:09:49"
"2023-01-29T08:09:49"
NONE
null
### Describe the bug Uploading a simple dataset ends with an exception ### Steps to reproduce the bug I created a new conda env with python 3.10, pip installed datasets and: ```python >>> from datasets import load_dataset, load_from_disk, Dataset >>> d = Dataset.from_dict({"text": ["hello"] * 2}) >>> d.push_to_hub("ttt111") /home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_hf_folder.py:92: UserWarning: A token has been found in `/a/home/cc/students/cs/kirstain/.huggingface/token`. This is the old path where tokens were stored. The new location is `/home/olab/kirstain/.cache/huggingface/token` which is configurable using `HF_HOME` environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`. warnings.warn( Creating parquet from Arrow format: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 279.94ba/s] Upload 1 LFS files: 0%| | 0/1 [00:02<?, ?it/s] Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:04<?, ?it/s] Traceback (most recent call last): File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 264, in hf_raise_for_status response.raise_for_status() File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/requests/models.py", line 1021, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 403 Client Error: Forbidden for url: https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 334, in _inner_upload_lfs_object return _upload_lfs_object( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 391, in _upload_lfs_object lfs_upload( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 273, in lfs_upload _upload_single_part( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/lfs.py", line 305, in _upload_single_part hf_raise_for_status(upload_res) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_errors.py", line 318, in hf_raise_for_status raise HfHubHTTPError(str(e), response=response) from e huggingface_hub.utils._errors.HfHubHTTPError: 403 Client Error: Forbidden for url: 
https://s3.us-east-1.amazonaws.com/lfs.huggingface.co/repos/cf/0c/cf0c5ab8a3f729e5f57a8b79a36ecea64a31126f13218591c27ed9a1c7bd9b41/ece885a4bb6bbc8c1bb51b45542b805283d74590f72cd4c45d3ba76628570386?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGO27GPWFUO%2F20230128%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20230128T151640Z&X-Amz-Expires=900&X-Amz-Signature=89e78e9a9d70add7ed93d453334f4f93c6f29d889d46750a1f2da04af73978db&X-Amz-SignedHeaders=host&x-amz-storage-class=INTELLIGENT_TIERING&x-id=PutObject The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4909, in push_to_hub repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 4804, in _push_parquet_shards_to_hub _retry( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/file_utils.py", line 281, in _retry return func(*func_args, **func_kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn return fn(*args, **kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2537, in upload_file commit_info = self.create_commit( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn return fn(*args, **kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/hf_api.py", line 2346, in create_commit upload_lfs_files( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py", line 124, in _inner_fn return fn(*args, **kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 346, in upload_lfs_files thread_map( File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 94, in thread_map return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/contrib/concurrent.py", line 76, in _executor_map return list(tqdm_class(ex.map(fn, *iterables, **map_args), **kwargs)) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator yield _result_or_cancel(fs.pop()) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel return fut.result(timeout) File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 458, in result return self.__get_result() File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File 
"/home/olab/kirstain/anaconda3/envs/datasets/lib/python3.10/site-packages/huggingface_hub/_commit_api.py", line 338, in _inner_upload_lfs_object raise RuntimeError( RuntimeError: Error while uploading 'data/train-00000-of-00001-6df93048e66df326.parquet' to the Hub. ``` ### Expected behavior The dataset should be uploaded without any exceptions ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-4.15.0-65-generic-x86_64-with-glibc2.27 - Python version: 3.10.9 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5483/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5482/comments
https://api.github.com/repos/huggingface/datasets/issues/5482/events
https://github.com/huggingface/datasets/issues/5482
1,560,853,137
I_kwDODunzps5dCLqR
5,482
Reload features from Parquet metadata
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
2
"2023-01-28T13:12:31"
"2023-02-05T18:09:54"
null
MEMBER
null
The idea would be to allow this : ```python ds.to_parquet("my_dataset/ds.parquet") reloaded = load_dataset("my_dataset") assert ds.features == reloaded.features ``` And it should also work with Image and Audio types (right now they're reloaded as a dict type) This can be implemented by storing and reading the feature types in the parquet metadata, as we do for arrow files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5482/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5482/timeline
null
null
null
null
false
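A rough sketch of the idea in the request above, not the eventual implementation: `Features.arrow_schema` already carries the feature types in the Arrow schema metadata, and Parquet preserves schema key/value metadata, so the types can be written and read back explicitly today. The file name below is just an example.

```python
# Sketch: round-trip the feature types through the Parquet schema metadata.
import pyarrow as pa
import pyarrow.parquet as pq
from datasets import Dataset, Features

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# Write the table with the schema produced by `Features`, which embeds the type info.
table = pa.table({name: ds[name] for name in ds.column_names}, schema=ds.features.arrow_schema)
pq.write_table(table, "ds.parquet")

# Read the schema back and rebuild the Features from its metadata.
reloaded = Features.from_arrow_schema(pq.read_schema("ds.parquet"))
assert reloaded == ds.features
```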
https://api.github.com/repos/huggingface/datasets/issues/5481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5481/comments
https://api.github.com/repos/huggingface/datasets/issues/5481/events
https://github.com/huggingface/datasets/issues/5481
1,560,468,195
I_kwDODunzps5dAtrj
5,481
Load a cached dataset as iterable
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 3761482852, "node_id": "LA_kwDODunzps7gM6xk", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue", "name": "good second issue", "color": "BDE59C", "default": false, "description": "Issues a bit more difficult than \"Good First\" issues" } ]
open
false
null
[]
null
10
"2023-01-27T21:43:51"
"2023-02-01T16:28:48"
null
MEMBER
null
The idea would be to allow something like ```python ds = load_dataset("c4", "en", as_iterable=True) ``` to be used to train models. It would load an IterableDataset from the cached Arrow files. Cc @stas00 Edit: from the discussions we may load from cache when streaming=True
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5481/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 3, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5481/timeline
null
null
null
null
false
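While the flag requested above does not exist, one hedged workaround is to wrap an already-cached map-style `Dataset` into an `IterableDataset` with a generator; the Arrow cache stays memory-mapped, so nothing is loaded into RAM up front. This is only a sketch of the usage pattern, not the proposed API.

```python
# Workaround sketch: stream examples out of the cached Arrow files via a generator.
from datasets import Dataset, IterableDataset, load_dataset

def as_iterable(ds: Dataset) -> IterableDataset:
    def gen():
        # Row-by-row iteration over the memory-mapped dataset; ds.iter(batch_size=...)
        # could be used instead for faster, batched reads.
        yield from ds
    return IterableDataset.from_generator(gen, features=ds.features)

ds = load_dataset("c4", "en", split="train")   # downloads/prepares the Arrow cache once
iterable_ds = as_iterable(ds)
print(next(iter(iterable_ds)))
```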
https://api.github.com/repos/huggingface/datasets/issues/5479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5479/comments
https://api.github.com/repos/huggingface/datasets/issues/5479/events
https://github.com/huggingface/datasets/issues/5479
1,560,357,590
I_kwDODunzps5dASrW
5,479
audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated
{ "login": "jcho19", "id": 107211437, "node_id": "U_kgDOBmPqrQ", "avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcho19", "html_url": "https://github.com/jcho19", "followers_url": "https://api.github.com/users/jcho19/followers", "following_url": "https://api.github.com/users/jcho19/following{/other_user}", "gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcho19/subscriptions", "organizations_url": "https://api.github.com/users/jcho19/orgs", "repos_url": "https://api.github.com/users/jcho19/repos", "events_url": "https://api.github.com/users/jcho19/events{/privacy}", "received_events_url": "https://api.github.com/users/jcho19/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2023-01-27T20:01:22"
"2023-01-29T05:23:14"
"2023-01-29T05:23:14"
NONE
null
### Describe the bug I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libndfile installed on both computers, what could be missing/need to be updated in the one that doesn't work? On the remote env, libsndfile is 1.0.28 and ffmpeg is 4.2.1. from datasets import load_dataset ds = load_dataset("audiofolder", data_dir="...") Here is the output (should be generating 400+ rows): Downloading and preparing dataset audiofolder/default to ... Downloading data files: 0%| | 0/2 [00:00<?, ?it/s] Downloading data files: 0it [00:00, ?it/s] Extracting data files: 0it [00:00, ?it/s] Generating train split: 0 examples [00:00, ? examples/s] Dataset audiofolder downloaded and prepared to ... Subsequent calls will reuse this data. 0%| | 0/1 [00:00<?, ?it/s] DatasetDict({ train: Dataset({ features: ['audio', 'transcription'], num_rows: 1 }) }) Here is my pip environment in the one that doesn't work (uses torch 1.11.a0 from shared env): Package Version ------------------- ------------------- aiofiles 22.1.0 aiohttp 3.8.3 aiosignal 1.3.1 altair 4.2.1 anyio 3.6.2 appdirs 1.4.4 argcomplete 2.0.0 argon2-cffi 20.1.0 astunparse 1.6.3 async-timeout 4.0.2 attrs 21.2.0 audioread 3.0.0 backcall 0.2.0 bleach 4.0.0 certifi 2021.10.8 cffi 1.14.6 charset-normalizer 2.0.12 click 8.1.3 contourpy 1.0.7 cycler 0.11.0 datasets 2.9.0 debugpy 1.4.1 decorator 5.0.9 defusedxml 0.7.1 dill 0.3.6 distlib 0.3.4 entrypoints 0.3 evaluate 0.4.0 expecttest 0.1.3 fastapi 0.89.1 ffmpy 0.3.0 filelock 3.6.0 fonttools 4.38.0 frozenlist 1.3.3 fsspec 2023.1.0 future 0.18.2 gradio 3.16.2 h11 0.14.0 httpcore 0.16.3 httpx 0.23.3 huggingface-hub 0.12.0 idna 3.3 ipykernel 6.2.0 ipython 7.26.0 ipython-genutils 0.2.0 ipywidgets 7.6.3 jedi 0.18.0 Jinja2 3.0.1 jiwer 2.5.1 joblib 1.2.0 jsonschema 3.2.0 jupyter 1.0.0 jupyter-client 6.1.12 jupyter-console 6.4.0 jupyter-core 4.7.1 jupyterlab-pygments 0.1.2 jupyterlab-widgets 1.0.0 kiwisolver 1.4.4 Levenshtein 0.20.2 librosa 0.9.2 linkify-it-py 1.0.3 llvmlite 0.39.1 markdown-it-py 2.1.0 MarkupSafe 2.0.1 matplotlib 3.6.3 matplotlib-inline 0.1.2 mdit-py-plugins 0.3.3 mdurl 0.1.2 mistune 0.8.4 multidict 6.0.4 multiprocess 0.70.14 nbclient 0.5.4 nbconvert 6.1.0 nbformat 5.1.3 nest-asyncio 1.5.1 notebook 6.4.3 numba 0.56.4 numpy 1.20.3 orjson 3.8.5 packaging 21.0 pandas 1.5.3 pandocfilters 1.4.3 parso 0.8.2 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pip 22.3.1 pipx 1.1.0 platformdirs 2.5.2 pooch 1.6.0 prometheus-client 0.11.0 prompt-toolkit 3.0.19 psutil 5.9.0 ptyprocess 0.7.0 pyarrow 10.0.1 pycparser 2.20 pycryptodome 3.16.0 pydantic 1.10.4 pydub 0.25.1 Pygments 2.10.0 pyparsing 2.4.7 pyrsistent 0.18.0 python-dateutil 2.8.2 python-multipart 0.0.5 pytz 2022.7.1 PyYAML 6.0 pyzmq 22.2.1 qtconsole 5.1.1 QtPy 1.10.0 rapidfuzz 2.13.7 regex 2022.10.31 requests 2.27.1 resampy 0.4.2 responses 0.18.0 rfc3986 1.5.0 scikit-learn 1.2.1 scipy 1.6.3 Send2Trash 1.8.0 setuptools 65.5.1 shiboken6 6.3.1 shiboken6-generator 6.3.1 six 1.16.0 sniffio 1.3.0 soundfile 0.11.0 starlette 0.22.0 terminado 0.11.0 testpath 0.5.0 threadpoolctl 3.1.0 tokenizers 0.13.2 toolz 0.12.0 torch 1.11.0a0+gitunknown tornado 6.1 tqdm 4.64.1 traitlets 5.0.5 transformers 4.27.0.dev0 types-dataclasses 0.6.4 typing_extensions 4.1.1 uc-micro-py 1.0.1 urllib3 1.26.9 userpath 1.8.0 uvicorn 0.20.0 virtualenv 20.14.1 wcwidth 0.2.5 webencodings 0.5.1 websockets 10.4 
wheel 0.37.1 widgetsnbextension 3.5.1 xxhash 3.2.0 yarl 1.8.2 ### Steps to reproduce the bug Create a pip environment with the packages listed above (make sure ffmpeg and libsndfile is installed with same versions listed above). Create a custom audio dataset and load it in with load_dataset("audiofolder", ...) ### Expected behavior load_dataset should create a dataset with 400+ rows. ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.0 - PyArrow version: 10.0.1 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5479/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5479/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5477/comments
https://api.github.com/repos/huggingface/datasets/issues/5477/events
https://github.com/huggingface/datasets/issues/5477
1,559,909,892
I_kwDODunzps5c-lYE
5,477
Unpin sqlalchemy once issue is fixed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-01-27T15:01:55"
"2023-01-27T15:01:55"
null
MEMBER
null
Once the source issue is fixed: - pandas-dev/pandas#51015 we should revert the pin introduced in: - #5476
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5477/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5475/comments
https://api.github.com/repos/huggingface/datasets/issues/5475/events
https://github.com/huggingface/datasets/issues/5475
1,559,030,149
I_kwDODunzps5c7OmF
5,475
Dataset scan time is much slower than using native arrow
{ "login": "jonny-cyberhaven", "id": 121845112, "node_id": "U_kgDOB0M1eA", "avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonny-cyberhaven", "html_url": "https://github.com/jonny-cyberhaven", "followers_url": "https://api.github.com/users/jonny-cyberhaven/followers", "following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}", "gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}", "starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions", "organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs", "repos_url": "https://api.github.com/users/jonny-cyberhaven/repos", "events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}", "received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-01-27T01:32:25"
"2023-01-30T16:17:11"
"2023-01-30T16:17:11"
CONTRIBUTOR
null
### Describe the bug I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version. I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that explains this phenomenon? ### Steps to reproduce the bug https://colab.research.google.com/drive/11EtHDaGAf1DKCpvYnAPJUW-LFfAcDzHY?usp=sharing ### Expected behavior I expect scan times to be on par with using pyarrow directly. ### Environment info standard colab environment
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5475/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5474/comments
https://api.github.com/repos/huggingface/datasets/issues/5474/events
https://github.com/huggingface/datasets/issues/5474
1,558,827,155
I_kwDODunzps5c6dCT
5,474
Column project operation on `datasets.Dataset`
{ "login": "daskol", "id": 9336514, "node_id": "MDQ6VXNlcjkzMzY1MTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9336514?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daskol", "html_url": "https://github.com/daskol", "followers_url": "https://api.github.com/users/daskol/followers", "following_url": "https://api.github.com/users/daskol/following{/other_user}", "gists_url": "https://api.github.com/users/daskol/gists{/gist_id}", "starred_url": "https://api.github.com/users/daskol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daskol/subscriptions", "organizations_url": "https://api.github.com/users/daskol/orgs", "repos_url": "https://api.github.com/users/daskol/repos", "events_url": "https://api.github.com/users/daskol/events{/privacy}", "received_events_url": "https://api.github.com/users/daskol/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892865, "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate", "name": "duplicate", "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists" }, { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2023-01-26T21:47:53"
"2023-02-01T16:44:09"
null
NONE
null
### Feature request There is no operation to select a subset of columns of the original dataset. Expected API follows. ```python a = Dataset.from_dict({ 'int': [0, 1, 2], 'char': ['a', 'b', 'c'], 'none': [None] * 3, }) b = a.project('int', 'char') # usually, .select() print(a.column_names) # stdout: ['int', 'char', 'none'] print(b.column_names) # stdout: ['int', 'char'] ``` The `project` method could accept not only column names (as a `str`) but also a univariate function applied to the corresponding column, for example. Or keyword arguments could be used to rename columns in advance (see `pandas`, `pyspark`, `pyarrow`, and SQL). ### Motivation Projection is a typical operation in every data processing library. It is also a basic building block of well-known data manipulation languages like SQL. Without this operation the `datasets.Dataset` interface is not complete. ### Your contribution Not sure. Some of my PRs are still open and some do not have any discussions.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5474/timeline
null
null
null
null
false
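A minimal sketch of what such a `project` helper could look like on top of the existing API; the function itself is hypothetical and is built from `remove_columns` and `rename_columns`, with keyword arguments used for renaming as suggested in the request above.

```python
# Hypothetical helper, not a datasets API: keep only the requested columns and
# optionally rename some of them via new_name=old_name keyword arguments.
from datasets import Dataset

def project(ds: Dataset, *columns: str, **renames: str) -> Dataset:
    keep = set(columns) | set(renames.values())
    ds = ds.remove_columns([c for c in ds.column_names if c not in keep])
    return ds.rename_columns({old: new for new, old in renames.items()})

a = Dataset.from_dict({"int": [0, 1, 2], "char": ["a", "b", "c"], "none": [None] * 3})
b = project(a, "int", "char")
print(a.column_names)  # ['int', 'char', 'none']
print(b.column_names)  # ['int', 'char']
c = project(a, "int", letter="char")
print(c.column_names)  # ['int', 'letter']
```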
https://api.github.com/repos/huggingface/datasets/issues/5468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5468/comments
https://api.github.com/repos/huggingface/datasets/issues/5468/events
https://github.com/huggingface/datasets/issues/5468
1,558,066,625
I_kwDODunzps5c3jXB
5,468
Allow opposite of remove_columns on Dataset and DatasetDict
{ "login": "hollance", "id": 346853, "node_id": "MDQ6VXNlcjM0Njg1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/346853?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hollance", "html_url": "https://github.com/hollance", "followers_url": "https://api.github.com/users/hollance/followers", "following_url": "https://api.github.com/users/hollance/following{/other_user}", "gists_url": "https://api.github.com/users/hollance/gists{/gist_id}", "starred_url": "https://api.github.com/users/hollance/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hollance/subscriptions", "organizations_url": "https://api.github.com/users/hollance/orgs", "repos_url": "https://api.github.com/users/hollance/repos", "events_url": "https://api.github.com/users/hollance/events{/privacy}", "received_events_url": "https://api.github.com/users/hollance/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
9
"2023-01-26T12:28:09"
"2023-01-31T10:48:56"
null
NONE
null
### Feature request In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code: ```python COLUMNS_TO_KEEP = ["text", "audio"] all_columns = gigaspeech["train"].column_names columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP) gigaspeech = gigaspeech.remove_columns(columns_to_remove) ``` This kind of thing happens a lot when you don't need to keep all columns from the dataset. It would be more convenient (and less error prone) if you could just write: ```python gigaspeech = gigaspeech.keep_columns(["text", "audio"]) ``` Internally, `keep_columns` could still call `remove_columns`, but it expresses more clearly what the user's intent is. ### Motivation Less code to write for the user of the dataset. ### Your contribution -
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5468/timeline
null
null
null
null
false
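A sketch of the convenience helper requested above; `keep_columns` below is hypothetical and simply wraps `remove_columns`, as suggested, applied to every split of a `DatasetDict`.

```python
# Hypothetical helper, not a datasets API: keep only the listed columns in every split.
from typing import List
from datasets import DatasetDict

def keep_columns(dsets: DatasetDict, columns_to_keep: List[str]) -> DatasetDict:
    return DatasetDict(
        {
            split: ds.remove_columns([c for c in ds.column_names if c not in columns_to_keep])
            for split, ds in dsets.items()
        }
    )

# Usage (mirroring the blog snippet above; the dataset id is an example):
# gigaspeech = load_dataset("speechcolab/gigaspeech", "xs", use_auth_token=True)
# gigaspeech = keep_columns(gigaspeech, ["text", "audio"])
```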
https://api.github.com/repos/huggingface/datasets/issues/5465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5465/comments
https://api.github.com/repos/huggingface/datasets/issues/5465/events
https://github.com/huggingface/datasets/issues/5465
1,557,510,618
I_kwDODunzps5c1bna
5,465
audiofolder creates empty dataset even though the dataset passed in follows the correct structure
{ "login": "jcho19", "id": 107211437, "node_id": "U_kgDOBmPqrQ", "avatar_url": "https://avatars.githubusercontent.com/u/107211437?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jcho19", "html_url": "https://github.com/jcho19", "followers_url": "https://api.github.com/users/jcho19/followers", "following_url": "https://api.github.com/users/jcho19/following{/other_user}", "gists_url": "https://api.github.com/users/jcho19/gists{/gist_id}", "starred_url": "https://api.github.com/users/jcho19/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jcho19/subscriptions", "organizations_url": "https://api.github.com/users/jcho19/orgs", "repos_url": "https://api.github.com/users/jcho19/repos", "events_url": "https://api.github.com/users/jcho19/events{/privacy}", "received_events_url": "https://api.github.com/users/jcho19/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2023-01-26T01:45:45"
"2023-01-26T08:48:45"
"2023-01-26T08:48:45"
NONE
null
### Describe the bug The structure of my dataset folder called "my_dataset" is: data metadata.csv The data folder consists of all mp3 files and metadata.csv consists of file locations like 'data/...mp3' and transcriptions. There are 400+ mp3 files and corresponding transcriptions for my dataset. When I run the following: ds = load_dataset("audiofolder", data_dir="my_dataset") I get: Using custom data configuration default-... Downloading and preparing dataset audiofolder/default to /... Downloading data files: 0%| | 0/2 [00:00<?, ?it/s] Downloading data files: 0it [00:00, ?it/s] Extracting data files: 0it [00:00, ?it/s] Generating train split: 0 examples [00:00, ? examples/s] Dataset audiofolder downloaded and prepared to /.... Subsequent calls will reuse this data. 0%| | 0/1 [00:00<?, ?it/s] DatasetDict({ train: Dataset({ features: ['audio', 'transcription'], num_rows: 1 }) }) ### Steps to reproduce the bug Create a dataset folder called 'my_dataset' with a subfolder called 'data' that has mp3 files. Also, create metadata.csv that has file locations like 'data/...mp3' and their corresponding transcription. Run: ds = load_dataset("audiofolder", data_dir="my_dataset") ### Expected behavior It should generate a dataset with numerous rows. ### Environment info Run on Jupyter notebook
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5465/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5465/timeline
null
completed
null
null
false
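A one-row `train` split like the one above usually means the metadata file was not matched to the audio files. The sketch below shows a layout that `audiofolder` is documented to accept: the metadata file needs a column literally named `file_name` containing paths relative to the metadata file, and it must sit next to (or above) the audio it describes. The file names used here are made up for illustration.

```python
# Hedged layout sketch for audiofolder:
#
# my_dataset/
#   metadata.csv      <- columns: file_name, transcription
#   data/
#     rec_001.mp3
#     rec_002.mp3
import csv
from datasets import load_dataset

with open("my_dataset/metadata.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["file_name", "transcription"])            # the column must be `file_name`
    writer.writerow(["data/rec_001.mp3", "first transcription"])
    writer.writerow(["data/rec_002.mp3", "second transcription"])

ds = load_dataset("audiofolder", data_dir="my_dataset")
print(ds["train"].num_rows)  # expected: one row per audio file listed in metadata.csv
```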
https://api.github.com/repos/huggingface/datasets/issues/5464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5464/comments
https://api.github.com/repos/huggingface/datasets/issues/5464/events
https://github.com/huggingface/datasets/issues/5464
1,557,462,104
I_kwDODunzps5c1PxY
5,464
NonMatchingChecksumError for hendrycks_test
{ "login": "sarahwie", "id": 8027676, "node_id": "MDQ6VXNlcjgwMjc2NzY=", "avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sarahwie", "html_url": "https://github.com/sarahwie", "followers_url": "https://api.github.com/users/sarahwie/followers", "following_url": "https://api.github.com/users/sarahwie/following{/other_user}", "gists_url": "https://api.github.com/users/sarahwie/gists{/gist_id}", "starred_url": "https://api.github.com/users/sarahwie/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sarahwie/subscriptions", "organizations_url": "https://api.github.com/users/sarahwie/orgs", "repos_url": "https://api.github.com/users/sarahwie/repos", "events_url": "https://api.github.com/users/sarahwie/events{/privacy}", "received_events_url": "https://api.github.com/users/sarahwie/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-01-26T00:43:23"
"2023-01-27T05:44:31"
"2023-01-26T07:41:58"
NONE
null
### Describe the bug The checksum of the file has likely changed on the remote host. ### Steps to reproduce the bug `dataset = nlp.load_dataset("hendrycks_test", "anatomy")` ### Expected behavior no error thrown ### Environment info - `datasets` version: 2.2.1 - Platform: macOS-13.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5464/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5464/timeline
null
completed
null
null
false
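Two hedged workarounds that are commonly tried for the checksum error above, assuming either a stale cached archive or a remote file that genuinely changed; note that `datasets.load_dataset` is the current entry point (the `nlp` package name is legacy).

```python
# Workaround sketch: re-download the archives, and if the remote file really changed,
# skip the recorded checksum verification (flag name as of datasets 2.x).
from datasets import load_dataset

dataset = load_dataset("hendrycks_test", "anatomy", download_mode="force_redownload")

# Only if the mismatch persists because the hosted file was updated:
dataset = load_dataset("hendrycks_test", "anatomy", ignore_verifications=True)
```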
https://api.github.com/repos/huggingface/datasets/issues/5461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5461/comments
https://api.github.com/repos/huggingface/datasets/issues/5461/events
https://github.com/huggingface/datasets/issues/5461
1,555,532,719
I_kwDODunzps5ct4uv
5,461
Discrepancy in `nyu_depth_v2` dataset
{ "login": "awsaf49", "id": 36858976, "node_id": "MDQ6VXNlcjM2ODU4OTc2", "avatar_url": "https://avatars.githubusercontent.com/u/36858976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/awsaf49", "html_url": "https://github.com/awsaf49", "followers_url": "https://api.github.com/users/awsaf49/followers", "following_url": "https://api.github.com/users/awsaf49/following{/other_user}", "gists_url": "https://api.github.com/users/awsaf49/gists{/gist_id}", "starred_url": "https://api.github.com/users/awsaf49/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/awsaf49/subscriptions", "organizations_url": "https://api.github.com/users/awsaf49/orgs", "repos_url": "https://api.github.com/users/awsaf49/repos", "events_url": "https://api.github.com/users/awsaf49/events{/privacy}", "received_events_url": "https://api.github.com/users/awsaf49/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
37
"2023-01-24T19:15:46"
"2023-02-06T20:52:00"
null
CONTRIBUTOR
null
### Describe the bug I think there is a discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-side comparison, ![image](https://user-images.githubusercontent.com/36858976/214381162-1d9582c2-6750-4114-a01a-61ca1cd5f872.png) I tried to find the origin of this issue but sadly as I mentioned in tensorflow/datasets/issues/4674, the download link from `fast-depth` doesn't work anymore hence couldn't verify if the error originated there or during porting data from there to HF. Hi @sayakpaul, as you worked on huggingface/datasets/issues/5255, if you still have access to that data could you please share the data or perhaps checkout this issue? ### Steps to reproduce the bug This [notebook](https://colab.research.google.com/drive/1K3ZU8XUPRDOYD38MQS9nreQXJYitlKSW?usp=sharing#scrollTo=UEW7QSh0jf0i) from @sayakpaul could be used to generate depth maps and actual ground truths could be checked from this [dataset](https://www.kaggle.com/datasets/awsaf49/nyuv2-bts-dataset) from BTS repo. > Note: BTS dataset has only 36K data compared to the train-test 50K. They sampled the data as adjacent frames look quite the same ### Expected behavior Expected depth maps should be smooth rather than discrete/clipped. ### Environment info - `datasets` version: 2.8.1.dev0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5461/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5461/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5458/comments
https://api.github.com/repos/huggingface/datasets/issues/5458/events
https://github.com/huggingface/datasets/issues/5458
1,555,054,737
I_kwDODunzps5csECR
5,458
slice split while streaming
{ "login": "SvenDS9", "id": 122370631, "node_id": "U_kgDOB0s6Rw", "avatar_url": "https://avatars.githubusercontent.com/u/122370631?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SvenDS9", "html_url": "https://github.com/SvenDS9", "followers_url": "https://api.github.com/users/SvenDS9/followers", "following_url": "https://api.github.com/users/SvenDS9/following{/other_user}", "gists_url": "https://api.github.com/users/SvenDS9/gists{/gist_id}", "starred_url": "https://api.github.com/users/SvenDS9/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SvenDS9/subscriptions", "organizations_url": "https://api.github.com/users/SvenDS9/orgs", "repos_url": "https://api.github.com/users/SvenDS9/repos", "events_url": "https://api.github.com/users/SvenDS9/events{/privacy}", "received_events_url": "https://api.github.com/users/SvenDS9/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-01-24T14:08:17"
"2023-01-24T15:11:47"
"2023-01-24T15:11:47"
NONE
null
### Describe the bug When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported. Did I miss this in the documentation? ### Steps to reproduce the bug `load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")` causes ValueError: Bad split: train[:3]. Available splits: ['train', 'test'] in builder.py, line 1213, in as_streaming_dataset ### Expected behavior The first 3 entries of the dataset as a stream ### Environment info - `datasets` version: 2.8.0 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.10.9 - PyArrow version: 10.0.1 - Pandas version: 1.5.2
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5458/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5458/timeline
null
completed
null
null
false
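Split slicing is not available with `streaming=True`, but `IterableDataset.take` and `IterableDataset.skip` give an equivalent result lazily; a short sketch follows.

```python
# Streaming equivalent of split="train[:3]".
from datasets import load_dataset

streamed = load_dataset("lhoestq/demo1", streaming=True, split="train")

first_three = streamed.take(3)   # lazily yields only the first 3 examples
print(list(first_three))

rest = streamed.skip(3)          # everything after the first 3 examples
```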
https://api.github.com/repos/huggingface/datasets/issues/5457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5457/comments
https://api.github.com/repos/huggingface/datasets/issues/5457/events
https://github.com/huggingface/datasets/issues/5457
1,554,171,264
I_kwDODunzps5cosWA
5,457
prebuilt dataset relies on `downloads/extracted`
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "organizations_url": "https://api.github.com/users/stas00/orgs", "repos_url": "https://api.github.com/users/stas00/repos", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "received_events_url": "https://api.github.com/users/stas00/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-01-24T02:09:32"
"2023-01-24T18:14:10"
null
MEMBER
null
### Describe the bug I pre-built the dataset: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` and it can be used just fine. now I wipe out `downloads/extracted` and it no longer works. ``` rm -r ~/.cache/huggingface/datasets/downloads ``` That is I can still load it: ``` python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing No config specified, defaulting to: general-pmd-synthetic-testing/100.unique Found cached dataset general-pmd-synthetic-testing (/home/stas/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing/100.unique/1.1.1/86bc445e3e48cb5ef79de109eb4e54ff85b318cd55c3835c4ee8f86eae33d9d2) ``` but if I try to use it: ``` E stderr: Traceback (most recent call last): E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/main.py", line 116, in <module> E stderr: train_loader, val_loader = get_dataloaders( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 170, in get_dataloaders E stderr: train_loader = get_dataloader_from_config( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 443, in get_dataloader_from_config E stderr: dataloader = get_dataloader( E stderr: File "/mnt/nvme0/code/huggingface/m4-master-6/m4/training/dataset.py", line 264, in get_dataloader E stderr: is_pmd = "meta" in hf_dataset[0] and "source" in hf_dataset[0] E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2601, in __getitem__ E stderr: return self._getitem( E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/arrow_dataset.py", line 2586, in _getitem E stderr: formatted_output = format_table( E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 634, in format_table E stderr: return formatter(pa_table, query_type=query_type) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 406, in __call__ E stderr: return self.format_row(pa_table) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 442, in format_row E stderr: row = self.python_features_decoder.decode_row(row) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/formatting/formatting.py", line 225, in decode_row E stderr: return self.features.decode_example(row) if self.features else row E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1846, in decode_example E stderr: return { E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1847, in <dictcomp> E stderr: column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1304, in decode_nested_example E stderr: return decode_nested_example([schema.feature], obj) E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1296, in decode_nested_example E stderr: if decode_nested_example(sub_schema, first_elmt) != first_elmt: E stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/features.py", line 1309, in decode_nested_example E stderr: return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) E 
stderr: File "/mnt/nvme0/code/huggingface/datasets-master/src/datasets/features/image.py", line 144, in decode_example E stderr: image = PIL.Image.open(path) E stderr: File "/home/stas/anaconda3/envs/py38-pt113/lib/python3.8/site-packages/PIL/Image.py", line 3092, in open E stderr: fp = builtins.open(filename, "rb") E stderr: FileNotFoundError: [Errno 2] No such file or directory: '/mnt/nvme0/code/data/cache/huggingface/datasets/downloads/extracted/134227b9b94c4eccf19b205bf3021d4492d0227b9be6c2ddb6bf517d8d55a8cb/data/101/images_01.jpg' ``` Only if I wipe out the cached dir and rebuild then it starts working as `download/extracted` is back again with extracted files. ``` rm -r ~/.cache/huggingface/datasets/HuggingFaceM4___general-pmd-synthetic-testing python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing ``` I think there are 2 issues here: 1. why does it still rely on extracted files after `arrow` files were printed - did I do something incorrectly when creating this dataset? 2. why doesn't the dataset know that it has been gutted and loads just fine? If it has a dependency on `download/extracted` then `load_dataset` should check if it's there and fail or force rebuilding. I am sure this could be a very expensive operation, so probably really solving #1 will not require this check. and this second item is probably an overkill. Other than perhaps if it had an optional `check_consistency` flag to do that. ### Environment info datasets@main
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5457/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5457/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5454/comments
https://api.github.com/repos/huggingface/datasets/issues/5454/events
https://github.com/huggingface/datasets/issues/5454
1,552,890,419
I_kwDODunzps5cjzoz
5,454
Save and resume the state of a DataLoader
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 2067400324, "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion", "name": "generic discussion", "color": "c5def5", "default": false, "description": "Generic discussion on the library" } ]
open
false
null
[]
null
2
"2023-01-23T10:58:54"
"2023-01-24T01:45:48"
null
MEMBER
null
It would be nice when using `datasets` with a PyTorch DataLoader to be able to resume a training from a DataLoader state (e.g. to resume a training that crashed) What I have in mind (but lmk if you have other ideas or comments): For map-style datasets, this requires to have a PyTorch Sampler state that can be saved and reloaded per node and worker. For iterable datasets, this requires to save the state of the dataset iterator, which includes: - the current shard idx and row position in the current shard - the epoch number - the rng state - the shuffle buffer Right now you can already resume the data loading of an iterable dataset by using `IterableDataset.skip` but it takes a lot of time because it re-iterates on all the past data until it reaches the resuming point. cc @stas00 @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5454/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5454/timeline
null
null
null
null
false
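A rough sketch of the resume path that exists today for iterable datasets, as mentioned in the issue above: record how many examples were consumed and skip them on restart. It is slow because the skipped examples are re-read, and a full solution would also need to restore the epoch number, RNG state and shuffle buffer.

```python
# Sketch of checkpoint-and-skip resumption for a streaming dataset (slow but available today).
from datasets import load_dataset

ds = load_dataset("c4", "en", split="train", streaming=True)

consumed = 0
checkpoint = {}
try:
    for example in ds:
        # ... training step ...
        consumed += 1
except KeyboardInterrupt:
    checkpoint = {"consumed_examples": consumed}   # persist with the model/optimizer state

# Later, to resume from the same position:
resumed = load_dataset("c4", "en", split="train", streaming=True).skip(
    checkpoint.get("consumed_examples", 0)
)
```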
https://api.github.com/repos/huggingface/datasets/issues/5451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5451/comments
https://api.github.com/repos/huggingface/datasets/issues/5451/events
https://github.com/huggingface/datasets/issues/5451
1,552,336,300
I_kwDODunzps5chsWs
5,451
ImageFolder BadZipFile: Bad offset for central directory
{ "login": "hmartiro", "id": 1524208, "node_id": "MDQ6VXNlcjE1MjQyMDg=", "avatar_url": "https://avatars.githubusercontent.com/u/1524208?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hmartiro", "html_url": "https://github.com/hmartiro", "followers_url": "https://api.github.com/users/hmartiro/followers", "following_url": "https://api.github.com/users/hmartiro/following{/other_user}", "gists_url": "https://api.github.com/users/hmartiro/gists{/gist_id}", "starred_url": "https://api.github.com/users/hmartiro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hmartiro/subscriptions", "organizations_url": "https://api.github.com/users/hmartiro/orgs", "repos_url": "https://api.github.com/users/hmartiro/repos", "events_url": "https://api.github.com/users/hmartiro/events{/privacy}", "received_events_url": "https://api.github.com/users/hmartiro/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-01-22T23:50:12"
"2023-01-23T17:25:56"
null
NONE
null
### Describe the bug I'm getting the following exception: ``` lib/python3.10/zipfile.py:1353 in _RealGetContents │ │ │ │ 1350 │ │ # self.start_dir: Position of start of central directory │ │ 1351 │ │ self.start_dir = offset_cd + concat │ │ 1352 │ │ if self.start_dir < 0: │ │ ❱ 1353 │ │ │ raise BadZipFile("Bad offset for central directory") │ │ 1354 │ │ fp.seek(self.start_dir, 0) │ │ 1355 │ │ data = fp.read(size_cd) │ │ 1356 │ │ fp = io.BytesIO(data) │ ╰──────────────────────────────────────────────────────────────────────────────────────────────────╯ BadZipFile: Bad offset for central directory Extracting data files: 35%|█████████████████▊ | 38572/110812 [00:10<00:20, 3576.26it/s] ``` ### Steps to reproduce the bug ``` load_dataset( args.dataset_name, args.dataset_config_name, cache_dir=args.cache_dir, ), ``` ### Expected behavior loads the dataset ### Environment info datasets==2.8.0 Python 3.10.8 Linux 129-146-3-202 5.15.0-52-generic #58~20.04.1-Ubuntu SMP Thu Oct 13 13:09:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5451/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5451/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5450/comments
https://api.github.com/repos/huggingface/datasets/issues/5450/events
https://github.com/huggingface/datasets/issues/5450
1,551,109,365
I_kwDODunzps5cdAz1
5,450
to_tf_dataset with a TF collator causes bizarrely persistent slowdown
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
"2023-01-20T16:08:37"
"2023-01-23T18:54:09"
null
MEMBER
null
### Describe the bug This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing) Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data collator that returns `tf` tensors, become very slow. We haven't been able to figure this one out - it can be intermittent, and we have no idea what could possibly cause it. The weirdest thing is that **the slowdown affects other attempts to access the underlying dataset**. If you try to iterate over the `tf.data.Dataset`, then interrupt execution, and then try to iterate over the original dataset, the original dataset is now also very slow! This is true even if the dataset format is not set to `tf` - the iteration is slow even though it's not calling TF at all! There is a simple workaround for this - we can simply get our data collators to return `np` tensors. When we do this, the bug is never triggered and everything is fine. In general, `np` is preferred for this kind of preprocessing work anyway, when the preprocessing is not going to be compiled into a pure `tf.data` pipeline! However, the issue is fascinating, and the TF team were wondering if anyone in datasets (cc @lhoestq @mariosasko) might have an idea of what could cause this. ### Steps to reproduce the bug Run the attached Colab. ### Expected behavior The slowdown should go away, or at least not persist after we stop iterating over the `tf.data.Dataset` ### Environment info The issue occurs on multiple versions of Python and TF, both on local machines and on Colab. All testing was done using the latest versions of `transformers` and `datasets` from `main`
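For reference, a minimal sketch of the `np`-collator workaround described above (the model, dataset and column names are assumptions and are not taken from the Colab):

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

# Workaround sketch: make the collator return NumPy tensors instead of TF
# tensors, which avoids triggering the persistent slowdown described above.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
ds = load_dataset("glue", "sst2", split="train")
ds = ds.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

collator = DataCollatorWithPadding(tokenizer, return_tensors="np")  # "np" rather than "tf"
tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=8,
    shuffle=True,
    collate_fn=collator,
)
```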
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5450/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5450/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5448/comments
https://api.github.com/repos/huggingface/datasets/issues/5448/events
https://github.com/huggingface/datasets/issues/5448
1,550,618,514
I_kwDODunzps5cbI-S
5,448
Support fsspec 2023.1.0 in CI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-20T10:26:31"
"2023-01-20T13:26:05"
"2023-01-20T13:26:05"
MEMBER
null
Once we find out the root cause of: - #5445 we should revert the temporary pin on fsspec introduced by: - #5447
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5448/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5448/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5445/comments
https://api.github.com/repos/huggingface/datasets/issues/5445/events
https://github.com/huggingface/datasets/issues/5445
1,550,588,703
I_kwDODunzps5cbBsf
5,445
CI tests are broken: AttributeError: 'mappingproxy' object has no attribute 'target'
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-20T10:03:10"
"2023-01-20T10:28:44"
"2023-01-20T10:28:44"
MEMBER
null
CI tests are broken, raising `AttributeError: 'mappingproxy' object has no attribute 'target'`. See: https://github.com/huggingface/datasets/actions/runs/3966497597/jobs/6797384185 ``` ... ERROR tests/test_streaming_download_manager.py::TestxPath::test_xpath_rglob[mock://top_level-date=2019-10-0[1-4]/*-expected_paths4] - AttributeError: 'mappingproxy' object has no attribute 'target' ===== 2076 passed, 19 skipped, 15 warnings, 47 errors in 115.54s (0:01:55) ===== ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5445/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5444/comments
https://api.github.com/repos/huggingface/datasets/issues/5444/events
https://github.com/huggingface/datasets/issues/5444
1,550,185,071
I_kwDODunzps5cZfJv
5,444
info messages logged as warnings
{ "login": "davidgilbertson", "id": 4443482, "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidgilbertson", "html_url": "https://github.com/davidgilbertson", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
"2023-01-20T01:19:18"
"2023-01-25T15:43:22"
null
NONE
null
### Describe the bug Code in `datasets` is using `logger.warning` when it should be using `logger.info`. Some of these are probably a matter of opinion, but I think anything starting with `logger.warning(f"Loading cached` clearly falls into the info category. Definitions from the Python docs for reference: * INFO: Confirmation that things are working as expected. * WARNING: An indication that something unexpected happened, or indicative of some problem in the near future (e.g. ‘disk space low’). The software is still working as expected. In theory, a user should be able to resolve things such that there are no warnings. ### Steps to reproduce the bug Load any dataset that's already cached. ### Expected behavior No output when log level is at the default WARNING level. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 9.0.0 - Pandas version: 1.5.2
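Until these messages are downgraded, a possible user-side workaround (a sketch, not a fix for the underlying issue) is to raise the library's verbosity threshold:

```python
from datasets import logging as ds_logging

# Only let ERROR-level records through, which silences the cache-hit messages
# currently emitted as warnings; set_verbosity_warning() restores the default.
ds_logging.set_verbosity_error()
```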
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5444/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5444/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5442/comments
https://api.github.com/repos/huggingface/datasets/issues/5442/events
https://github.com/huggingface/datasets/issues/5442
1,550,084,450
I_kwDODunzps5cZGli
5,442
OneDrive Integrations with HF Datasets
{ "login": "Mohammed20201991", "id": 59222637, "node_id": "MDQ6VXNlcjU5MjIyNjM3", "avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Mohammed20201991", "html_url": "https://github.com/Mohammed20201991", "followers_url": "https://api.github.com/users/Mohammed20201991/followers", "following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}", "gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}", "starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions", "organizations_url": "https://api.github.com/users/Mohammed20201991/orgs", "repos_url": "https://api.github.com/users/Mohammed20201991/repos", "events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}", "received_events_url": "https://api.github.com/users/Mohammed20201991/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2023-01-19T23:12:08"
"2023-01-20T18:05:52"
null
NONE
null
### Feature request First of all, I would like to thank the whole community that developed the datasets storage and made it freely available. How can we integrate a OneDrive account, or any other cloud storage (like Google Drive, ...), with the **HF** datasets section? For example, if I have **50GB** on my **OneDrive** account, I would like to move data between the drive and a Hugging Face repo, or vice versa. ### Motivation Make the datasets section more flexible with other possible storage backends, similar to the integration between Google Colab and Google Drive. ### Your contribution Can be done using the Hugging Face CLI
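As a rough illustration of what is already possible today (not an official OneDrive integration), a cloud-drive folder that is synced or mounted locally can be pushed with `huggingface_hub`; the repo id and paths below are assumptions:

```python
from huggingface_hub import upload_folder

# Sketch: push a locally synced OneDrive folder to a dataset repo on the Hub.
upload_folder(
    repo_id="my-username/my-dataset",      # hypothetical dataset repo
    repo_type="dataset",
    folder_path="/mnt/onedrive/my-data",   # e.g. a OneDrive folder synced to the local disk
)
```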
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5442/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5439/comments
https://api.github.com/repos/huggingface/datasets/issues/5439/events
https://github.com/huggingface/datasets/issues/5439
1,537,973,564
I_kwDODunzps5bq508
5,439
[dataset request] Add Common Voice 12.0
{ "login": "MohammedRakib", "id": 31034499, "node_id": "MDQ6VXNlcjMxMDM0NDk5", "avatar_url": "https://avatars.githubusercontent.com/u/31034499?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MohammedRakib", "html_url": "https://github.com/MohammedRakib", "followers_url": "https://api.github.com/users/MohammedRakib/followers", "following_url": "https://api.github.com/users/MohammedRakib/following{/other_user}", "gists_url": "https://api.github.com/users/MohammedRakib/gists{/gist_id}", "starred_url": "https://api.github.com/users/MohammedRakib/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MohammedRakib/subscriptions", "organizations_url": "https://api.github.com/users/MohammedRakib/orgs", "repos_url": "https://api.github.com/users/MohammedRakib/repos", "events_url": "https://api.github.com/users/MohammedRakib/events{/privacy}", "received_events_url": "https://api.github.com/users/MohammedRakib/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[ { "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-18T13:07:05"
"2023-01-25T18:38:53"
null
NONE
null
### Feature request Please add the Common Voice 12.0 datasets. Apart from English, a significant amount of audio data has been added to the other minor-language datasets. ### Motivation The dataset link: https://commonvoice.mozilla.org/en/datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5439/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5439/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5437
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5437/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5437/comments
https://api.github.com/repos/huggingface/datasets/issues/5437/events
https://github.com/huggingface/datasets/issues/5437
1,536,837,144
I_kwDODunzps5bmkYY
5,437
Can't load png dataset with 4 channel (RGBA)
{ "login": "WiNE-iNEFF", "id": 41611046, "node_id": "MDQ6VXNlcjQxNjExMDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WiNE-iNEFF", "html_url": "https://github.com/WiNE-iNEFF", "followers_url": "https://api.github.com/users/WiNE-iNEFF/followers", "following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}", "gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}", "starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions", "organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs", "repos_url": "https://api.github.com/users/WiNE-iNEFF/repos", "events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}", "received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-01-17T18:22:27"
"2023-01-18T20:20:15"
"2023-01-18T20:20:15"
NONE
null
I am trying to create a dataset containing about 9000 PNG images, 64x64 in size, all of them 4-channel (RGBA). When trying to use load_dataset(), a dataset is created from only 2 images. I cannot understand what exactly is interfering.![Screenshot_20230117_212213.jpg](https://user-images.githubusercontent.com/41611046/212980147-9aa68e30-76e9-4b61-a937-c2fdabd56564.jpg)
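For reproduction purposes, a minimal sketch of the loading call (the directory path is an assumption; the point is to check how many rows the builder actually picks up):

```python
from datasets import load_dataset

# Load a local folder of RGBA PNGs with the imagefolder builder and inspect the result.
ds = load_dataset("imagefolder", data_dir="path/to/png_folder", split="train")  # hypothetical path
print(ds.num_rows)           # expected ~9000, but only 2 according to the report above
print(ds[0]["image"].mode)   # PIL mode, e.g. "RGBA"
```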
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5437/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5437/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5435/comments
https://api.github.com/repos/huggingface/datasets/issues/5435/events
https://github.com/huggingface/datasets/issues/5435
1,536,099,300
I_kwDODunzps5bjwPk
5,435
Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage
{ "login": "HaoyuYang59", "id": 80093591, "node_id": "MDQ6VXNlcjgwMDkzNTkx", "avatar_url": "https://avatars.githubusercontent.com/u/80093591?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HaoyuYang59", "html_url": "https://github.com/HaoyuYang59", "followers_url": "https://api.github.com/users/HaoyuYang59/followers", "following_url": "https://api.github.com/users/HaoyuYang59/following{/other_user}", "gists_url": "https://api.github.com/users/HaoyuYang59/gists{/gist_id}", "starred_url": "https://api.github.com/users/HaoyuYang59/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HaoyuYang59/subscriptions", "organizations_url": "https://api.github.com/users/HaoyuYang59/orgs", "repos_url": "https://api.github.com/users/HaoyuYang59/repos", "events_url": "https://api.github.com/users/HaoyuYang59/events{/privacy}", "received_events_url": "https://api.github.com/users/HaoyuYang59/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2023-01-17T10:04:16"
"2023-01-19T09:56:03"
"2023-01-19T09:56:03"
NONE
null
### Describe the bug In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states: > Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer. Therefore it is advised to shuffle the dataset before splitting using take or skip. See more details in the [Shuffling the dataset: shuffle](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#iterable-dataset-shuffling) section. >> \# You can also create splits from a shuffled dataset >> train_dataset = shuffled_dataset.skip(1000) >> eval_dataset = shuffled_dataset.take(1000) Where the shuffled dataset comes from: `shuffled_dataset = dataset.shuffle(buffer_size=10_000, seed=42)` At least in TensorFlow 2.9/2.10/2.11, the [docs](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#shuffle) state that the `reshuffle_each_iteration` argument is `True` by default. This means the dataset would be shuffled after each epoch, and as a result **the validation data would leak into the training set**. ### Steps to reproduce the bug N/A ### Expected behavior The `reshuffle_each_iteration` argument should be set to `False`. ### Environment info TensorFlow 2.9/2.10/2.11
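A pure `tf.data` illustration of the concern (synthetic data; only meant to show where `reshuffle_each_iteration` enters the picture):

```python
import tensorflow as tf

# With reshuffle_each_iteration=True (the default), the shuffled order is redrawn
# every epoch, so a take/skip split carved out of `shuffled` can receive different
# elements across epochs; setting it to False freezes the split.
ds = tf.data.Dataset.range(10_000)
shuffled = ds.shuffle(buffer_size=10_000, seed=42, reshuffle_each_iteration=False)
eval_ds = shuffled.take(1_000)
train_ds = shuffled.skip(1_000)
```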
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5435/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5435/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5434
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5434/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5434/comments
https://api.github.com/repos/huggingface/datasets/issues/5434/events
https://github.com/huggingface/datasets/issues/5434
1,536,090,042
I_kwDODunzps5bjt-6
5,434
sample_dataset module not found
{ "login": "nickums", "id": 15816213, "node_id": "MDQ6VXNlcjE1ODE2MjEz", "avatar_url": "https://avatars.githubusercontent.com/u/15816213?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nickums", "html_url": "https://github.com/nickums", "followers_url": "https://api.github.com/users/nickums/followers", "following_url": "https://api.github.com/users/nickums/following{/other_user}", "gists_url": "https://api.github.com/users/nickums/gists{/gist_id}", "starred_url": "https://api.github.com/users/nickums/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nickums/subscriptions", "organizations_url": "https://api.github.com/users/nickums/orgs", "repos_url": "https://api.github.com/users/nickums/repos", "events_url": "https://api.github.com/users/nickums/events{/privacy}", "received_events_url": "https://api.github.com/users/nickums/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-01-17T09:57:54"
"2023-01-19T13:52:12"
"2023-01-19T07:55:11"
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5434/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5434/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5433/comments
https://api.github.com/repos/huggingface/datasets/issues/5433/events
https://github.com/huggingface/datasets/issues/5433
1,536,017,901
I_kwDODunzps5bjcXt
5,433
Support latest Docker image in CI benchmarks
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
3
"2023-01-17T09:06:08"
"2023-01-18T06:29:08"
"2023-01-18T06:29:08"
MEMBER
null
Once we find out the root cause of: - #5431 we should revert the temporary pin on the Docker image version introduced by: - #5432
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5433/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5433/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5431
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5431/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5431/comments
https://api.github.com/repos/huggingface/datasets/issues/5431/events
https://github.com/huggingface/datasets/issues/5431
1,535,862,621
I_kwDODunzps5bi2dd
5,431
CI benchmarks are broken: Unknown arguments: runnerPath, path
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 4296013012, "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance", "name": "maintenance", "color": "d4c5f9", "default": false, "description": "Maintenance tasks" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
0
"2023-01-17T06:49:57"
"2023-01-18T06:33:24"
"2023-01-17T08:51:18"
MEMBER
null
Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161 ``` Unknown arguments: runnerPath, path ``` Stack trace: ``` 100%|██████████| 500/500 [00:01<00:00, 338.98ba/s] Updating lock file 'dvc.lock' To track the changes with git, run: git add dvc.lock To enable auto staging, run: dvc config core.autostage true Use `dvc push` to send your updates to remote storage. cml send-comment <markdown file> Global Options: --log Logging verbosity [string] [choices: "error", "warn", "info", "debug"] [default: "info"] --driver Git provider where the repository is hosted [string] [choices: "github", "gitlab", "bitbucket"] [default: infer from the environment] --repo Repository URL or slug [string] [default: infer from the environment] --driver-token, --token CI driver personal/project access token (PAT) [string] [default: infer from the environment] --help Show help [boolean] Options: --target Comment type (`commit`, `pr`, `commit/f00bar`, `pr/42`, `issue/1337`),default is automatic (`pr` but fallback to `commit`). [string] --watch Watch for changes and automatically update the comment [boolean] --publish Upload any local images found in the Markdown report [boolean] [default: true] --publish-url Self-hosted image server URL [string] [default: "https://asset.cml.dev/"] --publish-native, --native Uses driver's native capabilities to upload assets instead of CML's storage; not available on GitHub [boolean] --watermark-title Hidden comment marker (used for targeting in subsequent `cml comment update`); "{workflow}" & "{run}" are auto-replaced [string] [default: ""] Unknown arguments: runnerPath, path Error: Process completed with exit code 1. ``` Issue reported to iterative/cml: - iterative/cml#1319
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5431/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5431/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5430
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5430/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5430/comments
https://api.github.com/repos/huggingface/datasets/issues/5430/events
https://github.com/huggingface/datasets/issues/5430
1,535,856,503
I_kwDODunzps5bi093
5,430
Support Apache Beam >= 2.44.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-17T06:42:12"
"2023-01-17T16:12:18"
null
MEMBER
null
Once we find out the root cause of: - #5426 we should revert the temporary pin on apache-beam introduced by: - #5429
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5430/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5430/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5428/comments
https://api.github.com/repos/huggingface/datasets/issues/5428/events
https://github.com/huggingface/datasets/issues/5428
1,535,166,139
I_kwDODunzps5bgMa7
5,428
Load/Save FAISS index using fsspec
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
"2023-01-16T16:08:12"
"2023-01-19T16:34:34"
null
CONTRIBUTOR
null
### Feature request From what I understand, `faiss` already supports this ([link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)). I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`. ### Motivation In my case, I'm saving FAISS indexes in cloud storage and using `fsspec` to load them. It would be ideal if I could pass the stream directly instead of copying the file locally (or mounting the bucket) and then loading the index. ### Your contribution I can submit the PR
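A rough sketch of a user-side workaround today, serializing the index through memory with `fsspec` instead of going through a local file (the bucket path is an assumption):

```python
import faiss
import fsspec
import numpy as np

# Write/read a FAISS index through an fsspec file object by serializing it to
# bytes in memory (no local temp file and no bucket mount required).
index = faiss.IndexFlatL2(128)  # hypothetical index

with fsspec.open("s3://my-bucket/embeddings.faiss", "wb") as f:  # hypothetical location
    f.write(faiss.serialize_index(index).tobytes())

with fsspec.open("s3://my-bucket/embeddings.faiss", "rb") as f:
    index = faiss.deserialize_index(np.frombuffer(f.read(), dtype=np.uint8))
```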
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5428/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5428/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5427
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5427/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5427/comments
https://api.github.com/repos/huggingface/datasets/issues/5427/events
https://github.com/huggingface/datasets/issues/5427
1,535,162,889
I_kwDODunzps5bgLoJ
5,427
Unable to download dataset id_clickbait
{ "login": "ilos-vigil", "id": 45941585, "node_id": "MDQ6VXNlcjQ1OTQxNTg1", "avatar_url": "https://avatars.githubusercontent.com/u/45941585?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ilos-vigil", "html_url": "https://github.com/ilos-vigil", "followers_url": "https://api.github.com/users/ilos-vigil/followers", "following_url": "https://api.github.com/users/ilos-vigil/following{/other_user}", "gists_url": "https://api.github.com/users/ilos-vigil/gists{/gist_id}", "starred_url": "https://api.github.com/users/ilos-vigil/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ilos-vigil/subscriptions", "organizations_url": "https://api.github.com/users/ilos-vigil/orgs", "repos_url": "https://api.github.com/users/ilos-vigil/repos", "events_url": "https://api.github.com/users/ilos-vigil/events{/privacy}", "received_events_url": "https://api.github.com/users/ilos-vigil/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-01-16T16:05:36"
"2023-01-18T09:51:28"
"2023-01-18T09:25:19"
NONE
null
### Describe the bug I tried to download the dataset `id_clickbait`, but received this error message. ``` FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip ``` When I open the link in a browser, I get this XML data. ```xml <?xml version="1.0" encoding="UTF-8"?> <Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>md-datasets-cache-zipfiles-prod</BucketName><RequestId>NVRM6VEEQD69SD00</RequestId><HostId>W/SPDxLGvlCGi0OD6d7mSDvfOAUqLAfvs9nTX50BkJrjMny+X9Jnqp/Li2lG9eTUuT4MUkAA2jjTfCrCiUmu7A==</HostId></Error> ``` ### Steps to reproduce the bug Code snippet: ``` from datasets import load_dataset load_dataset('id_clickbait', 'annotated') load_dataset('id_clickbait', 'raw') ``` Link to Kaggle notebook: https://www.kaggle.com/code/ilosvigil/bug-check-on-id-clickbait-dataset ### Expected behavior Successfully download and load the `id_clickbait` dataset. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5427/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5427/timeline
null
completed
null
null
false