url: string (length 58-61)
repository_url: string (1 distinct value)
labels_url: string (length 72-75)
comments_url: string (length 67-70)
events_url: string (length 65-68)
html_url: string (length 48-51)
id: int64 (600M-1.08B)
node_id: string (length 18-24)
number: int64 (2-3.45k)
title: string (length 1-276)
user: dict
labels: list
state: string (2 distinct values)
locked: bool (1 distinct value)
assignee: dict
assignees: list
milestone: dict
comments: sequence
created_at: int64 (1,587B-1,640B)
updated_at: int64 (1,588B-1,640B)
closed_at: int64 (1,588B-1,640B)
author_association: string (3 distinct values)
active_lock_reason: null
body: string (length 0-228k)
reactions: dict
timeline_url: string (length 67-70)
performed_via_github_app: null
draft: null
pull_request: null
is_pull_request: bool (1 distinct value)
https://api.github.com/repos/huggingface/datasets/issues/3113
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3113/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3113/comments
https://api.github.com/repos/huggingface/datasets/issues/3113/events
https://github.com/huggingface/datasets/issues/3113
1,030,667,547
I_kwDODunzps49br0b
3,113
Loading Data from HDF files
{ "login": "FeryET", "id": 30388648, "node_id": "MDQ6VXNlcjMwMzg4NjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/30388648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FeryET", "html_url": "https://github.com/FeryET", "followers_url": "https://api.github.com/users/FeryET/followers", "following_url": "https://api.github.com/users/FeryET/following{/other_user}", "gists_url": "https://api.github.com/users/FeryET/gists{/gist_id}", "starred_url": "https://api.github.com/users/FeryET/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FeryET/subscriptions", "organizations_url": "https://api.github.com/users/FeryET/orgs", "repos_url": "https://api.github.com/users/FeryET/repos", "events_url": "https://api.github.com/users/FeryET/events{/privacy}", "received_events_url": "https://api.github.com/users/FeryET/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,671,606,000
1,634,672,568,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** More often than not I come across big HDF datasets, and currently there is no straightforward way to feed them into a dataset. **Describe the solution you'd like** I would love to see a `from_h5` method that takes an interface implemented by the user describing how items are extracted from the file (in case of multiple HDF datasets containing elements like arrays, metadata, etc.). **Describe alternatives you've considered** Currently I manually load HDF files using `h5py` and implement the PyTorch dataset interface. For small HDF files I load them into a pandas dataframe and use the `from_pandas` function in the `datasets` package, but for big datasets this is not feasible. **Additional context** HDF files are widespread across different domains and are one of the go-to formats for many researchers, scientists, and engineers who work with numerical data. Given that `datasets`' use cases have outgrown NLP, it would make a lot of sense to focus on things like supporting HDF files.
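As a rough illustration of the workaround described above (not part of the original request), the sketch below loads a small HDF5 file through `h5py` and pandas before handing it to `Dataset.from_pandas`; the file name `data.h5`, the group key `table`, and the assumption that every member is a 1-D array of equal length are all hypothetical.

```python
# Sketch of the current workaround: only viable when the HDF5 contents fit in memory.
import h5py
import pandas as pd
from datasets import Dataset

with h5py.File("data.h5", "r") as f:
    # Read each HDF5 dataset under the (hypothetical) "table" group into a column.
    columns = {name: f["table"][name][:] for name in f["table"]}

df = pd.DataFrame(columns)
ds = Dataset.from_pandas(df)  # fine for small files; not feasible for large HDF data
```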
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3113/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3113/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3112
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3112/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3112/comments
https://api.github.com/repos/huggingface/datasets/issues/3112/events
https://github.com/huggingface/datasets/issues/3112
1,030,613,083
I_kwDODunzps49behb
3,112
OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB
{ "login": "BenoitDalFerro", "id": 69694610, "node_id": "MDQ6VXNlcjY5Njk0NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BenoitDalFerro", "html_url": "https://github.com/BenoitDalFerro", "followers_url": "https://api.github.com/users/BenoitDalFerro/followers", "following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}", "gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}", "starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions", "organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs", "repos_url": "https://api.github.com/users/BenoitDalFerro/repos", "events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}", "received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I am very unsure on why you tagged me here. I am not a maintainer of the Datasets library and have no idea how to help you.", "fixed", "Ok got it, tensor full of NaNs, cf.\r\n\r\n~\\anaconda3\\envs\\xxx\\lib\\site-packages\\datasets\\arrow_writer.py in write_examples_on_file(self)\r\n315 # This check fails with FloatArrays with nans, which is not what we want, so account for that:", "Actually this is is a live bug, documented yet still live so reopening" ]
1,634,667,701,000
1,634,669,549,000
null
NONE
null
## Describe the bug Despite having batches way under 2Gb when running `datasets.map()`, after processing correctly the data of the first batch without fuss and irrespective of writer_batch_size (say 2,4,8,16,32,64 and 128 in my case), it returns the following error : > OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB Note that I always run `batch_size=writer_batch_size` : ## Steps to reproduce the bug ```python datasets.map(lambda example : {"column_name" : function(arguments)}, batched=False, remove_columns = datasets.column_names, batch_size=batch_size, writer_batch_size=batch_size, disable_nullable=True, num_proc=None, desc="blablabla") ``` ## Introspecting CUDA memory during bug Placed within `function(arguments)` the following statement to introspect memory usage, merely a little over 1/4 of 2Gb `print(torch.cuda.memory_summary(device=device, abbreviated=False))` > |===========================================================================| | PyTorch CUDA memory summary, device ID 0 | |---------------------------------------------------------------------------| | CUDA OOMs: 0 | cudaMalloc retries: 0 | |===========================================================================| | Metric | Cur Usage | Peak Usage | Tot Alloc | Tot Freed | |---------------------------------------------------------------------------| | Allocated memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB | | from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB | | from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB | |---------------------------------------------------------------------------| | Active memory | 541418 KB | 545725 KB | 555695 KB | 14276 KB | | from large pool | 540672 KB | 544431 KB | 544431 KB | 3759 KB | | from small pool | 746 KB | 1714 KB | 11264 KB | 10517 KB | |---------------------------------------------------------------------------| | GPU reserved memory | 598016 KB | 598016 KB | 598016 KB | 0 B | | from large pool | 595968 KB | 595968 KB | 595968 KB | 0 B | | from small pool | 2048 KB | 2048 KB | 2048 KB | 0 B | |---------------------------------------------------------------------------| | Non-releasable memory | 36117 KB | 52292 KB | 274275 KB | 238158 KB | | from large pool | 34816 KB | 51537 KB | 261713 KB | 226897 KB | | from small pool | 1301 KB | 2045 KB | 12562 KB | 11261 KB | |---------------------------------------------------------------------------| | Allocations | 198 | 224 | 478 | 280 | | from large pool | 74 | 75 | 75 | 1 | | from small pool | 124 | 150 | 403 | 279 | |---------------------------------------------------------------------------| | Active allocs | 198 | 224 | 478 | 280 | | from large pool | 74 | 75 | 75 | 1 | | from small pool | 124 | 150 | 403 | 279 | |---------------------------------------------------------------------------| | GPU reserved segments | 21 | 21 | 21 | 0 | | from large pool | 20 | 20 | 20 | 0 | | from small pool | 1 | 1 | 1 | 0 | |---------------------------------------------------------------------------| | Non-releasable allocs | 18 | 23 | 166 | 148 | | from large pool | 17 | 18 | 19 | 2 | | from small pool | 1 | 6 | 147 | 146 | |===========================================================================| ## Expected results Efficiently process the datasets and write it down to disk. 
## Actual results -------------------------------------------------------------------------- OverflowError Traceback (most recent call last) ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2390 else: -> 2391 writer.write(example) 2392 else: ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write(self, example, key, writer_batch_size) 367 --> 368 self.write_examples_on_file() 369 ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self) 316 if not isinstance(pa_array[0], pa.lib.FloatScalar): --> 317 raise OverflowError( 318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format( OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB During handling of the above exception, another exception occurred: OverflowError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_16268/2456940807.py in <module> 3 #tracker = OfflineEmissionsTracker(country_iso_code="FRA", project_name='xxx'+time_stamp,output_dir='./codecarbon') 4 #tracker.start() ----> 5 process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection=['wikipedia'], from_scratch=True, 6 clean_sentences=False, negative_sampling=False, translate=False, tokenize=False, generate_embeddings=True, concatenate_embeddings=False, 7 max_sample=10000, padding='do_not_pad', truncation=True, cpu_batch_size=1000, gpu_batch_size=2, cpu_writer_batch_size=1000, gpu_writer_batch_size=2, disable_nullable=True, num_proc=None) # ~\xxx\xxx.py in process_datasets(source_datasets_paths, dataset_dir, LM_tokenizer, LMhead_model, datasets_selection, from_scratch, clean_sentences, translate, negative_sampling, tokenize, generate_embeddings, concatenate_embeddings, max_sample, padding, truncation, cpu_batch_size, gpu_batch_size, cpu_writer_batch_size, gpu_writer_batch_size, disable_nullable, num_proc) 481 for column in tqdm(dataset.column_names, desc=f'Processing column', leave=False): 482 if "xxx_" in column: --> 483 dataset = dataset.map(lambda example : 484 {"embeddings_"+str(column).replace("translated_",""):function(input_ids=example[column], 485 token_type_ids=example[column.replace("input_ids","token_type_ids")], ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2034 2035 if num_proc is None or num_proc == 1: -> 2036 return self._map_single( 2037 function=function, 2038 with_indices=with_indices, ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs) 501 self: "Dataset" = kwargs.pop("self") 502 # apply actual function --> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 505 for dataset in datasets: ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in wrapper(*args, **kwargs) 468 } 469 # 
apply actual function --> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 472 # re-apply format to the output ~\anaconda3\envs\xxx\lib\site-packages\datasets\fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2425 if update_data: 2426 if writer is not None: -> 2427 writer.finalize() 2428 if tmp_file is not None: 2429 tmp_file.close() ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in finalize(self, close_stream) 440 # Re-intializing to empty list for next batch 441 self.hkey_record = [] --> 442 self.write_examples_on_file() 443 if self.pa_writer is None: 444 if self._schema is not None: ~\anaconda3\envs\xxx\lib\site-packages\datasets\arrow_writer.py in write_examples_on_file(self) 315 # This check fails with FloatArrays with nans, which is not what we want, so account for that: 316 if not isinstance(pa_array[0], pa.lib.FloatScalar): --> 317 raise OverflowError( 318 "There was an overflow in the {}. Try to reduce writer_batch_size to have batches smaller than 2GB".format( 319 type(pa_array) OverflowError: There was an overflow in the <class 'pyarrow.lib.ListArray'>. Try to reduce writer_batch_size to have batches smaller than 2GB ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0 ##Next steps Testing on Linux. @albertvillanova
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3112/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3112/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3111
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3111/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3111/comments
https://api.github.com/repos/huggingface/datasets/issues/3111/events
https://github.com/huggingface/datasets/issues/3111
1,030,598,983
I_kwDODunzps49bbFH
3,111
concatenate_datasets removes ClassLabel typing.
{ "login": "Dref360", "id": 8976546, "node_id": "MDQ6VXNlcjg5NzY1NDY=", "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Dref360", "html_url": "https://github.com/Dref360", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "organizations_url": "https://api.github.com/users/Dref360/orgs", "repos_url": "https://api.github.com/users/Dref360/repos", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "received_events_url": "https://api.github.com/users/Dref360/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1" ]
1,634,666,731,000
1,634,827,821,000
1,634,827,821,000
CONTRIBUTOR
null
## Describe the bug When concatenating two datasets, we lose typing of ClassLabel columns. I can work on this if this is a legitimate bug, ## Steps to reproduce the bug ```python import datasets from datasets import Dataset, ClassLabel, Value, concatenate_datasets DS_LEN = 100 my_dataset = Dataset.from_dict( { "sentence": [f"{chr(i % 10)}" for i in range(DS_LEN)], "label": [i % 2 for i in range(DS_LEN)] } ) my_predictions = Dataset.from_dict( { "pred": [(i + 1) % 2 for i in range(DS_LEN)] } ) my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])})) print("Original") print(my_dataset) print(my_dataset.features) concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1) print("Concatenated") print(concat_ds) print(concat_ds.features) ``` ## Expected results The features of `concat_ds` should contain ClassLabel. ## Actual results On master, I get: ``` {'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)} ``` ## Environment info - `datasets` version: 1.14.1.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
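One possible interim workaround (not from the issue itself) is to re-apply the `ClassLabel` feature after concatenation; the sketch below reuses `concat_ds` and the label names from the reproduction snippet above.

```python
# Hypothetical workaround: re-cast the label column after concatenation so it is
# a ClassLabel again (reuses concat_ds from the reproduction snippet above).
from datasets import ClassLabel

concat_ds = concat_ds.cast_column("label", ClassLabel(2, names=["POS", "NEG"]))
print(concat_ds.features["label"])
```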
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3111/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3111/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3105/comments
https://api.github.com/repos/huggingface/datasets/issues/3105/events
https://github.com/huggingface/datasets/issues/3105
1,029,098,843
I_kwDODunzps49Vs1b
3,105
download_mode=`force_redownload` does not work on removed datasets
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[]
1,634,562,758,000
1,634,895,370,000
null
CONTRIBUTOR
null
## Describe the bug If a cached dataset is removed from the library, I don't see how to delete it programmatically. I thought that using `force_redownload` would try to refresh the cache, then raise an exception, but it reuses the cache instead. ## Steps to reproduce the bug _requires to already have `wit` in the cache_: see https://github.com/huggingface/datasets/pull/2981 ```python import datasets as ds dataset = ds.load_dataset("wit", split="train", download_mode='force_redownload') ``` ## Expected results It should raise an exception, since the dataset does not exist anymore. ## Actual results It uses the cached result ``` Using the latest cached version of the module from /home/slesage/.cache/huggingface/modules/datasets_modules/datasets/wit/107afbffd48e058b19101bddc47fbee25fa68eb6d50a733e262875f1285a5171 (last modified on Wed Sep 29 08:21:10 2021) since it couldn't be found locally at wit, or remotely on the Hugging Face Hub. ``` ## Environment info - `datasets` version: 1.13.4.dev0 - Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
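Until the loader refuses to reuse the stale module, one manual workaround (a sketch, not an official API) is to delete the cached dataset module whose path appears in the log above; the exact location depends on your cache settings.

```python
# Sketch: remove the cached "wit" dataset module so load_dataset can no longer
# fall back to it. The path mirrors the one printed in the message above and
# may differ if HF_HOME or the modules cache was customized.
import shutil
from pathlib import Path

module_cache = Path.home() / ".cache" / "huggingface" / "modules" / "datasets_modules" / "datasets" / "wit"
if module_cache.exists():
    shutil.rmtree(module_cache)
```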
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3105/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3105/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3104/comments
https://api.github.com/repos/huggingface/datasets/issues/3104/events
https://github.com/huggingface/datasets/issues/3104
1,029,080,412
I_kwDODunzps49VoVc
3,104
Missing Zenodo 1.13.3 release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Zenodo has fixed on their side the 1.13.3 release: https://zenodo.org/record/5589150" ]
1,634,561,838,000
1,634,908,945,000
1,634,908,944,000
MEMBER
null
After `datasets` 1.13.3 release, this does not appear in Zenodo releases: https://zenodo.org/record/5570305 TODO: - [x] Contact Zenodo support - [x] Check it is fixed
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3104/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3104/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3102
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3102/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3102/comments
https://api.github.com/repos/huggingface/datasets/issues/3102/events
https://github.com/huggingface/datasets/issues/3102
1,029,067,062
I_kwDODunzps49VlE2
3,102
Unsuitable project description in PyPI
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,561,100,000
1,634,561,996,000
1,634,561,996,000
MEMBER
null
Currently, the `datasets` project description shown on PyPI contains the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3102/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3102/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3099/comments
https://api.github.com/repos/huggingface/datasets/issues/3099/events
https://github.com/huggingface/datasets/issues/3099
1,028,338,078
I_kwDODunzps49SzGe
3,099
AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'
{ "login": "JTWang2000", "id": 49268567, "node_id": "MDQ6VXNlcjQ5MjY4NTY3", "avatar_url": "https://avatars.githubusercontent.com/u/49268567?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JTWang2000", "html_url": "https://github.com/JTWang2000", "followers_url": "https://api.github.com/users/JTWang2000/followers", "following_url": "https://api.github.com/users/JTWang2000/following{/other_user}", "gists_url": "https://api.github.com/users/JTWang2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/JTWang2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JTWang2000/subscriptions", "organizations_url": "https://api.github.com/users/JTWang2000/orgs", "repos_url": "https://api.github.com/users/JTWang2000/repos", "events_url": "https://api.github.com/users/JTWang2000/events{/privacy}", "received_events_url": "https://api.github.com/users/JTWang2000/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @JTWang2000, thanks for reporting.\r\n\r\nHowever, I cannot reproduce your reported bug:\r\n```python\r\n>>> from datasets import load_dataset\r\n\r\n>>> dataset = load_dataset(\"sst\", \"default\")\r\n>>> dataset\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 8544\r\n })\r\n validation: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 1101\r\n })\r\n test: Dataset({\r\n features: ['sentence', 'label', 'tokens', 'tree'],\r\n num_rows: 2210\r\n })\r\n})\r\n```\r\n\r\nMaybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n```\r\npip install -U huggingface_hub\r\n```", "Im facing the same issue. I did run the upgrade command but that doesnt seem to resolve the issue", "Hi @aneeshjain, could you please specify which `huggingface_hub` version you are using?\r\n\r\nBesides that, please run `datasets-cli env` and copy-and-paste its output below.", "The problem seems to be with the latest version of `datasets`. After running `pip install -U datasets huggingface_hub`, I get the following: \r\n\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/__init__.py\", line 37, in <module>\r\n from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/builder.py\", line 44, in <module>\r\n from .data_files import DataFilesDict, _sanitize_patterns\r\n File \"/opt/conda/lib/python3.6/site-packages/datasets/data_files.py\", line 122, in <module>\r\n allowed_extensions: Optional[list] = None,\r\nAttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo'\r\n````\r\nNote that pip reports the latest `datasets` version as \r\n```bash\r\n pip show datasets\r\nName: datasets\r\nVersion: 1.14.0\r\n```\r\nHowever, if I downgrade datasets with `pip install datasets==1.11.0`, things now work\r\n```bash\r\npython -c \"import huggingface_hub; print(f'hbvers={huggingface_hub.__version__}'); import datasets; print(f'dvers={datasets.__version__}')\"\r\nhbvers=0.0.8\r\ndvers=1.11.0\r\n````", "> Hi @JTWang2000, thanks for reporting.\r\n> \r\n> However, I cannot reproduce your reported bug:\r\n> \r\n> ```python\r\n> >>> from datasets import load_dataset\r\n> \r\n> >>> dataset = load_dataset(\"sst\", \"default\")\r\n> >>> dataset\r\n> DatasetDict({\r\n> train: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 8544\r\n> })\r\n> validation: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 1101\r\n> })\r\n> test: Dataset({\r\n> features: ['sentence', 'label', 'tokens', 'tree'],\r\n> num_rows: 2210\r\n> })\r\n> })\r\n> ```\r\n> \r\n> Maybe, the cause is that you have a quite old version of `huggingface_hub`. Could you please try to update it and confirm if the problem persists?\r\n> \r\n> ```\r\n> pip install -U huggingface_hub\r\n> ```\r\n\r\nMy problem solved after updating huggingface hub command. Thanks a lot and sorry for the late reply. 
", "@tjruwase, please note that versions of `datsets` and `huggingface_hub` must be compatible one with each other:\r\n- In `datasets` version `1.11.0`, the requirement on `huggingface_hub` was: `huggingface_hub<0.1.0`\r\n https://github.com/huggingface/datasets/blob/2cc00f372a96133e701275eec4d6b26d15257289/setup.py#L90\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was compatible\r\n- In `datasets` version `1.12.0`, the requirement on `huggingface_hub` was: `huggingface_hub>=0.0.14,<0.1.0`\r\n https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/setup.py#L104\r\n - Therefore, your installed `huggingface_hub` version `0.0.8` was no longer compatible \r\n- Currently, in `datasets` version `1.15.1`, the requirement on `huggingface_hub` is: `huggingface_hub>=0.1.0,<1.0.0`\r\n https://github.com/huggingface/datasets/blob/018100679d21cf27136f0eccb1c50e3a9c968ce2/setup.py#L102\r\n\r\n@JTWang2000, thanks for your answer. I close this issue then." ]
1,634,480,267,000
1,636,476,149,000
1,636,476,148,000
NONE
null
## Describe the bug When using `pip install datasets` or use `conda install -c huggingface -c conda-forge datasets` cannot install datasets ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("sst", "default") ``` ## Actual results --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-fbe7981e6e21> in <module> 1 import torch 2 import transformers ----> 3 from datasets import load_dataset 4 5 dataset = load_dataset("sst", "default") ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/__init__.py in <module> 35 from .arrow_reader import ArrowReader, ReadInstruction 36 from .arrow_writer import ArrowWriter ---> 37 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder 38 from .combine import interleave_datasets 39 from .dataset_dict import DatasetDict, IterableDatasetDict ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/builder.py in <module> 42 ) 43 from .arrow_writer import ArrowWriter, BeamWriter ---> 44 from .data_files import DataFilesDict, _sanitize_patterns 45 from .dataset_dict import DatasetDict, IterableDatasetDict 46 from .fingerprint import Hasher ~/miniforge3/envs/actor/lib/python3.8/site-packages/datasets/data_files.py in <module> 118 119 def _exec_patterns_in_dataset_repository( --> 120 dataset_info: huggingface_hub.hf_api.DatasetInfo, 121 patterns: List[str], 122 allowed_extensions: Optional[list] = None, AttributeError: module 'huggingface_hub.hf_api' has no attribute 'DatasetInfo' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: macOS-11.3.1-arm64-arm-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3099/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3099/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3097
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3097/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3097/comments
https://api.github.com/repos/huggingface/datasets/issues/3097/events
https://github.com/huggingface/datasets/issues/3097
1,027,750,811
I_kwDODunzps49Qjub
3,097
`ModuleNotFoundError: No module named 'fsspec.exceptions'`
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @VictorSanh.\r\n\r\nI'm fixing it." ]
1,634,326,478,000
1,634,543,514,000
1,634,543,514,000
MEMBER
null
## Describe the bug I keep runnig into a fsspec ModuleNotFound error ## Steps to reproduce the bug ```python >>> from datasets import get_dataset_infos 2021-10-15 15:25:37.863206: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-10-15 15:25:37.863252: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/__init__.py", line 37, in <module> from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 56, in <module> from .utils.streaming_download_manager import StreamingDownloadManager File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 11, in <module> from fsspec.exceptions import FSTimeoutError ModuleNotFoundError: No module named 'fsspec.exceptions' ``` Yet, I do have `fsspec`: ```bash hf@victor-scale:~/dev/promptsource$ pip show fsspec Name: fsspec Version: 2021.5.0 Summary: File-system specification Home-page: http://github.com/intake/filesystem_spec Author: None Author-email: None License: BSD Location: /home/hf/dev/promptsource/.venv/lib/python3.7/site-packages Requires: Required-by: datasets ``` With the same version of fsspec and `datasets==1.9.0`, I don't see this problem.... ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> I can't even run `datasets-cli env` actually.., but here's my env: - `datasets` version: 1.13.3 - Platform: Ubuntu 18.04 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3097/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3097/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3095
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3095/comments
https://api.github.com/repos/huggingface/datasets/issues/3095/events
https://github.com/huggingface/datasets/issues/3095
1,027,453,146
I_kwDODunzps49PbDa
3,095
`cast_column` makes audio decoding fail
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "cc @anton-l @albertvillanova ", "Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_dataset(\"arabic_speech_corpus\", split=\"train\")\r\nds = ds.cast_column(\"audio\", datasets.features.Audio(sampling_rate=16_000))\r\nprint(ds[0][\"audio\"])\r\n```\r\n\r\nI'm fixing it." ]
1,634,305,018,000
1,634,312,310,000
1,634,312,310,000
MEMBER
null
## Describe the bug After changing the sampling rate automatic decoding fails. ## Steps to reproduce the bug ```python from datasets import load_dataset import datasets ds = load_dataset("common_voice", "ab", split="train") ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000)) print(ds[0]["audio"]) # <- this fails currently ``` yields: ``` TypeError: forward() takes 2 positional arguments but 4 were given ``` ## Expected results no failure ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Copy-and-paste the text below in your GitHub issue. - `datasets` version: 1.13.2 (master) - Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3095/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3094
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3094/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3094/comments
https://api.github.com/repos/huggingface/datasets/issues/3094/events
https://github.com/huggingface/datasets/issues/3094
1,027,328,633
I_kwDODunzps49O8p5
3,094
Support loading a dataset from SQLite files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,634,295,521,000
1,634,295,521,000
null
MEMBER
null
As requested by @julien-c, we could eventually support loading a dataset from SQLite files, like it is the case for JSON/CSV files.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3094/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3094/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3093/comments
https://api.github.com/repos/huggingface/datasets/issues/3093/events
https://github.com/huggingface/datasets/issues/3093
1,027,262,124
I_kwDODunzps49Osas
3,093
Error loading json dataset with multiple splits if keys in nested dicts have a different order
{ "login": "dthulke", "id": 8331189, "node_id": "MDQ6VXNlcjgzMzExODk=", "avatar_url": "https://avatars.githubusercontent.com/u/8331189?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dthulke", "html_url": "https://github.com/dthulke", "followers_url": "https://api.github.com/users/dthulke/followers", "following_url": "https://api.github.com/users/dthulke/following{/other_user}", "gists_url": "https://api.github.com/users/dthulke/gists{/gist_id}", "starred_url": "https://api.github.com/users/dthulke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dthulke/subscriptions", "organizations_url": "https://api.github.com/users/dthulke/orgs", "repos_url": "https://api.github.com/users/dthulke/repos", "events_url": "https://api.github.com/users/dthulke/events{/privacy}", "received_events_url": "https://api.github.com/users/dthulke/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi, \r\n\r\neven Pandas, which is less strict compared to PyArrow when it comes to reading JSON, doesn't support different orderings:\r\n```python\r\nimport io\r\nimport pandas as pd\r\n\r\ns = \"\"\"\r\n{\"a\": {\"c\": 8, \"b\": 5}}\r\n{\"a\": {\"b\": 7, \"c\": 6}}\r\n\"\"\"\r\n\r\nbuffer = io.StringIO(s)\r\ndf = pd.read_json(buffer, lines=True)\r\n\r\nprint(df.shape[0]) # 0\r\n```\r\n\r\nSo we can't even fall back to Pandas in such cases.\r\n\r\nIt seems the only option is a script that recursively re-orders fields to enforce deterministic order:\r\n```python\r\nwith open(\"train.json\", \"r\") as fin:\r\n with open(\"train_reordered.json\", \"w\") as fout:\r\n for line in fin:\r\n obj_jsonl = json.loads(line.strip())\r\n fout.write(json.dumps(obj_jsonl, sort_keys=True) + \"\\n\")\r\n```" ]
1,634,290,405,000
1,636,550,434,000
null
NONE
null
## Describe the bug Loading a json dataset with multiple splits that have nested dicts with keys in different order results in the error below. If the keys in the nested dicts always have the same order or even if you just load a single split in which the nested dicts don't have the same order, everything works fine. ## Steps to reproduce the bug Create two json files: train.json ``` {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} ``` test.json ``` {"a": {"b": 1, "c": 2}} {"a": {"b": 3, "c": 4}} ``` ```python from datasets import load_dataset # Loading the files individually works (even though the keys in train.json don't have the same order) load_dataset('json', data_files={"test": "test.json"}) load_dataset('json', data_files={"train": "train.json"}) # Loading both splits fails load_dataset('json', data_files={"train": "train.json", "test": "test.json"}) ``` ## Expected results Loading both splits should not give an error whether the nested dicts are have the same order or not. ## Actual results ``` >>> load_dataset('json', data_files={"train": "train.json", "test": "test.json"}) Using custom data configuration default-f1bc76fd07398c4c Downloading and preparing dataset json/default to /home/dthulke/.cache/huggingface/datasets/json/default-f1bc76fd07398c4c/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426... 100%|████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 8839.42it/s] 100%|████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 477.82it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/builder.py", line 1159, in _prepare_split writer.write_table(table) File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1596, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 592, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 329, in pyarrow.lib.asarray File "pyarrow/table.pxi", line 277, in pyarrow.lib.ChunkedArray.cast File "/home/dthulke/venvs/venv_torch_transformers/lib/python3.6/site-packages/pyarrow/compute.py", line 297, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 527, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 337, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct ``` ## Environment info - `datasets` version: 1.13.2 - Platform: Linux-4.15.0-147-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3093/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3093/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3091
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3091/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3091/comments
https://api.github.com/repos/huggingface/datasets/issues/3091/events
https://github.com/huggingface/datasets/issues/3091
1,027,251,530
I_kwDODunzps49Op1K
3,091
`blog_authorship_corpus` is broken
{ "login": "fdtomasi", "id": 12514317, "node_id": "MDQ6VXNlcjEyNTE0MzE3", "avatar_url": "https://avatars.githubusercontent.com/u/12514317?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fdtomasi", "html_url": "https://github.com/fdtomasi", "followers_url": "https://api.github.com/users/fdtomasi/followers", "following_url": "https://api.github.com/users/fdtomasi/following{/other_user}", "gists_url": "https://api.github.com/users/fdtomasi/gists{/gist_id}", "starred_url": "https://api.github.com/users/fdtomasi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fdtomasi/subscriptions", "organizations_url": "https://api.github.com/users/fdtomasi/orgs", "repos_url": "https://api.github.com/users/fdtomasi/repos", "events_url": "https://api.github.com/users/fdtomasi/events{/privacy}", "received_events_url": "https://api.github.com/users/fdtomasi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @fdtomasi, thanks for reporting.\r\n\r\nYou are right: the original host data URL does no longer exist.\r\n\r\nI've contacted the authors of the dataset to ask them if they host this dataset in another URL.", "Hi, @fdtomasi, the URL is fixed.\r\n\r\nThe fix is already in our master branch and it will be accessible in our next release.\r\n\r\nIn the meantime, you can include the fix if you install the `datasets` library from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```", "Awesome thank you so much for the quick fix!" ]
1,634,289,640,000
1,634,648,770,000
1,634,647,839,000
NONE
null
## Describe the bug The dataset `blog_authorship_corpus` is broken. By bypassing the checksum checks, the loading does not return any error but the resulting dataset is empty. I suspect it is because the data download url is broken (http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip). ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("blog_authorship_corpus", split="train", download_mode='force_redownload') ``` ## Expected results No error. ## Actual results ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) /tmp/ipykernel_5237/1729238701.py in <module> 2 ds = load_dataset( 3 "blog_authorship_corpus", split="train", ----> 4 download_mode='force_redownload' 5 ) /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 1115 ignore_verifications=ignore_verifications, 1116 try_from_hf_gcs=try_from_hf_gcs, -> 1117 use_auth_token=use_auth_token, 1118 ) 1119 /opt/conda/lib/python3.7/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 635 if not downloaded_from_gcs: 636 self._download_and_prepare( --> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 638 ) 639 # Sync info /opt/conda/lib/python3.7/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 707 if verify_infos: 708 verify_checksums( --> 709 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 710 ) 711 /opt/conda/lib/python3.7/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://www.cs.biu.ac.il/~koppel/blogs/blogs.zip'] ``` ## Environment info - `datasets` version: 1.13.2 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3091/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3091/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3089/comments
https://api.github.com/repos/huggingface/datasets/issues/3089/events
https://github.com/huggingface/datasets/issues/3089
1,026,973,360
I_kwDODunzps49Nl6w
3,089
JNLPBA Dataset
{ "login": "sciarrilli", "id": 10460111, "node_id": "MDQ6VXNlcjEwNDYwMTEx", "avatar_url": "https://avatars.githubusercontent.com/u/10460111?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sciarrilli", "html_url": "https://github.com/sciarrilli", "followers_url": "https://api.github.com/users/sciarrilli/followers", "following_url": "https://api.github.com/users/sciarrilli/following{/other_user}", "gists_url": "https://api.github.com/users/sciarrilli/gists{/gist_id}", "starred_url": "https://api.github.com/users/sciarrilli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sciarrilli/subscriptions", "organizations_url": "https://api.github.com/users/sciarrilli/orgs", "repos_url": "https://api.github.com/users/sciarrilli/repos", "events_url": "https://api.github.com/users/sciarrilli/events{/privacy}", "received_events_url": "https://api.github.com/users/sciarrilli/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, id=None)\r\n```\r\n\r\n", "Since I cannot create a branch here is the updated code:\r\n\r\n```python\r\n\r\n# coding=utf-8\r\n# Copyright 2020 HuggingFace Datasets Authors.\r\n#\r\n# Licensed under the Apache License, Version 2.0 (the \"License\");\r\n# you may not use this file except in compliance with the License.\r\n# You may obtain a copy of the License at\r\n#\r\n# http://www.apache.org/licenses/LICENSE-2.0\r\n#\r\n# Unless required by applicable law or agreed to in writing, software\r\n# distributed under the License is distributed on an \"AS IS\" BASIS,\r\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\r\n# See the License for the specific language governing permissions and\r\n# limitations under the License.\r\n\r\n# Lint as: python3\r\n\"\"\"Introduction to the Bio-Entity Recognition Task at JNLPBA\"\"\"\r\n\r\nimport os\r\n\r\nimport datasets\r\n\r\n\r\nlogger = datasets.logging.get_logger(__name__)\r\n\r\n\r\n_CITATION = \"\"\"\\\r\n@inproceedings{kim2004introduction,\r\n title={Introduction to the bio-entity recognition task at JNLPBA},\r\n author={Kim, Jin-Dong and Ohta, Tomoko and Tsuruoka, Yoshimasa and Tateisi, Yuka and Collier, Nigel},\r\n booktitle={Proceedings of the international joint workshop on natural language processing in biomedicine and its applications},\r\n pages={70--75},\r\n year={2004},\r\n organization={Citeseer}\r\n}\r\n\"\"\"\r\n\r\n_DESCRIPTION = \"\"\"\\\r\nThe data came from the GENIA version 3.02 corpus (Kim et al., 2003). This was formed from a controlled search\r\non MEDLINE using the MeSH terms \u0018human\u0019, \u0018blood cells\u0019 and \u0018transcription factors\u0019. 
From this search 2,000 abstracts\r\nwere selected and hand annotated according to a small taxonomy of 48 classes based on a chemical classification.\r\nAmong the classes, 36 terminal classes were used to annotate the GENIA corpus.\r\n\"\"\"\r\n\r\n_HOMEPAGE = \"http://www.geniaproject.org/shared-tasks/bionlp-jnlpba-shared-task-2004\"\r\n_TRAIN_URL = \"http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Train/Genia4ERtraining.tar.gz\"\r\n_VAL_URL = 'http://www.nactem.ac.uk/GENIA/current/Shared-tasks/JNLPBA/Evaluation/Genia4ERtest.tar.gz'\r\n\r\n\r\n_URLS = {\r\n \"train\": _TRAIN_URL,\r\n \"val\": _VAL_URL,\r\n}\r\n\r\n_TRAIN_DIRECTORY = \"Genia4ERtraining\"\r\n_VAL_DIRECTORY = \"Genia4ERtest\"\r\n\r\n_TRAIN_FILE = \"Genia4ERtask1.iob2\"\r\n_VAL_FILE = \"Genia4EReval1.iob2\"\r\n\r\n\r\nclass JNLPBAConfig(datasets.BuilderConfig):\r\n \"\"\"BuilderConfig for JNLPBA\"\"\"\r\n\r\n def __init__(self, **kwargs):\r\n \"\"\"BuilderConfig for JNLPBA.\r\n Args:\r\n **kwargs: keyword arguments forwarded to super.\r\n \"\"\"\r\n super(JNLPBAConfig, self).__init__(**kwargs)\r\n\r\n\r\nclass JNLPBA(datasets.GeneratorBasedBuilder):\r\n \"\"\"JNLPBA dataset.\"\"\"\r\n\r\n BUILDER_CONFIGS = [\r\n JNLPBAConfig(name=\"jnlpba\", version=datasets.Version(\"1.0.0\"), description=\"JNLPBA dataset\"),\r\n ]\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n description=_DESCRIPTION,\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"ner_tags\": datasets.Sequence(\r\n datasets.features.ClassLabel(\r\n names=[\r\n 'O',\r\n 'B-DNA',\r\n 'I-DNA', \r\n 'B-RNA',\r\n 'I-RNA',\r\n 'B-cell_line',\r\n 'I-cell_line',\r\n 'B-cell_type',\r\n 'I-cell_type',\r\n 'B-protein',\r\n 'I-protein',\r\n ]\r\n )\r\n ),\r\n }\r\n ),\r\n supervised_keys=None,\r\n homepage=_HOMEPAGE,\r\n citation=_CITATION,\r\n )\r\n\r\n def _split_generators(self, dl_manager):\r\n downloaded_files = dl_manager.download_and_extract(_URLS)\r\n \r\n return [\r\n datasets.SplitGenerator(name=datasets.Split.TRAIN, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['train'], _TRAIN_FILE)}),\r\n datasets.SplitGenerator(name=datasets.Split.VALIDATION, \r\n gen_kwargs={\"filepath\": os.path.join(downloaded_files['val'], _VAL_FILE)})\r\n ]\r\n \r\n\r\n def _generate_examples(self, filepath):\r\n logger.info(\"⏳ Generating examples from = %s\", filepath)\r\n with open(filepath, encoding=\"utf-8\") as f:\r\n guid = 0\r\n tokens = []\r\n ner_tags = []\r\n for line in f:\r\n if line.startswith('###'):\r\n continue\r\n if line == '' or line == '\\n':\r\n if tokens:\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n guid += 1\r\n tokens = []\r\n ner_tags = []\r\n else:\r\n # tokens are tab separated\r\n splits = line.split(\"\\t\")\r\n #print(splits)\r\n #print(len(splits))\r\n if len(splits) < 2:\r\n splits = splits[0].split()\r\n tokens.append(splits[0])\r\n ner_tags.append(splits[1].rstrip())\r\n # last example\r\n yield guid, {\r\n \"id\": str(guid),\r\n \"tokens\": tokens,\r\n \"ner_tags\": ner_tags,\r\n }\r\n```" ]
1,634,260,562,000
1,634,891,037,000
1,634,891,037,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in the [script](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L81-L83) are: O, B, and I. The correct entities from the original data file are: ['O', 'B-DNA', 'I-DNA', 'B-RNA', 'I-RNA', 'B-cell_line', 'I-cell_line', 'B-cell_type', 'I-cell_type', 'B-protein', 'I-protein'] ## Actual results The dataset loader script needs to include the following NER names: ['O', 'B-DNA', 'I-DNA', 'B-RNA', 'I-RNA', 'B-cell_line', 'I-cell_line', 'B-cell_type', 'I-cell_type', 'B-protein', 'I-protein'] And the [data](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L46) that is being pulled has been modified from the original dataset and does not include the original NER tags. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3089/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3089/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3087
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3087/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3087/comments
https://api.github.com/repos/huggingface/datasets/issues/3087/events
https://github.com/huggingface/datasets/issues/3087
1,026,780,469
I_kwDODunzps49M201
3,087
Removing label column in a text classification dataset yields to errors
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,634,242,370,000
1,634,292,664,000
1,634,292,664,000
MEMBER
null
## Describe the bug This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error. To reproduce: ```py from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("imdb") raw_datasets = raw_datasets.remove_columns("label") model_checkpoint = "distilbert-base-cased" tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) context_length = 128 def tokenize_pad_and_truncate(texts): return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length) tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True) ``` Traceback: ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-1-ba61bb32f786> in <module> 12 return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length) 13 ---> 14 tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True) ~/git/datasets/src/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 500 desc=desc, 501 ) --> 502 for k, dataset in self.items() 503 } 504 ) ~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0) 500 desc=desc, 501 ) --> 502 for k, dataset in self.items() 503 } 504 ) ~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2051 new_fingerprint=new_fingerprint, 2052 disable_tqdm=disable_tqdm, -> 2053 desc=desc, 2054 ) 2055 else: ~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 501 self: "Dataset" = kwargs.pop("self") 502 # apply actual function --> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 505 for dataset in datasets: ~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 468 } 469 # apply actual function --> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 472 # re-apply format to the output ~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2243 if os.path.exists(cache_file_name) and load_from_cache_file: 2244 logger.warning("Loading cached processed dataset at %s", cache_file_name) -> 2245 info = self.info.copy() 2246 info.features = features 2247 info.task_templates = None ~/git/datasets/src/datasets/info.py in copy(self) 278 279 def copy(self) -> "DatasetInfo": --> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) 281 282 ~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes) ~/git/datasets/src/datasets/info.py in __post_init__(self) 177 for idx, template in enumerate(self.task_templates): 178 if isinstance(template, TextClassification): --> 179 labels = self.features[template.label_column].names 180 self.task_templates[idx] = TextClassification( 181 text_column=template.text_column, label_column=template.label_column, labels=labels KeyError: 'label' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3087/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3087/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3084/comments
https://api.github.com/repos/huggingface/datasets/issues/3084/events
https://github.com/huggingface/datasets/issues/3084
1,026,428,992
I_kwDODunzps49LhBA
3,084
VisibleDeprecationWarning when using `set_format("numpy")`
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "I just opened a PR and I verified that the code you provided doesn't show any deprecation warning :)" ]
1,634,219,581,000
1,634,918,654,000
1,634,918,654,000
CONTRIBUTOR
null
Code to reproduce: ``` from datasets import load_dataset dataset = load_dataset("glue", "mnli") from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained('distilbert-base-cased') def tokenize_function(dataset): return tokenizer(dataset['premise']) tokenized_datasets = dataset.map(tokenize_function, batched=True, remove_columns=dataset['train'].features) tokenized_datasets.set_format("numpy") tokenized_datasets['train'][5:8] ``` Outputs: ``` python3.9/site-packages/datasets/formatting/formatting.py:167: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray return np.array(array, copy=False, **self.np_array_kwargs) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3084/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3084/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3083/comments
https://api.github.com/repos/huggingface/datasets/issues/3083/events
https://github.com/huggingface/datasets/issues/3083
1,026,397,062
I_kwDODunzps49LZOG
3,083
Datasets with Audio feature raise error when loaded from cache due to _resampler parameter
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,217,833,000
1,634,224,420,000
1,634,224,420,000
MEMBER
null
## Describe the bug As reported by @patrickvonplaten, when loaded from the cache, datasets containing the Audio feature raise TypeError. ## Steps to reproduce the bug ```python from datasets import load_dataset # load first time works ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") # load from cache breaks ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: __init__() got an unexpected keyword argument '_resampler' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3083/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3083/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3080/comments
https://api.github.com/repos/huggingface/datasets/issues/3080/events
https://github.com/huggingface/datasets/issues/3080
1,026,380,626
I_kwDODunzps49LVNS
3,080
Error related to timeout keyword argument
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,217,058,000
1,634,222,391,000
1,634,222,391,000
MEMBER
null
## Describe the bug As reported by @patrickvonplaten, a TypeError is raised when trying to load a dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean") ``` ## Actual results ``` TypeError: dataset_info() got an unexpected keyword argument 'timeout' ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3080/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3080/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3076/comments
https://api.github.com/repos/huggingface/datasets/issues/3076/events
https://github.com/huggingface/datasets/issues/3076
1,026,113,484
I_kwDODunzps49KT_M
3,076
Error when loading a metric
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,200,167,000
1,634,202,895,000
1,634,202,895,000
MEMBER
null
## Describe the bug As reported by @sgugger, after last release, exception is thrown when loading a metric. ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("squad_v2") ``` ## Actual results ``` FileNotFoundError Traceback (most recent call last) <ipython-input-1-e612a8cab787> in <module> 1 from datasets import load_metric ----> 2 metric = load_metric("squad_v2") d:\projects\huggingface\datasets\src\datasets\load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, revision, script_version, **metric_init_kwargs) 1336 ) 1337 revision = script_version -> 1338 metric_module = metric_module_factory( 1339 path, revision=revision, download_config=download_config, download_mode=download_mode 1340 ).module_path d:\projects\huggingface\datasets\src\datasets\load.py in metric_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, **download_kwargs) 1237 if not isinstance(e1, FileNotFoundError): 1238 raise e1 from None -> 1239 raise FileNotFoundError( 1240 f"Couldn't find a metric script at {relative_to_absolute_path(combined_path)}. " 1241 f"Metric '{path}' doesn't exist on the Hugging Face Hub either." FileNotFoundError: Couldn't find a metric script at D:\projects\huggingface\datasets\squad_v2\squad_v2.py. Metric 'squad_v2' doesn't exist on the Hugging Face Hub either. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3076/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3076/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3073
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3073/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3073/comments
https://api.github.com/repos/huggingface/datasets/issues/3073/events
https://github.com/huggingface/datasets/issues/3073
1,025,718,469
I_kwDODunzps49IzjF
3,073
Import error installing with ppc64le
{ "login": "gcervantes8", "id": 21228908, "node_id": "MDQ6VXNlcjIxMjI4OTA4", "avatar_url": "https://avatars.githubusercontent.com/u/21228908?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gcervantes8", "html_url": "https://github.com/gcervantes8", "followers_url": "https://api.github.com/users/gcervantes8/followers", "following_url": "https://api.github.com/users/gcervantes8/following{/other_user}", "gists_url": "https://api.github.com/users/gcervantes8/gists{/gist_id}", "starred_url": "https://api.github.com/users/gcervantes8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gcervantes8/subscriptions", "organizations_url": "https://api.github.com/users/gcervantes8/orgs", "repos_url": "https://api.github.com/users/gcervantes8/repos", "events_url": "https://api.github.com/users/gcervantes8/events{/privacy}", "received_events_url": "https://api.github.com/users/gcervantes8/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This seems to be an issue with importing PyArrow so I posted the problem [here](https://issues.apache.org/jira/browse/ARROW-14323), and I'm closing this issue.\r\n" ]
1,634,161,043,000
1,634,229,346,000
1,634,229,208,000
NONE
null
## Describe the bug Installing the datasets library with a computer running with ppc64le seems to cause an issue when importing the datasets library. ``` python Python 3.6.13 | packaged by conda-forge | (default, Sep 23 2021, 07:37:44) [GCC 9.4.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Illegal instruction (core dumped) ``` Error when importing `Illegal instruction (core dumped)` ## Steps to reproduce the bug I get this error when installing the library by using conda. I can't install with pip I believe because pyarrow only has the ppc64le library on conda forge ``` conda create --name transformers_py36_v2 python=3.6 conda activate transformers_py36_v2 conda install datasets ``` ## Tracebacks conda create --name transformers_py36_v2 python=3.6 ``` Collecting package metadata (current_repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 4.9.2 latest version: 4.10.3 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2 added / updated specs: - python=3.6 The following NEW packages will be INSTALLED: _libgcc_mutex conda-forge/linux-ppc64le::_libgcc_mutex-0.1-conda_forge _openmp_mutex conda-forge/linux-ppc64le::_openmp_mutex-4.5-1_gnu ca-certificates conda-forge/linux-ppc64le::ca-certificates-2021.10.8-h1084571_0 certifi pkgs/main/linux-ppc64le::certifi-2020.12.5-py36h6ffa863_0 ld_impl_linux-ppc~ conda-forge/linux-ppc64le::ld_impl_linux-ppc64le-2.36.1-ha35d02b_2 libffi conda-forge/linux-ppc64le::libffi-3.4.2-h3b9df90_4 libgcc-ng conda-forge/linux-ppc64le::libgcc-ng-11.2.0-h7698a5e_11 libgomp conda-forge/linux-ppc64le::libgomp-11.2.0-h7698a5e_11 libstdcxx-ng conda-forge/linux-ppc64le::libstdcxx-ng-11.2.0-habdf983_11 libzlib conda-forge/linux-ppc64le::libzlib-1.2.11-h339bb43_1013 ncurses conda-forge/linux-ppc64le::ncurses-6.2-hea85c5d_4 openssl conda-forge/linux-ppc64le::openssl-1.1.1l-h4e0d66e_0 pip conda-forge/noarch::pip-21.3-pyhd8ed1ab_0 python conda-forge/linux-ppc64le::python-3.6.13-h57873ef_2_cpython readline conda-forge/linux-ppc64le::readline-8.1-h5c45dff_0 setuptools pkgs/main/linux-ppc64le::setuptools-58.0.4-py36h6ffa863_0 sqlite conda-forge/linux-ppc64le::sqlite-3.36.0-h4e2196e_2 tk conda-forge/linux-ppc64le::tk-8.6.11-h41c6715_1 wheel conda-forge/noarch::wheel-0.37.0-pyhd8ed1ab_1 xz conda-forge/linux-ppc64le::xz-5.2.5-h6eb9509_1 zlib conda-forge/linux-ppc64le::zlib-1.2.11-h339bb43_1013 Proceed ([y]/n)? y Preparing transaction: done Verifying transaction: done Executing transaction: done # # To activate this environment, use # # $ conda activate transformers_py36_v2 # # To deactivate an active environment, use # # $ conda deactivate ``` conda activate transformers_py36_v2 conda install datasets ``` Collecting package metadata (current_repodata.json): done Solving environment: failed with initial frozen solve. Retrying with flexible solve. Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source. Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. 
<== current version: 4.9.2 latest version: 4.10.3 Please update conda by running $ conda update -n base -c defaults conda ## Package Plan ## environment location: /p/home/gerryc/.conda/envs/transformers_py36_v2 added / updated specs: - datasets The following NEW packages will be INSTALLED: abseil-cpp conda-forge/linux-ppc64le::abseil-cpp-20210324.2-h3b9df90_0 aiohttp conda-forge/linux-ppc64le::aiohttp-3.7.4.post0-py36hc33305d_0 arrow-cpp conda-forge/linux-ppc64le::arrow-cpp-5.0.0-py36hf9cf308_8_cpu async-timeout conda-forge/noarch::async-timeout-3.0.1-py_1000 attrs conda-forge/noarch::attrs-21.2.0-pyhd8ed1ab_0 aws-c-cal conda-forge/linux-ppc64le::aws-c-cal-0.5.11-hb3fac3d_0 aws-c-common conda-forge/linux-ppc64le::aws-c-common-0.6.2-h4e0d66e_0 aws-c-event-stream conda-forge/linux-ppc64le::aws-c-event-stream-0.2.7-h76da5f2_13 aws-c-io conda-forge/linux-ppc64le::aws-c-io-0.10.5-hf6a6c7c_0 aws-checksums conda-forge/linux-ppc64le::aws-checksums-0.1.11-hfe76d68_7 aws-sdk-cpp conda-forge/linux-ppc64le::aws-sdk-cpp-1.8.186-h90855e8_3 brotlipy conda-forge/linux-ppc64le::brotlipy-0.7.0-py36hc33305d_1001 bzip2 conda-forge/linux-ppc64le::bzip2-1.0.8-h4e0d66e_4 c-ares conda-forge/linux-ppc64le::c-ares-1.17.2-h4e0d66e_0 cffi conda-forge/linux-ppc64le::cffi-1.14.6-py36h021ab3c_1 chardet conda-forge/linux-ppc64le::chardet-4.0.0-py36h270354c_1 colorama conda-forge/noarch::colorama-0.4.4-pyh9f0ad1d_0 cryptography conda-forge/linux-ppc64le::cryptography-3.4.7-py36hc71b123_0 dataclasses conda-forge/noarch::dataclasses-0.8-pyh787bdff_2 datasets conda-forge/noarch::datasets-1.12.1-pyhd8ed1ab_1 dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0 filelock conda-forge/noarch::filelock-3.3.0-pyhd8ed1ab_0 fsspec conda-forge/noarch::fsspec-2021.10.0-pyhd8ed1ab_0 gflags conda-forge/linux-ppc64le::gflags-2.2.2-hb209c28_1004 glog conda-forge/linux-ppc64le::glog-0.5.0-h4040248_0 grpc-cpp conda-forge/linux-ppc64le::grpc-cpp-1.40.0-h2bf711c_2 huggingface_hub conda-forge/noarch::huggingface_hub-0.0.19-pyhd8ed1ab_0 idna conda-forge/noarch::idna-2.10-pyh9f0ad1d_0 idna_ssl conda-forge/noarch::idna_ssl-1.0.0-0 importlib-metadata conda-forge/linux-ppc64le::importlib-metadata-4.8.1-py36h270354c_0 importlib_metadata conda-forge/noarch::importlib_metadata-4.8.1-hd8ed1ab_0 krb5 conda-forge/linux-ppc64le::krb5-1.19.2-haf43566_2 libblas conda-forge/linux-ppc64le::libblas-3.9.0-11_linuxppc64le_openblas libbrotlicommon conda-forge/linux-ppc64le::libbrotlicommon-1.0.9-h4e0d66e_5 libbrotlidec conda-forge/linux-ppc64le::libbrotlidec-1.0.9-h4e0d66e_5 libbrotlienc conda-forge/linux-ppc64le::libbrotlienc-1.0.9-h4e0d66e_5 libcblas conda-forge/linux-ppc64le::libcblas-3.9.0-11_linuxppc64le_openblas libcurl conda-forge/linux-ppc64le::libcurl-7.79.1-he415e40_1 libedit conda-forge/linux-ppc64le::libedit-3.1.20191231-h41a240f_2 libev conda-forge/linux-ppc64le::libev-4.33-h6eb9509_1 libevent conda-forge/linux-ppc64le::libevent-2.1.10-h97db324_4 libgfortran-ng conda-forge/linux-ppc64le::libgfortran-ng-11.2.0-hfdc3801_11 libgfortran5 conda-forge/linux-ppc64le::libgfortran5-11.2.0-he58fbb4_11 liblapack conda-forge/linux-ppc64le::liblapack-3.9.0-11_linuxppc64le_openblas libnghttp2 conda-forge/linux-ppc64le::libnghttp2-1.43.0-h42039ad_1 libopenblas conda-forge/linux-ppc64le::libopenblas-0.3.17-pthreads_h486567c_1 libprotobuf conda-forge/linux-ppc64le::libprotobuf-3.18.1-h690f14c_0 libssh2 conda-forge/linux-ppc64le::libssh2-1.10.0-ha5a9321_2 libthrift conda-forge/linux-ppc64le::libthrift-0.15.0-h54f692e_1 libutf8proc 
conda-forge/linux-ppc64le::libutf8proc-2.6.1-h4e0d66e_0 lz4-c conda-forge/linux-ppc64le::lz4-c-1.9.3-h3b9df90_1 multidict conda-forge/linux-ppc64le::multidict-5.2.0-py36hc33305d_0 multiprocess conda-forge/linux-ppc64le::multiprocess-0.70.12.2-py36hc33305d_0 numpy conda-forge/linux-ppc64le::numpy-1.19.5-py36h86665d4_1 orc conda-forge/linux-ppc64le::orc-1.7.0-hae6b4bd_0 packaging conda-forge/noarch::packaging-21.0-pyhd8ed1ab_0 pandas conda-forge/linux-ppc64le::pandas-1.1.5-py36hab1a6e6_0 parquet-cpp conda-forge/noarch::parquet-cpp-1.5.1-2 pyarrow conda-forge/linux-ppc64le::pyarrow-5.0.0-py36h7a46c7e_8_cpu pycparser conda-forge/noarch::pycparser-2.20-pyh9f0ad1d_2 pyopenssl conda-forge/noarch::pyopenssl-21.0.0-pyhd8ed1ab_0 pyparsing conda-forge/noarch::pyparsing-2.4.7-pyh9f0ad1d_0 pysocks conda-forge/linux-ppc64le::pysocks-1.7.1-py36h270354c_3 python-dateutil conda-forge/noarch::python-dateutil-2.8.2-pyhd8ed1ab_0 python-xxhash conda-forge/linux-ppc64le::python-xxhash-2.0.2-py36hc33305d_0 python_abi conda-forge/linux-ppc64le::python_abi-3.6-2_cp36m pytz conda-forge/noarch::pytz-2021.3-pyhd8ed1ab_0 pyyaml conda-forge/linux-ppc64le::pyyaml-5.4.1-py36hc33305d_1 re2 conda-forge/linux-ppc64le::re2-2021.09.01-h3b9df90_0 requests conda-forge/noarch::requests-2.25.1-pyhd3deb0d_0 s2n conda-forge/linux-ppc64le::s2n-1.0.10-h97db324_0 six conda-forge/noarch::six-1.16.0-pyh6c4a22f_0 snappy conda-forge/linux-ppc64le::snappy-1.1.8-hb209c28_3 tqdm conda-forge/noarch::tqdm-4.62.3-pyhd8ed1ab_0 typing-extensions conda-forge/noarch::typing-extensions-3.10.0.2-hd8ed1ab_0 typing_extensions conda-forge/noarch::typing_extensions-3.10.0.2-pyha770c72_0 urllib3 conda-forge/noarch::urllib3-1.26.7-pyhd8ed1ab_0 xxhash conda-forge/linux-ppc64le::xxhash-0.8.0-h4e0d66e_3 yaml conda-forge/linux-ppc64le::yaml-0.2.5-h6eb9509_0 yarl conda-forge/linux-ppc64le::yarl-1.6.3-py36hc33305d_2 zipp conda-forge/noarch::zipp-3.6.0-pyhd8ed1ab_0 zstd conda-forge/linux-ppc64le::zstd-1.5.0-h65c4b1a_0 The following packages will be UPDATED: certifi pkgs/main::certifi-2020.12.5-py36h6ff~ --> conda-forge::certifi-2021.5.30-py36h270354c_0 Proceed ([y]/n)? y Preparing transaction: done Verifying transaction: done Executing transaction: done ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Red Hat Enterprise Linux 8.2 (Ootpa) - Python version: 3.6 - PyArrow version: pyarrow - 5.0.0 - py36h7a46c7e_8_cpu - conda-forge Any help would be appreciated! I've been struggling on installing datasets on this machine.
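A minimal diagnostic sketch for this kind of crash, assuming (not confirmed by the report) that the illegal instruction comes from a compiled dependency such as pyarrow rather than from `datasets` itself; importing each candidate library in a fresh interpreter narrows down which wheel was built with unsupported CPU instructions:

```python
# Hypothetical diagnostic: run each import in a separate interpreter so a crash
# in one library does not mask the others. The candidate list is an assumption.
import subprocess
import sys

candidates = ["numpy", "pyarrow", "pandas", "datasets"]
for module in candidates:
    # A non-zero return code (or a signal) points to the module that crashes on import.
    result = subprocess.run([sys.executable, "-c", f"import {module}"])
    status = "OK" if result.returncode == 0 else f"exit code {result.returncode}"
    print(module, "->", status)
```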
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3073/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3073/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3071/comments
https://api.github.com/repos/huggingface/datasets/issues/3071/events
https://github.com/huggingface/datasets/issues/3071
1,024,893,493
I_kwDODunzps49FqI1
3,071
Custom plain text dataset, plain json dataset and plain csv dataset are removed from the datasets template folder
{ "login": "zixiliuUSC", "id": 49173327, "node_id": "MDQ6VXNlcjQ5MTczMzI3", "avatar_url": "https://avatars.githubusercontent.com/u/49173327?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zixiliuUSC", "html_url": "https://github.com/zixiliuUSC", "followers_url": "https://api.github.com/users/zixiliuUSC/followers", "following_url": "https://api.github.com/users/zixiliuUSC/following{/other_user}", "gists_url": "https://api.github.com/users/zixiliuUSC/gists{/gist_id}", "starred_url": "https://api.github.com/users/zixiliuUSC/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zixiliuUSC/subscriptions", "organizations_url": "https://api.github.com/users/zixiliuUSC/orgs", "repos_url": "https://api.github.com/users/zixiliuUSC/repos", "events_url": "https://api.github.com/users/zixiliuUSC/events{/privacy}", "received_events_url": "https://api.github.com/users/zixiliuUSC/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @zixiliuUSC, \r\n\r\nAs explained in the documentation (https://huggingface.co/docs/datasets/loading.html#json), we support loading any dataset in JSON (as well as CSV, text, Parquet) format:\r\n```python\r\nds = load_dataset('json', data_files='my_file.json')\r\n```" ]
1,634,110,330,000
1,634,113,624,000
1,634,113,623,000
NONE
null
## Adding a Dataset - **Name:** text, json, csv - **Description:** I am developing a customized dataset loading script. The main problem is that my custom dataset is separated into many files, and the only dataset loading template I found that can handle my situation is [https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py](https://github.com/huggingface/datasets/blob/1.2.1/datasets/json/json.py). I'm afraid these templates are too old to use. Could you re-add these three templates (text, json, csv) to the current master branch?
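As a sketch of the suggestion in the comment above, the packaged `json` builder can also take several files at once, which may already cover the many-files case described here (the file names below are placeholders):

```python
from datasets import load_dataset

# Hypothetical file names; replace with the actual shards of the custom dataset.
data_files = {
    "train": ["train_part_0.json", "train_part_1.json"],
    "validation": ["valid.json"],
}
ds = load_dataset("json", data_files=data_files)
print(ds)
```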
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3071/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3071/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3069
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3069/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3069/comments
https://api.github.com/repos/huggingface/datasets/issues/3069/events
https://github.com/huggingface/datasets/issues/3069
1,024,818,680
I_kwDODunzps49FX34
3,069
CI fails on Windows with FileNotFoundError when setting up the s3_base fixture
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,634,104,346,000
1,634,112,349,000
1,634,107,788,000
MEMBER
null
## Describe the bug After commit 9353fc863d0c99ab0427f83cc5a4f04fcf52f1df, the CI fails on Windows with FileNotFoundError when stting up s3_base fixture. See: https://app.circleci.com/pipelines/github/huggingface/datasets/8151/workflows/5db8d154-badd-4d3d-b202-ca7a318997a2/jobs/50321 Error summary: ``` ERROR tests/test_arrow_dataset.py::test_dummy_dataset_serialize_s3 - FileNotF... ERROR tests/test_dataset_dict.py::test_dummy_dataset_serialize_s3 - FileNotFo... ``` Stack trace: ``` ______________ ERROR at setup of test_dummy_dataset_serialize_s3 ______________ [gw0] win32 -- Python 3.6.8 C:\tools\miniconda3\python.exe @pytest.fixture() def s3_base(): # writable local S3 system import shlex import subprocess # Mocked AWS Credentials for moto. old_environ = os.environ.copy() os.environ.update(S3_FAKE_ENV_VARS) > proc = subprocess.Popen(shlex.split("moto_server s3 -p %s" % s3_port)) tests\s3_fixtures.py:32: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\tools\miniconda3\lib\subprocess.py:729: in __init__ restore_signals, start_new_session) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <subprocess.Popen object at 0x0000012BB8A4B908> args = 'moto_server s3 -p 5555', executable = None, preexec_fn = None close_fds = True, pass_fds = (), cwd = None, env = None startupinfo = <subprocess.STARTUPINFO object at 0x0000012BB8177630> creationflags = 0, shell = False, p2cread = -1, p2cwrite = -1, c2pread = -1 c2pwrite = -1, errread = -1, errwrite = -1, unused_restore_signals = True unused_start_new_session = False def _execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, unused_restore_signals, unused_start_new_session): """Execute program (MS Windows version)""" assert not pass_fds, "pass_fds not supported on Windows." if not isinstance(args, str): args = list2cmdline(args) # Process startup details if startupinfo is None: startupinfo = STARTUPINFO() if -1 not in (p2cread, c2pwrite, errwrite): startupinfo.dwFlags |= _winapi.STARTF_USESTDHANDLES startupinfo.hStdInput = p2cread startupinfo.hStdOutput = c2pwrite startupinfo.hStdError = errwrite if shell: startupinfo.dwFlags |= _winapi.STARTF_USESHOWWINDOW startupinfo.wShowWindow = _winapi.SW_HIDE comspec = os.environ.get("COMSPEC", "cmd.exe") args = '{} /c "{}"'.format (comspec, args) # Start the process try: hp, ht, pid, tid = _winapi.CreateProcess(executable, args, # no special security None, None, int(not close_fds), creationflags, env, os.fspath(cwd) if cwd is not None else None, > startupinfo) E FileNotFoundError: [WinError 2] The system cannot find the file specified C:\tools\miniconda3\lib\subprocess.py:1017: FileNotFoundError ```
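One possible way to make the fixture more robust on Windows, sketched here as an assumption rather than the fix that was actually applied, is to resolve the `moto_server` executable explicitly before calling `subprocess.Popen`, since `FileNotFoundError: [WinError 2]` usually means the command could not be found:

```python
import shutil
import subprocess

def start_moto_server(s3_port: int) -> subprocess.Popen:
    # Resolve the executable first so a missing installation fails with a clear
    # error message instead of WinError 2 deep inside subprocess.
    exe = shutil.which("moto_server")
    if exe is None:
        raise RuntimeError("moto_server executable not found on PATH")
    return subprocess.Popen([exe, "s3", "-p", str(s3_port)])
```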
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3069/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3069/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3064
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3064/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3064/comments
https://api.github.com/repos/huggingface/datasets/issues/3064/events
https://github.com/huggingface/datasets/issues/3064
1,023,900,075
I_kwDODunzps49B3mr
3,064
Make `interleave_datasets` more robust
{ "login": "sbmaruf", "id": 32699797, "node_id": "MDQ6VXNlcjMyNjk5Nzk3", "avatar_url": "https://avatars.githubusercontent.com/u/32699797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sbmaruf", "html_url": "https://github.com/sbmaruf", "followers_url": "https://api.github.com/users/sbmaruf/followers", "following_url": "https://api.github.com/users/sbmaruf/following{/other_user}", "gists_url": "https://api.github.com/users/sbmaruf/gists{/gist_id}", "starred_url": "https://api.github.com/users/sbmaruf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sbmaruf/subscriptions", "organizations_url": "https://api.github.com/users/sbmaruf/orgs", "repos_url": "https://api.github.com/users/sbmaruf/repos", "events_url": "https://api.github.com/users/sbmaruf/events{/privacy}", "received_events_url": "https://api.github.com/users/sbmaruf/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,634,049,293,000
1,634,049,565,000
null
NONE
null
**Is your feature request related to a problem? Please describe.** Right now there are a few hiccups when using `interleave_datasets`. The interleaved dataset iterates only until the smallest dataset exhausts its iterator, so larger datasets may never complete a full epoch of iteration. This also creates problems for epoch accounting, since there is no way to track how many epochs each dataset in `interleave_datasets` has completed. **Describe the solution you'd like** For the `interleave_datasets` module: - [ ] Add a boolean argument `--stop-iter` to `interleave_datasets` that controls whether a dataset may iterate an unlimited number of times; with `--stop-iter=False`, it should not raise a `StopIteration` exception. - [ ] Add an internal list variable `iter_cnt` that tracks how many times (in steps/epochs) each dataset has iterated at a given point. - [ ] Add an argument `--max-iter` (list type) that specifies the maximum number of times each dataset may iterate. After one dataset completes its `--max-iter`, the other datasets should continue sampling, and only when all datasets have finished their respective `--max-iter` should `StopIteration` be raised. Note: I'm new to the `datasets` API; maybe these features already exist. Since multitask training is one of the latest trends, I believe this feature would make the `datasets` API more popular. @lhoestq
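A rough workaround sketch for the behaviour requested above, written in plain Python rather than as an existing `datasets` argument: each source is cycled manually so interleaving keeps sampling until every dataset has been seen at least `max_iter[i]` times.

```python
import itertools
import random

def interleave_with_cycling(datasets_list, max_iter, seed=0):
    """Yield interleaved examples, cycling each source until all have done max_iter[i] passes.

    Note: itertools.cycle caches every element it sees, so this sketch is only
    suitable for datasets that fit in memory.
    """
    rng = random.Random(seed)
    iterators = [itertools.cycle(d) for d in datasets_list]  # never raises StopIteration
    lengths = [len(d) for d in datasets_list]
    passes = [0] * len(datasets_list)
    seen = [0] * len(datasets_list)
    while any(p < m for p, m in zip(passes, max_iter)):
        i = rng.randrange(len(datasets_list))
        yield next(iterators[i])
        seen[i] += 1
        if seen[i] % lengths[i] == 0:
            passes[i] += 1
```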
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3064/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3064/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3063
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3063/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3063/comments
https://api.github.com/repos/huggingface/datasets/issues/3063/events
https://github.com/huggingface/datasets/issues/3063
1,023,588,297
I_kwDODunzps49ArfJ
3,063
Windows CI is unable to test streaming properly because of SSL issues
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
open
false
null
[]
null
[ "I think this problem is already fixed:\r\n```python\r\nIn [4]: import fsspec\r\n ...:\r\n ...: url = \"https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes\"\r\n ...:\r\n ...: fsspec.open(url).open()\r\nOut[4]: <File-like object HTTPFileSystem, https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattribu\r\n```", "No I'm still having this issue on my windows, and so does the CI" ]
1,634,031,220,000
1,634,663,512,000
null
MEMBER
null
In https://github.com/huggingface/datasets/pull/3041 the windows tests were skipped because of SSL issues with moon-staging.huggingface.co:443 The issue appears only on windows with asyncio. On Linux it works. With requests it works as well. And with the production environment huggingface.co it also works. to reproduce on windows: ```python import fsspec # use any URL to a file in a dataset repo url = "https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes" fsspec.open(url).open() ``` raises ```python FileNotFoundError: https://moon-staging.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/my-dataset-16242824690709/resolve/main/.gitattributes ``` because of ```python aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')] ```
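A small diagnostic sketch (standard library only) that can help tell apart an aiohttp-specific problem from the local CA store genuinely rejecting the certificate; the host below is the one from the error message:

```python
import socket
import ssl

# Host taken from the error message above; this only exercises the TLS handshake.
host = "moon-staging.huggingface.co"
ctx = ssl.create_default_context()
try:
    with socket.create_connection((host, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            print("handshake OK, peer certificate expires:", tls.getpeercert()["notAfter"])
except ssl.SSLCertVerificationError as err:
    print("verification fails outside aiohttp as well:", err)
```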
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3063/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3063/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3061/comments
https://api.github.com/repos/huggingface/datasets/issues/3061/events
https://github.com/huggingface/datasets/issues/3061
1,023,103,119
I_kwDODunzps48-1CP
3,061
Feature request: add leave=True to dataset.map to enable nested tqdm bars (and, whilst we're at it, a way to access the underlying tqdm directly?)
{ "login": "BenoitDalFerro", "id": 69694610, "node_id": "MDQ6VXNlcjY5Njk0NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BenoitDalFerro", "html_url": "https://github.com/BenoitDalFerro", "followers_url": "https://api.github.com/users/BenoitDalFerro/followers", "following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}", "gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}", "starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions", "organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs", "repos_url": "https://api.github.com/users/BenoitDalFerro/repos", "events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}", "received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "@lhoestq, @albertvillanova can we have `**tqdm_kwargs` in `map`? If there are any fields that are important to our tqdm (like iterable or unit), we can pop them before initialising the tqdm object so as to avoid duplicity.", "Hi ! Sounds like a good idea :)\r\n\r\nAlso I think it would be better to have this as an actual parameters instead of kwargs to make it clearer" ]
1,633,985,389,000
1,634,895,250,000
null
NONE
null
**A clear and concise description of what you want to happen.** It would be very nice to be able to nest HuggingFace `Datasets.map()` progress bars within a larger progress-bar hierarchy, and whilst we're at it, why not other functions too. **Describe alternatives you've considered** By the way, is there not a way to directly interact with the underlying tqdm module, e.g. via `**kwargs`? **Additional context** This furthers the tqdm integration of #2374 and huggingface/transformers#11797, solved by huggingface/transformers#12226, which exposed the tqdm description as `desc=`. @sgugger @bhavitvyamalik
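For context, a small sketch of how nested bars behave in plain tqdm, which is the behaviour this request would like `map` to expose via `leave=` (and possibly `position=`):

```python
from tqdm.auto import trange

# The outer bar stays on screen; inner bars are cleared once finished thanks to leave=False.
for epoch in trange(3, desc="datasets", position=0):
    for step in trange(100, desc="map", position=1, leave=False):
        pass  # work on one example/batch here
```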
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3061/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3061/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3060
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3060/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3060/comments
https://api.github.com/repos/huggingface/datasets/issues/3060/events
https://github.com/huggingface/datasets/issues/3060
1,022,936,396
I_kwDODunzps48-MVM
3,060
load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached"
{ "login": "RylanSchaeffer", "id": 8942987, "node_id": "MDQ6VXNlcjg5NDI5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RylanSchaeffer", "html_url": "https://github.com/RylanSchaeffer", "followers_url": "https://api.github.com/users/RylanSchaeffer/followers", "following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}", "gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}", "starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions", "organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs", "repos_url": "https://api.github.com/users/RylanSchaeffer/repos", "events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}", "received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload? Please use:\r\n```python\r\ndataset = load_dataset(\"openwebtext\", download_mode=\"FORCE_REDOWNLOAD\")\r\n```\r\n\r\nLet me know if the problem persists.", "I close this issue for the moment. Feel free to re-open it again if the problem persists." ]
1,633,971,927,000
1,635,400,341,000
1,635,400,341,000
NONE
null
## Describe the bug When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error. ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('openwebtext') ``` ## Expected results I expect the `dataset` variable to be properly constructed. ## Actual results ``` File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset dataset_str, File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset use_auth_token=use_auth_token, File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators dl_dir = dl_manager.download_and_extract(_URL) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path output_path, force_extract=download_config.force_extract File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract self.extractor.extract(input_path, output_path, extractor=extractor) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract return extractor.extract(input_path, output_path) File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract tar_file.extractall(output_path) File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall numeric_owner=numeric_owner) File "/usr/lib/python3.6/tarfile.py", line 2052, in extract numeric_owner=numeric_owner) File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member self.makefile(tarinfo, targetpath) File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile copyfileobj(source, target, tarinfo.size, ReadError, bufsize) File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj buf = src.read(bufsize) File "/usr/lib/python3.6/lzma.py", line 200, in read return self._buffer.read(size) File "/usr/lib/python3.6/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/usr/lib/python3.6/_compression.py", line 99, in read raise EOFError("Compressed file ended before the " python-BaseException 
EOFError: Compressed file ended before the end-of-stream marker was reached ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial - Python version: 3.6.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3060/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3060/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3058/comments
https://api.github.com/repos/huggingface/datasets/issues/3058/events
https://github.com/huggingface/datasets/issues/3058
1,022,612,664
I_kwDODunzps4889S4
3,058
Datasets wikipedia and bookcorpusopen cannot be fetched from the dataloader.
{ "login": "hobbitlzy", "id": 35392624, "node_id": "MDQ6VXNlcjM1MzkyNjI0", "avatar_url": "https://avatars.githubusercontent.com/u/35392624?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hobbitlzy", "html_url": "https://github.com/hobbitlzy", "followers_url": "https://api.github.com/users/hobbitlzy/followers", "following_url": "https://api.github.com/users/hobbitlzy/following{/other_user}", "gists_url": "https://api.github.com/users/hobbitlzy/gists{/gist_id}", "starred_url": "https://api.github.com/users/hobbitlzy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hobbitlzy/subscriptions", "organizations_url": "https://api.github.com/users/hobbitlzy/orgs", "repos_url": "https://api.github.com/users/hobbitlzy/repos", "events_url": "https://api.github.com/users/hobbitlzy/events{/privacy}", "received_events_url": "https://api.github.com/users/hobbitlzy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! I think this issue is more related to the `transformers` project. Could you open an issue on https://github.com/huggingface/transformers ?\r\n\r\nAnyway I think the issue could be that both wikipedia and bookcorpusopen have an additional \"title\" column, contrary to wikitext which only has a \"text\" column. After calling `load_dataset`, can you try doing `dataset = dataset.remove_columns(\"title\")` ?", "Removing the \"title\" column works! Thanks for your advice.\r\n\r\nMaybe I should still create an issue to `transformers' to mark this solution?" ]
1,633,953,299,000
1,634,182,083,000
null
NONE
null
## Describe the bug I have used the previous version of `transformers` and `datasets`. The dataset `wikipedia` can be successfully used. Recently, I upgrade them to the newest version and find it raises errors. I also tried other datasets. The `wikitext` works and the `bookcorpusopen` raises the same errors as `wikipedia`. ## Steps to reproduce the bug Run the `run_mlm_no_trainer.py` and the given script on this [link](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling). Change the dataset from wikitext to wikipedia or bookcorpusopen. BTW, the library transformers is of version 4.11.3. ## Expected results The data batchs are fetched from the data loader and train. ## Actual results The first time to fetch data batch occurs error. `Traceback (most recent call last): File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors tensor = as_tensor(value) ValueError: too many dimensions 'str' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "src/original_run_mlm_no_trainer.py", line 528, in <module> main() File "src/original_run_mlm_no_trainer.py", line 488, in main for step, batch in enumerate(train_dataloader): File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/accelerate/data_loader.py", line 303, in __iter__ for batch in super().__iter__(): File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__ data = self._next_data() File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 557, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch return self.collate_fn(data) File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 41, in __call__ return self.torch_call(features) File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/data/data_collator.py", line 671, in torch_call batch = self.tokenizer.pad(examples, return_tensors="pt", pad_to_multiple_of=self.pad_to_multiple_of) File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2774, in pad return BatchEncoding(batch_outputs, tensor_type=return_tensors) File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 210, in __init__ self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis) File "/home/zyli/anaconda3/envs/LatestStacking/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 722, in convert_to_tensors "Unable to create tensor, you should probably activate truncation and/or padding " ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. ` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Linux-5.8.0-59-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.6 - PyArrow version: 5.0.0
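Following the suggestion in the comment above, a sketch of how the extra column could be dropped before the data collator sees it (dataset names as in the report; whether this fully fixes the script is an assumption):

```python
from datasets import load_dataset

raw = load_dataset("bookcorpusopen", split="train")
# wikipedia and bookcorpusopen carry an extra string column ("title") that the MLM
# data collator cannot turn into tensors; keep only the "text" column before tokenization.
raw = raw.remove_columns([c for c in raw.column_names if c != "text"])
print(raw.column_names)
```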
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3058/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3058/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3057
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3057/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3057/comments
https://api.github.com/repos/huggingface/datasets/issues/3057/events
https://github.com/huggingface/datasets/issues/3057
1,022,508,315
I_kwDODunzps488j0b
3,057
Error in per class precision computation
{ "login": "tidhamecha2", "id": 38906722, "node_id": "MDQ6VXNlcjM4OTA2NzIy", "avatar_url": "https://avatars.githubusercontent.com/u/38906722?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tidhamecha2", "html_url": "https://github.com/tidhamecha2", "followers_url": "https://api.github.com/users/tidhamecha2/followers", "following_url": "https://api.github.com/users/tidhamecha2/following{/other_user}", "gists_url": "https://api.github.com/users/tidhamecha2/gists{/gist_id}", "starred_url": "https://api.github.com/users/tidhamecha2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tidhamecha2/subscriptions", "organizations_url": "https://api.github.com/users/tidhamecha2/orgs", "repos_url": "https://api.github.com/users/tidhamecha2/repos", "events_url": "https://api.github.com/users/tidhamecha2/events{/privacy}", "received_events_url": "https://api.github.com/users/tidhamecha2/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @tidhamecha2, thanks for reporting.\r\n\r\nIndeed, we fixed this issue just one week ago: #3008\r\n\r\nThe fix will be included in our next version release.\r\n\r\nIn the meantime, you can incorporate the fix by installing `datasets` from the master branch:\r\n```\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasest\r\n```\r\nor\r\n```\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```" ]
1,633,946,719,000
1,633,947,464,000
1,633,947,376,000
NONE
null
## Describe the bug When trying to get the per class precision values by providing `average=None`, following error is thrown `ValueError: can only convert an array of size 1 to a Python scalar` ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric precision_metric = load_metric("precision") predictions = [0, 2, 1, 0, 0, 1] references = [0, 1, 2, 0, 1, 2] results = precision_metric.compute(predictions=predictions, references=references, average=None) ``` ## Expected results ` {'precision': array([0.66666667, 0. , 0. ])}` as per https://github.com/huggingface/datasets/blob/master/metrics/precision/precision.py ## Actual results ``` output = self._compute(predictions=predictions, references=references, **kwargs) File "~/.cache/huggingface/modules/datasets_modules/metrics/precision/94709a71c6fe37171ef49d3466fec24dee9a79846c9f176dff66a649e9811690/precision.py", line 110, in _compute sample_weight=sample_weight, ValueError: can only convert an array of size 1 to a Python scalar ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: linux - Python version: 3.6.9 - PyArrow version: 5.0.0
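Until the fix lands in a release (see the comment above), a possible workaround sketch is to call scikit-learn directly, which is what the `precision` metric wraps:

```python
from sklearn.metrics import precision_score

predictions = [0, 2, 1, 0, 0, 1]
references = [0, 1, 2, 0, 1, 2]
# average=None returns one precision value per class instead of a single scalar.
print(precision_score(references, predictions, average=None))
# expected: array([0.66666667, 0.        , 0.        ])
```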
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3057/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3057/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3055/comments
https://api.github.com/repos/huggingface/datasets/issues/3055/events
https://github.com/huggingface/datasets/issues/3055
1,022,319,238
I_kwDODunzps4871qG
3,055
CI test suite fails after meteor metric update
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,633,934,232,000
1,633,937,431,000
1,633,937,431,000
MEMBER
null
## Describe the bug CI test suite fails: https://app.circleci.com/pipelines/github/huggingface/datasets/8110/workflows/f059ba43-9154-4632-bebb-82318447ddc9/jobs/50010 Stack trace: ``` ___________________ LocalMetricTest.test_load_metric_meteor ____________________ [gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 self = <tests.test_metric_common.LocalMetricTest testMethod=test_load_metric_meteor> metric_name = 'meteor' def test_load_metric(self, metric_name): doctest.ELLIPSIS_MARKER = "[...]" metric_module = importlib.import_module(datasets.load.prepare_module(os.path.join("metrics", metric_name))[0]) metric = datasets.load.import_main_class(metric_module.__name__, dataset=False) # check parameters parameters = inspect.signature(metric._compute).parameters self.assertTrue("predictions" in parameters) self.assertTrue("references" in parameters) self.assertTrue(all([p.kind != p.VAR_KEYWORD for p in parameters.values()])) # no **kwargs # run doctest with self.patch_intensive_calls(metric_name, metric_module.__name__): with self.use_local_metrics(): > results = doctest.testmod(metric_module, verbose=True, raise_on_error=True) tests/test_metric_common.py:75: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1951: in testmod runner.run(test) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1839: in run r = DocTestRunner.run(self, test, compileflags, out, False) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1476: in run return self.__run(test, compileflags, out) ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1382: in __run exception) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <doctest.DebugRunner object at 0x7f4c26bd3da0> out = <built-in method write of _io.TextIOWrapper object at 0x7f51a21852d0> test = <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Mete...ets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)> example = <doctest.Example object at 0x7f4c26bd3eb8> exc_info = (<class 'TypeError'>, TypeError('"hypothesis" expects pre-tokenized hypothesis (Iterable[str]): It is a guide to action which ensures that the military always obeys the commands of the party',), <traceback object at 0x7f4cd01afec8>) def report_unexpected_exception(self, out, test, example, exc_info): > raise UnexpectedException(test, example, exc_info) E doctest.UnexpectedException: <DocTest datasets_modules.datasets.meteor.6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7.meteor.Meteor from /tmp/pytest-of-circleci/pytest-0/popen-gw1/cache/modules/datasets_modules/datasets/meteor/6201bb45d2c144ea7963680949d20f523d74a741fa0f8a806f836e6caa5245d7/meteor.py:87 (5 examples)> ../.pyenv/versions/3.6.15/lib/python3.6/doctest.py:1845: UnexpectedException ```
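The failing doctest error ('"hypothesis" expects pre-tokenized hypothesis') suggests that newer NLTK versions want token lists instead of raw strings; a hedged sketch of the call shape the metric update presumably has to adopt (the NLTK version behaviour and required downloads are assumptions):

```python
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR relies on WordNet synonym matching
nltk.download("punkt", quiet=True)    # tokenizer model used by word_tokenize

reference = "It is a guide to action that ensures that the military will forever heed Party commands"
hypothesis = "It is a guide to action which ensures that the military always obeys the commands of the party"

# Newer nltk releases require pre-tokenized inputs (lists of tokens), not raw strings.
score = meteor_score([nltk.word_tokenize(reference)], nltk.word_tokenize(hypothesis))
print(round(score, 4))
```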
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3055/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3055/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3053
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3053/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3053/comments
https://api.github.com/repos/huggingface/datasets/issues/3053/events
https://github.com/huggingface/datasets/issues/3053
1,022,076,905
I_kwDODunzps4866fp
3,053
load_dataset('the_pile_openwebtext2') produces ArrowInvalid, value too large to fit in C integer type
{ "login": "davidbau", "id": 3458792, "node_id": "MDQ6VXNlcjM0NTg3OTI=", "avatar_url": "https://avatars.githubusercontent.com/u/3458792?v=4", "gravatar_id": "", "url": "https://api.github.com/users/davidbau", "html_url": "https://github.com/davidbau", "followers_url": "https://api.github.com/users/davidbau/followers", "following_url": "https://api.github.com/users/davidbau/following{/other_user}", "gists_url": "https://api.github.com/users/davidbau/gists{/gist_id}", "starred_url": "https://api.github.com/users/davidbau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidbau/subscriptions", "organizations_url": "https://api.github.com/users/davidbau/orgs", "repos_url": "https://api.github.com/users/davidbau/repos", "events_url": "https://api.github.com/users/davidbau/events{/privacy}", "received_events_url": "https://api.github.com/users/davidbau/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I encountered the same bug using different datasets.\r\nany suggestions?", "+1, can reproduce here!" ]
1,633,895,721,000
1,636,408,118,000
null
NONE
null
## Describe the bug When loading `the_pile_openwebtext2`, we get the error `pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type` ## Steps to reproduce the bug ```python import datasets ds = datasets.load_dataset('the_pile_openwebtext2') ``` ## Expected results Should download the dataset, convert it to an arrow file, and return a working Dataset object. ## Actual results The download works, but conversion to the arrow file fails as follows: ``` >>> ds = datasets.load_dataset('the_pile_openwebtext2') Downloading and preparing dataset openwebtext2/plain_text (download: 27.33 GiB, generated: 63.86 GiB, post-processed: Unknown size, total: 91.19 GiB) to /home/davidbau/.cache/huggingface/datasets/openwebtext2/plain_text/1.0.0/c48ec73ba3483bac673463f48f67e9a4fd8cb49a9d6ec4fb957f0b424b97cf25... Traceback (most recent call last): File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/builder.py", line 1133, in _prepare_split writer.write(example, key) File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 366, in write self.write_examples_on_file() File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 311, in write_examples_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/home/davidbau/.conda/envs/tenv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 115, in __arrow_array__ out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type) File "pyarrow/array.pxi", line 305, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Value 2111 too large to fit in C integer type ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Ubuntu 20.04 - Python version: python 3.9 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3053/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3053/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3052/comments
https://api.github.com/repos/huggingface/datasets/issues/3052/events
https://github.com/huggingface/datasets/issues/3052
1,021,944,435
I_kwDODunzps486aJz
3,052
load_dataset cannot download the data and hangs forever if cache_dir is specified
{ "login": "BenoitDalFerro", "id": 69694610, "node_id": "MDQ6VXNlcjY5Njk0NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/69694610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BenoitDalFerro", "html_url": "https://github.com/BenoitDalFerro", "followers_url": "https://api.github.com/users/BenoitDalFerro/followers", "following_url": "https://api.github.com/users/BenoitDalFerro/following{/other_user}", "gists_url": "https://api.github.com/users/BenoitDalFerro/gists{/gist_id}", "starred_url": "https://api.github.com/users/BenoitDalFerro/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BenoitDalFerro/subscriptions", "organizations_url": "https://api.github.com/users/BenoitDalFerro/orgs", "repos_url": "https://api.github.com/users/BenoitDalFerro/repos", "events_url": "https://api.github.com/users/BenoitDalFerro/events{/privacy}", "received_events_url": "https://api.github.com/users/BenoitDalFerro/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Issue was environment inconsistency, updating packages did the trick\r\n\r\n`conda install -c huggingface -c conda-forge datasets`\r\n\r\n> Collecting package metadata (current_repodata.json): done\r\n> Solving environment: |\r\n> The environment is inconsistent, please check the package plan carefully\r\n> The following packages are causing the inconsistency:\r\n> \r\n> - conda-forge/noarch::datasets==1.12.1=pyhd8ed1ab_1\r\n> - conda-forge/win-64::multiprocess==0.70.12.2=py38h294d835_0\r\n> done\r\n> \r\n> Package Plan\r\n> \r\n> environment location: C:\\xxx\\anaconda3\\envs\\UnBias-94-1\r\n> \r\n> added / updated specs:\r\n> - datasets\r\n> \r\n> \r\n> The following NEW packages will be INSTALLED:\r\n> \r\n> dill conda-forge/noarch::dill-0.3.4-pyhd8ed1ab_0\r\n> \r\n> The following packages will be UPDATED:\r\n> \r\n> ca-certificates pkgs/main::ca-certificates-2021.9.30-~ --> conda-forge::ca-certificates-2021.10.8-h5b45459_0\r\n> certifi pkgs/main::certifi-2021.5.30-py38haa9~ --> conda-forge::certifi-2021.10.8-py38haa244fe_0\r\n> \r\n> The following packages will be SUPERSEDED by a higher-priority channel:\r\n> " ]
1,633,861,896,000
1,633,949,829,000
1,633,949,796,000
NONE
null
## Describe the bug After updating `datasets`, code that had run just fine for ages began to fail. Specifying the optional _cache_dir_ argument of _datasets.load_dataset_ on a Windows 10 machine causes the data download to hang forever. The same call without _cache_dir_ works just fine. Surprisingly, the exact same code runs perfectly fine on a Linux Docker instance running in the cloud. Unfortunately I updated Windows at the same time, and I can't remember which version of `datasets` was running in my conda environment prior to the update; otherwise I would have tried both to check this out. :( ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset cache_dir = 'c:/data/datasets' dataset = load_dataset('wikipedia', '20200501.en', split='train', cache_dir=cache_dir) ``` Note that the exact same code without the _cache_dir_ argument works perfectly fine. ```python cache_dir = 'c:/data/datasets' dataset = load_dataset('wikipedia', '20200501.en', split='train') ``` ## Expected results The dataset is downloaded and the cache is handled in the _cache_dir_ directory. ## Actual results The data download hangs forever, **NO TRACEBACK**! ## Environment info - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3052/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3052/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3051
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3051/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3051/comments
https://api.github.com/repos/huggingface/datasets/issues/3051/events
https://github.com/huggingface/datasets/issues/3051
1,021,852,234
I_kwDODunzps486DpK
3,051
Non-Matching Checksum Error with crd3 dataset
{ "login": "RylanSchaeffer", "id": 8942987, "node_id": "MDQ6VXNlcjg5NDI5ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8942987?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RylanSchaeffer", "html_url": "https://github.com/RylanSchaeffer", "followers_url": "https://api.github.com/users/RylanSchaeffer/followers", "following_url": "https://api.github.com/users/RylanSchaeffer/following{/other_user}", "gists_url": "https://api.github.com/users/RylanSchaeffer/gists{/gist_id}", "starred_url": "https://api.github.com/users/RylanSchaeffer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RylanSchaeffer/subscriptions", "organizations_url": "https://api.github.com/users/RylanSchaeffer/orgs", "repos_url": "https://api.github.com/users/RylanSchaeffer/repos", "events_url": "https://api.github.com/users/RylanSchaeffer/events{/privacy}", "received_events_url": "https://api.github.com/users/RylanSchaeffer/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I got the same error for another dataset (`multi_woz_v22`):\r\n\r\n```\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json']\r\n```", "I'm seeing the same issue as @RylanSchaeffer:\r\nPython 3.7.11, macOs 11.4\r\ndatasets==1.14.0\r\n\r\nfails on:\r\n```python\r\ndataset = datasets.load_dataset(\"multi_woz_v22\")\r\n```" ]
1,633,829,563,000
1,635,654,752,000
null
NONE
null
## Describe the bug When I try loading the crd3 dataset (https://huggingface.co/datasets/crd3), an error is thrown. ## Steps to reproduce the bug ```python dataset = load_dataset('crd3', split='train') ``` ## Expected results I expect no error to be thrown. ## Actual results A non-matching checksum error is thrown. ``` datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/RevanthRameshkumar/CRD3/archive/master.zip'] ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial - Python version: 3.6.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3051/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3051/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3049
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3049/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3049/comments
https://api.github.com/repos/huggingface/datasets/issues/3049/events
https://github.com/huggingface/datasets/issues/3049
1,021,770,008
I_kwDODunzps485vkY
3,049
TimeoutError during streaming
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,633,802,811,000
1,633,944,938,000
1,633,944,938,000
CONTRIBUTOR
null
## Describe the bug I got a TimeoutError after streaming for about 10h. ## Steps to reproduce the bug The code is very long, but we could test by streaming data indefinitely, though the error may take a while to appear. ## Expected results This error was not expected by the code, which considers only `ClientError` but not `TimeoutError`. See [this line](https://github.com/huggingface/datasets/blob/2814fbd0e18150be409f10804670e98d9ecb87d4/src/datasets/utils/streaming_download_manager.py#L129). Based on the traceback, it looks like the `TimeoutError` was not captured. ## Actual results ``` File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 25, in _runner result[0] = await coro File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 614, in async_fetch_range out = await r.read() File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/client_reqrep.py", line 1032, in read self._body = await self.content.read() File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 370, in read block = await self.readany() File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 392, in readany await self._wait("readany") File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/streams.py", line 306, in _wait await waiter File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/aiohttp/helpers.py", line 656, in __exit__ raise asyncio.TimeoutError from None asyncio.exceptions.TimeoutError The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 1027, in <module> main() File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 991, in main for batch in tqdm( File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/tqdm/std.py", line 1180, in __iter__ for obj in iterable: File "/home/koush/dalle-mini/dev/seq2seq/run_seq2seq_flax.py", line 376, in data_loader_streaming for item in dataset: File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 341, in __iter__ for key, example in self._iter(): File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 338, in _iter yield from ex_iterable File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in __iter__ key_examples_list = [(key, example)] + [ File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 179, in <listcomp> key_examples_list = [(key, example)] + [ File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 176, in __iter__ for key, example in iterator: File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 225, in __iter__ for x in self.ex_iterable: File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 99, in __iter__ for key, example in self.generate_examples_fn(**kwargs_with_shuffled_shards): File "/home/koush/datasets/src/datasets/iterable_dataset.py", line 287, in wrapper for key, table in generate_tables_fn(**kwargs): File "/home/koush/datasets/src/datasets/packaged_modules/json/json.py", line 107, in _generate_tables batch = f.read(self.config.chunksize) File "/home/koush/datasets/src/datasets/utils/streaming_download_manager.py", line 126, in read_with_retries out = read(*args, **kwargs) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/implementations/http.py", line 572, in read return super().read(length) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/spec.py", line 1533, in read out = self.cache._fetch(self.loc, self.loc + length) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/caching.py", line 390, in _fetch self.cache = self.fetcher(start, bend) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 91, in wrapper return sync(self.loop, func, *args, **kwargs) File "/home/koush/.pyenv/versions/dev/lib/python3.9/site-packages/fsspec/asyn.py", line 69, in sync raise FSTimeoutError from return_result fsspec.exceptions.FSTimeoutError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3049/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3049/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3048/comments
https://api.github.com/repos/huggingface/datasets/issues/3048/events
https://github.com/huggingface/datasets/issues/3048
1,021,765,661
I_kwDODunzps485ugd
3,048
Identify which shard data belongs to
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Independently of this I think it raises the need to allow multiprocessing during streaming so that we get samples from multiple shards in one batch." ]
1,633,801,595,000
1,633,811,057,000
null
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** I'm training on a large dataset made of multiple sub-datasets. During training I can observe some jumps in the loss which may correspond to different shards. ![image](https://user-images.githubusercontent.com/715491/136668758-521263aa-a9b2-4ad2-8d22-060b6bf86a1c.png) My suspicion is that either: * some of the sub-datasets are harder for the model than others * some of the sub-datasets are not formatted properly I'd like to identify which shards correspond to those jumps. **Describe the solution you'd like** It would be nice to have a key associated with each data sample or data batch containing details on where the data comes from (shard idx + item idx within the shard). This should be supported both in local and streaming mode. **Describe alternatives you've considered** A fix would be for me to add the details (shard id, sample id) myself as part of each data sample. The inconvenience is that it requires users to process/reupload every dataset whenever they need this feature.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3048/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3048/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3047
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3047/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3047/comments
https://api.github.com/repos/huggingface/datasets/issues/3047/events
https://github.com/huggingface/datasets/issues/3047
1,021,360,616
I_kwDODunzps484Lno
3,047
Loading from cache a dataset for LM built from a text classification dataset sometimes errors
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "organizations_url": "https://api.github.com/users/sgugger/orgs", "repos_url": "https://api.github.com/users/sgugger/repos", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "received_events_url": "https://api.github.com/users/sgugger/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "This has been fixed in 1.15, let me know if you still have this issue" ]
1,633,717,391,000
1,635,959,588,000
1,635,959,588,000
MEMBER
null
## Describe the bug Yes, I know, that description sucks. So the problem arises in the course when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try to, since it's a bit fickle), create a dataset for masked-language modeling from the IMDB dataset. ```python from datasets import load_dataset from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased") imdb_dataset = load_dataset("imdb", split="train") def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_dataset = imdb_dataset.map( tokenize_function, batched=True, remove_columns=["text", "label"] ) chunk_size = 128 def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} # Compute length of concatenated texts total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the last chunk if it's smaller than chunk_size total_length = (total_length // chunk_size) * chunk_size # Split by chunks of max_len. result = { k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)] for k, t in concatenated_examples.items() } # Create a new labels column result["labels"] = result["input_ids"].copy() return result lm_dataset = tokenized_dataset.map(group_texts, batched=True) ``` Until now, all is well. The problem comes when you re-execute that code, more specifically: ```python tokenized_dataset = imdb_dataset.map( tokenize_function, batched=True, remove_columns=["text", "label"] ) lm_dataset = tokenized_dataset.map(group_texts, batched=True) ``` Try several times if the bug doesn't appear instantly, or run each line one at a time, ideally in a notebook/Colab, and at some point you should get: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-40-357a56ee3d53> in <module> ----> 1 lm_dataset = tokenized_dataset.map(group_texts, batched=True) ~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1947 new_fingerprint=new_fingerprint, 1948 disable_tqdm=disable_tqdm, -> 1949 desc=desc, 1950 ) 1951 else: ~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 424 } 425 # apply actual function --> 426 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 427 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 428 # re-apply format to the output ~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs) 404 # Call actual function 405 --> 406 out = func(self, *args, **kwargs) 407 408 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2138 if os.path.exists(cache_file_name) and load_from_cache_file: 2139 logger.warning("Loading cached processed dataset at %s", cache_file_name) -> 2140 info = self.info.copy() 2141 info.features = features 2142 return Dataset.from_file(cache_file_name, info=info, split=self.split) ~/git/datasets/src/datasets/info.py in copy(self) 278 279 def copy(self) -> "DatasetInfo": --> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) 281 282 ~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes) ~/git/datasets/src/datasets/info.py in __post_init__(self) 177 for idx, template in enumerate(self.task_templates): 178 if isinstance(template, TextClassification): --> 179 labels = self.features[template.label_column].names 180 self.task_templates[idx] = TextClassification( 181 text_column=template.text_column, label_column=template.label_column, labels=labels KeyError: 'label' ``` It seems that when loading the cache, the dataset tries to access some kind of text classification template (which I imagine comes from the original dataset) and to look at a key that has since been removed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3047/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3047/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3044
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3044/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3044/comments
https://api.github.com/repos/huggingface/datasets/issues/3044/events
https://github.com/huggingface/datasets/issues/3044
1,020,869,778
I_kwDODunzps482TyS
3,044
Inconsistent caching behaviour when using `Dataset.map()` with a `new_fingerprint` and `num_proc>1`
{ "login": "vlievin", "id": 9859840, "node_id": "MDQ6VXNlcjk4NTk4NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/9859840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vlievin", "html_url": "https://github.com/vlievin", "followers_url": "https://api.github.com/users/vlievin/followers", "following_url": "https://api.github.com/users/vlievin/following{/other_user}", "gists_url": "https://api.github.com/users/vlievin/gists{/gist_id}", "starred_url": "https://api.github.com/users/vlievin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vlievin/subscriptions", "organizations_url": "https://api.github.com/users/vlievin/orgs", "repos_url": "https://api.github.com/users/vlievin/repos", "events_url": "https://api.github.com/users/vlievin/events{/privacy}", "received_events_url": "https://api.github.com/users/vlievin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Following the discussion in #3045 if would be nice to have a way to let users have a nice experience with caching even if the function is not hashable.\r\n\r\nCurrently a workaround is to make the function picklable. This can be done by implementing a callable class instead, that can be pickled using by implementing a custom `__getstate__` method for example.\r\n\r\nHowever it sounds pretty complicated for a simple thing. Maybe one idea would be to have something similar to streamlit: they allow users to register the hashing of their own objects.\r\n\r\nSee the documentation about their `hash_funcs` here: https://docs.streamlit.io/library/advanced-features/caching#the-hash_funcs-parameter\r\n\r\nHere is the example they give:\r\n\r\n```python\r\nclass FileReference:\r\n def __init__(self, filename):\r\n self.filename = filename\r\n\r\ndef hash_file_reference(file_reference):\r\n filename = file_reference.filename\r\n return (filename, os.path.getmtime(filename))\r\n\r\n@st.cache(hash_funcs={FileReference: hash_file_reference})\r\ndef func(file_reference):\r\n ...\r\n```", "My solution was to generate a custom hash, and use the hash as a `new_fingerprint` argument to the `map()` method to enable caching. This works, but is quite hacky.\r\n\r\n@lhoestq, this approach is very neat, this would make the whole caching mechanic more explicit. I don't have so much time to look into this right now, but I might give it a try in the future. " ]
1,633,684,030,000
1,635,324,058,000
null
NONE
null
## Describe the bug Caching does not work when using `Dataset.map()` with: 1. a function that cannot be deterministically fingerprinted 2. `num_proc>1` 3. using a custom fingerprint set with the argument `new_fingerprint`. This means that the dataset will be mapped with the function for each and every call, which does not happen if `num_proc==1`. In that case (`num_proc==1`) subsequent calls will load the transformed dataset from the cache, which is the expected behaviour. The example can easily be translated into a unit test. I have a fix and will submit a pull request asap. ## Steps to reproduce the bug ```python import hashlib import json import os from typing import Dict, Any import numpy as np from datasets import load_dataset, Dataset Batch = Dict[str, Any] filename = 'example.json' class Transformation(): """A transformation with a random state that cannot be fingerprinted""" def __init__(self): self.state = np.random.random() def __call__(self, batch: Batch) -> Batch: batch['x'] = [np.random.random() for _ in batch['x']] return batch def generate_dataset(): """generate a simple dataset""" rgn = np.random.RandomState(24) data = { 'data': [{'x': float(y), 'y': -float(y)} for y in rgn.random(size=(1000,))]} if not os.path.exists(filename): with open(filename, 'w') as f: f.write(json.dumps(data)) return filename def process_dataset_with_cache(num_proc=1, remove_cache=False, cache_expected_to_exist=False): # load the generated dataset dset: Dataset = next( iter(load_dataset('json', data_files=filename, field='data').values())) new_fingerprint = hashlib.md5("static-id".encode("utf8")).hexdigest() # get the expected cached path cache_path = dset._get_cache_file_path(new_fingerprint) if remove_cache and os.path.exists(cache_path): os.remove(cache_path) # check that the cache exists, and print a statement # if was actually expected to exist cache_exist = os.path.exists(cache_path) print(f"> cache file exists={cache_exist}") if cache_expected_to_exist and not cache_exist: print("=== Cache does not exist! ====") # apply the transformation with the new fingerprint dset = dset.map( Transformation(), batched=True, num_proc=num_proc, new_fingerprint=new_fingerprint, desc="mapping dataset with transformation") generate_dataset() for num_proc in [1, 2]: print(f"# num_proc={num_proc}, first pass") # first pass to generate the cache (always create a new cache here) process_dataset_with_cache(remove_cache=True, num_proc=num_proc, cache_expected_to_exist=False) print(f"# num_proc={num_proc}, second pass") # second pass, expects the cache to exist process_dataset_with_cache(remove_cache=False, num_proc=num_proc, cache_expected_to_exist=True) os.remove(filename) ``` ## Expected results In the above python example, with `num_proc=2`, the **cache file should exist in the second call** of `process_dataset_with_cache` ("=== Cache does not exist! ====" should not be printed). When the cache is successfully created, `map()` is called only one time. ## Actual results In the above python example, with `num_proc=2`, the **cache does not exist in the second call** of `process_dataset_with_cache` (this results in printing "=== Cache does not exist! ===="). Because the cache doesn't exist, the `map()` method is executed a second time and the dataset is not loaded from the cache. ## Environment info - `datasets` version: 1.12.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3044/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3044/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3043
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3043/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3043/comments
https://api.github.com/repos/huggingface/datasets/issues/3043/events
https://github.com/huggingface/datasets/issues/3043
1,020,252,114
I_kwDODunzps48z8_S
3,043
Add PASS dataset
{ "login": "osanseviero", "id": 7246357, "node_id": "MDQ6VXNlcjcyNDYzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/osanseviero", "html_url": "https://github.com/osanseviero", "followers_url": "https://api.github.com/users/osanseviero/followers", "following_url": "https://api.github.com/users/osanseviero/following{/other_user}", "gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}", "starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions", "organizations_url": "https://api.github.com/users/osanseviero/orgs", "repos_url": "https://api.github.com/users/osanseviero/repos", "events_url": "https://api.github.com/users/osanseviero/events{/privacy}", "received_events_url": "https://api.github.com/users/osanseviero/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, "node_id": "LA_kwDODunzps7XHBIh", "url": "https://api.github.com/repos/huggingface/datasets/labels/vision", "name": "vision", "color": "bfdadc", "default": false, "description": "Vision datasets" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[]
1,633,625,023,000
1,639,394,559,000
null
NONE
null
## Adding a Dataset - **Name:** PASS - **Description:** An ImageNet replacement for self-supervised pretraining without humans - **Data:** https://www.robots.ox.ac.uk/~vgg/research/pass/ https://github.com/yukimasano/PASS Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3043/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3043/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3040/comments
https://api.github.com/repos/huggingface/datasets/issues/3040/events
https://github.com/huggingface/datasets/issues/3040
1,018,782,475
I_kwDODunzps48uWML
3,040
[save_to_disk] Using `select()` followed by `save_to_disk` saves complete dataset making it hard to create dummy dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi,\r\n\r\nthe `save_to_disk` docstring explains that `flatten_indices` has to be called on a dataset before saving it to save only the shard/slice of the dataset.", "That works! Thansk!\r\n\r\nMight be worth doing that automatically actually in case the `save_to_disk` is called on a dataset that has an indices mapping :-)", "I agree with @patrickvonplaten: this issue is reported recurrently, so better if we implement the `.flatten_indices()` automatically?", "That would be great indeed - I don't really see a use case where one would not like to call `.flatten_indices()` before calling `save_to_disk`", "+1 on this !" ]
1,633,540,127,000
1,635,867,668,000
1,635,867,668,000
MEMBER
null
## Describe the bug When keeping only a dummy-sized slice of a dataset (say the first 100 samples) and then saving it to disk to later upload it to the hub for easy demo/use, not just the small slice is saved but the whole dataset along with an indices file. The problem with this is that the dataset is still very big. ## Steps to reproduce the bug E.g. run the following: ```python from datasets import load_dataset nlp = load_dataset("glue", "mnli", split="train") nlp.save_to_disk("full") nlp = nlp.select(range(100)) nlp.save_to_disk("dummy") ``` Now one can see that both `"dummy"` and `"full"` have the same size. This shouldn't be the case IMO. ## Expected results IMO `"dummy"` should be much smaller so that one can easily play around with the dataset on the hub. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3040/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3040/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3036/comments
https://api.github.com/repos/huggingface/datasets/issues/3036/events
https://github.com/huggingface/datasets/issues/3036
1,017,687,944
I_kwDODunzps48qK-I
3,036
Protect master branch to force contributions via Pull Requests
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "It would be nice to protect the master from direct commits, but still having a way to merge our own PRs when no review is required (for example when updating a dataset_infos.json file, or minor bug fixes - things that happen quite often actually).\r\nDo you know if there's a way ?", "you can if you're an admin of the repo", "This is done. Now the master branch is protected:\r\n- [x] Require a pull request before merging: all commits must be made to a non-protected branch and submitted via a pull request\r\n - Required number of approvals before merging: 1 \r\n- [x] Require linear history: prevent merge commits from being pushed\r\n- [x] These requirements are not enforced for administrators\r\n- [x] Additionally, the master branch is also protected against deletion and force pushes\r\n\r\nCC: @lhoestq @julien-c @thomwolf " ]
1,633,505,657,000
1,633,589,507,000
1,633,589,392,000
MEMBER
null
In order to have a clearer Git history in the master branch, I propose to protect it so that all contributions must be done through a Pull Request and no direct commits to master are allowed. - The Pull Request allows contributors to give context, discuss any potential issues and improve the quality of the contribution - The Pull Request will eventually be squashed and merged into master with a single commit that links to the Pull Request page (with all the context/discussions) Note that we already implemented a protection in the master branch to avoid *merge* commits and ensure a linear history. This proposal goes one step further by avoiding all kinds of direct commits and forcing contributions **only** through Pull Requests. Please note that we can temporarily deactivate this protection if we need to make a direct commit, e.g. at each new version release. The only way GitHub allows this kind of protection is by requiring a minimum number (at least one) of approvals of the Pull Request. The inconvenience is that the PR creator cannot approve their own PR: another person must approve it before it can be merged into master. To circumvent this, we could eventually disable this protection in the master branch when an urgent commit is needed (e.g. for a hotfix) and there is no other person available at that time to approve the PR.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3036/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3036/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3035/comments
https://api.github.com/repos/huggingface/datasets/issues/3035/events
https://github.com/huggingface/datasets/issues/3035
1,016,770,071
I_kwDODunzps48mq4X
3,035
`load_dataset` does not work with uploaded arrow file
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! This is not a bug, this is simply not implemented.\r\n`save_to_disk` is for on-disk serialization and was not made compatible for the Hub.\r\nThat being said, I agree we actually should make it work with the Hub x)", "cc @LysandreJik maybe we can solve this at the same time as adding `push_to_hub`" ]
1,633,464,910,000
1,633,539,697,000
null
MEMBER
null
## Describe the bug I've preprocessed and uploaded a dataset here: https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed . The dataset is in `.arrow` format. The dataset can correctly be loaded when doing: ```bash git lfs install git clone https://huggingface.co/datasets/ami-wav2vec2/ami_headset_single_preprocessed ``` followed by ```python from datasets import load_from_disk ds = load_from_disk("./ami_headset_single_preprocessed") ``` However, when I try to directly download the dataset as follows: ```python from datasets import load_dataset ds = load_dataset("ami-wav2vec2/ami_headset_single_preprocessed") ``` the following error occurs: ```bash /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 1115 ignore_verifications=ignore_verifications, 1116 try_from_hf_gcs=try_from_hf_gcs, -> 1117 use_auth_token=use_auth_token, 1118 ) 1119 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 635 if not downloaded_from_gcs: 636 self._download_and_prepare( --> 637 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 638 ) 639 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 724 try: 725 # Prepare split will record examples associated to the split --> 726 self._prepare_split(split_generator, **prepare_split_kwargs) 727 except OSError as e: 728 raise OSError( /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator) 1186 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET) 1187 ): -> 1188 writer.write_table(table) 1189 num_examples, num_bytes = writer.finalize() 1190 /usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_table(self, pa_table, writer_batch_size) 424 # reorder the arrays if necessary + cast to self._schema 425 # we can't simply use .cast here because we may need to change the order of the columns --> 426 pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) 427 batches: List[pa.RecordBatch] = pa_table.to_batches(max_chunksize=writer_batch_size) 428 self._num_bytes += sum(batch.nbytes for batch in batches) /usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays() /usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib._sanitize_arrays() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray() /usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.ChunkedArray.cast() /usr/local/lib/python3.7/dist-packages/pyarrow/compute.py in cast(arr, target_type, safe) 279 else: 280 options = CastOptions.unsafe(target_type) --> 281 return call_function("cast", [arr], options) 282 283 /usr/local/lib/python3.7/dist-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function() /usr/local/lib/python3.7/dist-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: Unsupported cast from struct<train: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>, validation: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>, test: struct<name: string, num_bytes: int64, num_examples: int64, dataset_name: string>> to list using function cast_list ``` ## Expected results The dataset should be correctly loaded with `load_dataset` IMO. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3035/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3035/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3034
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3034/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3034/comments
https://api.github.com/repos/huggingface/datasets/issues/3034/events
https://github.com/huggingface/datasets/issues/3034
1,016,759,202
I_kwDODunzps48moOi
3,034
Errors loading dataset using fs = a gcsfs.GCSFileSystem
{ "login": "dconatha", "id": 74556552, "node_id": "MDQ6VXNlcjc0NTU2NTUy", "avatar_url": "https://avatars.githubusercontent.com/u/74556552?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dconatha", "html_url": "https://github.com/dconatha", "followers_url": "https://api.github.com/users/dconatha/followers", "following_url": "https://api.github.com/users/dconatha/following{/other_user}", "gists_url": "https://api.github.com/users/dconatha/gists{/gist_id}", "starred_url": "https://api.github.com/users/dconatha/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dconatha/subscriptions", "organizations_url": "https://api.github.com/users/dconatha/orgs", "repos_url": "https://api.github.com/users/dconatha/repos", "events_url": "https://api.github.com/users/dconatha/events{/privacy}", "received_events_url": "https://api.github.com/users/dconatha/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,633,464,428,000
1,633,465,599,000
null
NONE
null
## Describe the bug Cannot load dataset using a `gcsfs.GCSFileSystem`. I'm not sure if this should be a bug in `gcsfs` or here... Basically what seems to be happening is that since datasets saves datasets as folders and folders aren't "real objects" in gcs, gcsfs raises a 404 error. There are workarounds if you use gcsfs directly to download the file, but as is, I can't get `load_from_disk` to work. ## Steps to reproduce the bug ```python from datasets import load_dataset # load some dataset dataset = load_dataset("squad", split="train") # save it to gcs import gcsfs fs = gcsfs.GCSFileSystem(project="my-gs-project") dataset.save_to_disk("gs://my-bucket/squad", fs=fs) # try to load it from gcs from datasets import load_from_disk dataset2 = load_from_disk("my-bucket/squad", fs=fs) ``` ## Expected results `dataset2` would be a copy of `dataset` but loaded from my bucket. ## Actual results Long traceback, but essentially it's a 404 error from gcsfs saying the object `my-bucket/squad` doesn't exist when this is called: https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L977 This is because there is no actual object called `my-bucket/squad`; there are objects called `my-bucket/squad/dataset.arrow`, etc. Note that *this* works fine, since it's explicitly saying "download all the objects with this prefix": ```python fs.download(src_dataset_path + "/*", dataset_path.as_posix(), recursive=True) ``` For example, I can do a workaround this way: ```python import tempfile with tempfile.TemporaryDirectory() as temppath: fs.download("gs://my-bucket/squad/*", temppath) dataset2 = load_from_disk(temppath) ``` It's unclear to me if it's `gcsfs`'s responsibility to say "hey, that's a folder, not a file, I should try to get objects inside of it, not the object itself", or if that's `datasets`'s responsibility... I'm leaning towards the latter, since you're never loading a dataset from one file using this function/method, only a dataset folder? Another minor thing that maybe should be rolled into this bug... https://github.com/huggingface/datasets/blob/9c81b7d2e6d9feae69a084a3abda265a4ca07fb5/src/datasets/arrow_dataset.py#L968 These fail if you pass in a `gs://` path, e.g. ```python dataset2 = load_from_disk("gs://my-bucket/squad", fs=fs) ``` Because at this point, `dataset_info_path` is `gs:/my-bucket/squad/dataset_info.json`, and gcsfs throws a: ``` Invalid bucket name: 'gs:' ``` error ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: macOS Big Sur 11.6 - Python version: 3.7.12 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3034/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3034/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3032/comments
https://api.github.com/repos/huggingface/datasets/issues/3032/events
https://github.com/huggingface/datasets/issues/3032
1,016,488,475
I_kwDODunzps48lmIb
3,032
Error when loading private dataset with "data_files" arg
{ "login": "borisdayma", "id": 715491, "node_id": "MDQ6VXNlcjcxNTQ5MQ==", "avatar_url": "https://avatars.githubusercontent.com/u/715491?v=4", "gravatar_id": "", "url": "https://api.github.com/users/borisdayma", "html_url": "https://github.com/borisdayma", "followers_url": "https://api.github.com/users/borisdayma/followers", "following_url": "https://api.github.com/users/borisdayma/following{/other_user}", "gists_url": "https://api.github.com/users/borisdayma/gists{/gist_id}", "starred_url": "https://api.github.com/users/borisdayma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/borisdayma/subscriptions", "organizations_url": "https://api.github.com/users/borisdayma/orgs", "repos_url": "https://api.github.com/users/borisdayma/repos", "events_url": "https://api.github.com/users/borisdayma/events{/privacy}", "received_events_url": "https://api.github.com/users/borisdayma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "We'll do a release tomorrow or on Wednesday to make the fix available :)\r\n\r\nThanks for reporting !" ]
1,633,448,787,000
1,634,052,382,000
1,634,052,346,000
CONTRIBUTOR
null
## Describe the bug A clear and concise description of what the bug is. Private datasets with no loading script can't be loaded using `data_files` parameter. ## Steps to reproduce the bug ```python from datasets import load_dataset data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"} dataset = load_dataset('dalle-mini/encoded', data_files=data_files, use_auth_token=True, streaming=True) ``` Same error happens in non-streaming mode. ## Expected results Files should be loaded (whether in streaming or not). ## Actual results Error: ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs) 539 try: --> 540 local_path = cached_path(file_path, download_config=download_config) 541 except FileNotFoundError: 8 frames FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/dalle-mini/encoded/resolve/main/encoded.py During handling of the above exception, another exception occurred: HTTPError Traceback (most recent call last) HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/dalle-mini/encoded?full=true During handling of the above exception, another exception occurred: FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs) 547 except Exception: 548 raise FileNotFoundError( --> 549 f"Couldn't find a directory or a {resource_type} named '{path}'. " 550 f"It doesn't exist locally at {expected_dir_for_combined_path_abs} or remotely on {hf_api.endpoint}/datasets" 551 ) FileNotFoundError: Couldn't find a directory or a dataset named 'dalle-mini/encoded'. It doesn't exist locally at /content/dalle-mini/encoded or remotely on https://huggingface.co/datasets ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0 @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3032/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3032/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3027
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3027/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3027/comments
https://api.github.com/repos/huggingface/datasets/issues/3027/events
https://github.com/huggingface/datasets/issues/3027
1,016,150,117
I_kwDODunzps48kThl
3,027
Resolve data_files by split name
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Awesome @lhoestq I like the proposal and it works great on my JSON community dataset. Here is the [log](https://gist.github.com/vblagoje/714babc325bcbdd5de579fd8e1648892). ", "From my discussion with @borisdayma it would be more general if the files match when their paths contain the split name - not only when the filename contains the split name. For example, for a dataset like this:\r\n```\r\ntrain/\r\n└── data.csv\r\ntest/\r\n└── data.csv\r\n```\r\n\r\nBut IMO the default should be \r\n```\r\ndata/\r\n├── train.csv\r\n└── test.csv\r\n```\r\nbecause it allows people to have other directories if they have different subsets of their data (different configurations, not splits)", "I just created a PR for this at https://github.com/huggingface/datasets/pull/3221, let me know what you think :)" ]
1,633,429,476,000
1,636,134,598,000
1,636,134,597,000
MEMBER
null
This issue is about discussing the default behavior when someone loads a dataset that consists of data files. For example: ```python load_dataset("lhoestq/demo1") ``` should return two splits "train" and "test" since the dataset repository is like ``` data/ ├── train.csv └── test.csv ``` Currently it returns only one split "train" which contains the data of both files. I started playing with this idea on this branch btw: `resolve-data_files-by-split-name` Basically the idea is that if you named your data files after split names then the default pattern is ```python { "train": ["*train*"], "test": ["*test*"], "validation": ["*dev*", "valid"], } ``` otherwise it's ```python { "train": ["*"] } ``` Let me know what you think ! cc @albertvillanova @LysandreJik @vblagoje
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3027/reactions", "total_count": 3, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3027/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3024
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3024/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3024/comments
https://api.github.com/repos/huggingface/datasets/issues/3024/events
https://github.com/huggingface/datasets/issues/3024
1,016,052,911
I_kwDODunzps48j7yv
3,024
Windows test suite fails
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[]
1,633,423,606,000
1,633,427,907,000
1,633,427,907,000
MEMBER
null
## Describe the bug There is an error during the installation of test dependencies for Windows: https://app.circleci.com/pipelines/github/huggingface/datasets/7981/workflows/9b6a0114-2b8e-4069-94e5-e844dbbdba4e/jobs/49206 ``` ERROR: Cannot uninstall 'ruamel-yaml'. It is a distutils installed project and thus we cannot accurately determine which files belong to it which would lead to only a partial uninstall. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3024/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3024/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3018
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3018/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3018/comments
https://api.github.com/repos/huggingface/datasets/issues/3018/events
https://github.com/huggingface/datasets/issues/3018
1,015,311,877
I_kwDODunzps48hG4F
3,018
Support multiple zipped CSV data files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "@lhoestq I would like to draw your attention to the proposed API by @lewtun, using `data_dir` to pass the ZIP URL.\r\n\r\nI'm not totally convinced with this... What do you think?\r\n\r\nMaybe we could discuss other approaches...\r\n\r\nOne brainstorming idea: what about using URL chaining with the hop operator in `data_files`?", "`data_dir` is currently exclusively used for manually downloaded data.\r\n\r\nMaybe we can have an API that only uses data_files as you are suggesting, using URL chaining ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nurl = \"https://domain.org/filename.zip\"\r\ndata_files = {\"train\": \"zip://train_filename.csv::\" + url, \"test\": \"zip://test_filename.csv::\" + url}\r\ndataset = load_dataset(\"csv\", data_files=data_files)\r\n```\r\n\r\nURL chaining is used by `fsspec` to get access to files in nested filesystems of any kind. Since `fsspec` is being used by `pandas`, `dask` and also extensively by `datasets` I think it would be nice to use it here too", "URL chaining sounds super nice to me! And it's also a nice way to leverage the same concepts we currently have in the docs around `fsspec` :)" ]
1,633,360,619,000
1,633,444,377,000
null
MEMBER
null
As requested by @lewtun, support loading multiple zipped CSV data files. ```python from datasets import load_dataset url = "https://domain.org/filename.zip" data_files = {"train": "train_filename.csv", "test": "test_filename.csv"} dataset = load_dataset("csv", data_dir=url, data_files=data_files) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3018/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3018/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3013
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3013/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3013/comments
https://api.github.com/repos/huggingface/datasets/issues/3013/events
https://github.com/huggingface/datasets/issues/3013
1,014,960,419
I_kwDODunzps48fxEj
3,013
Improve `get_dataset_infos`?
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[ "To keep things simple, maybe we should use `load_dataset_builder` in `get_dataset_infos`.\r\n`load_dataset_builder` instantiates a builder and runs the _infos() method in order to give you the most up-to-date infos, even if the dataset_infos.json is outdated or missing." ]
1,633,340,824,000
1,634,895,369,000
null
CONTRIBUTOR
null
Using the dedicated function `get_dataset_infos` on a dataset that has no dataset-info.json file returns an empty info: ``` >>> from datasets import get_dataset_infos >>> get_dataset_infos('wit') {} ``` While it's totally possible to get it (regenerate it) with: ``` >>> from datasets import load_dataset_builder >>> builder = load_dataset_builder('wit') >>> builder.info DatasetInfo(description='Wikipedia-based Image Text (WIT) Dataset is a large multimodal multilingual dataset. WIT is composed of a curated set\n of 37.6 million entity rich image-text examples with 11.5 million unique images across 108 Wikipedia languages. Its\n size enables WIT to be used as a pretraining dataset for multimodal machine learning models.\n', citation='@article{srinivasan2021wit,\n title={WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning},\n author={Srinivasan, Krishna and Raman, Karthik and Chen, Jiecao and Bendersky, Michael and Najork, Marc},\n journal={arXiv preprint arXiv:2103.01913},\n year={2021}\n}\n', homepage='https://github.com/google-research-datasets/wit', license='', features={'b64_bytes': Value(dtype='string', id=None), 'embedding': Sequence(feature=Value(dtype='float64', id=None), length=-1, id=None), 'image_url': Value(dtype='string', id=None), 'metadata_url': Value(dtype='string', id=None), 'original_height': Value(dtype='int32', id=None), 'original_width': Value(dtype='int32', id=None), 'mime_type': Value(dtype='string', id=None), 'caption_attribution_description': Value(dtype='string', id=None), 'wit_features': Sequence(feature={'language': Value(dtype='string', id=None), 'page_url': Value(dtype='string', id=None), 'attribution_passes_lang_id': Value(dtype='string', id=None), 'caption_alt_text_description': Value(dtype='string', id=None), 'caption_reference_description': Value(dtype='string', id=None), 'caption_title_and_reference_description': Value(dtype='string', id=None), 'context_page_description': Value(dtype='string', id=None), 'context_section_description': Value(dtype='string', id=None), 'hierarchical_section_title': Value(dtype='string', id=None), 'is_main_image': Value(dtype='string', id=None), 'page_changed_recently': Value(dtype='string', id=None), 'page_title': Value(dtype='string', id=None), 'section_title': Value(dtype='string', id=None)}, length=-1, id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='wit', config_name='default', version=0.0.0, splits=None, download_checksums=None, download_size=None, post_processing_size=None, dataset_size=None, size_in_bytes=None) ``` Should we test if info is empty, and in that case regenerate it? Or always generate it?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3013/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3013/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3011
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3011/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3011/comments
https://api.github.com/repos/huggingface/datasets/issues/3011/events
https://github.com/huggingface/datasets/issues/3011
1,014,935,713
I_kwDODunzps48frCh
3,011
load_dataset_builder should error if "name" does not exist?
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[ "Yes I think it should raise an error. Currently it looks like it instantiates a custom configuration with the name given by the user:\r\nhttps://github.com/huggingface/datasets/blob/ba27ce33bf568374cf23a07669fdd875b5718bc2/src/datasets/builder.py#L391-L397" ]
1,633,339,246,000
1,634,895,369,000
null
CONTRIBUTOR
null
``` import datasets as ds builder = ds.load_dataset_builder('sent_comp', name="doesnotexist") builder.info.config_name ``` returns ``` 'doesnotexist' ``` Shouldn't it raise an error instead? For this dataset, the only valid values for `name` should be: `"default"` or `None` (ie. argument not passed)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3011/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3011/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3010/comments
https://api.github.com/repos/huggingface/datasets/issues/3010/events
https://github.com/huggingface/datasets/issues/3010
1,014,918,470
I_kwDODunzps48fm1G
3,010
Chain filtering is leaking
{ "login": "DrMatters", "id": 22641583, "node_id": "MDQ6VXNlcjIyNjQxNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrMatters", "html_url": "https://github.com/DrMatters", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "repos_url": "https://api.github.com/users/DrMatters/repos", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "### Update:\r\nI wrote a bit cleaner code snippet (without transforming to json) that can expose leaking.\r\n```python\r\nimport datasets\r\nimport json\r\n\r\nitems = ['ab', 'c', 'df']\r\n\r\nds = datasets.Dataset.from_dict({'col': items})\r\nprint(list(ds))\r\n# > Prints: [{'col': 'ab'}, {'col': 'c'}, {'col': 'df'}]\r\n\r\nfiltered = ds\r\n\r\n# get all items that are starting with a character with ascii code bigger than 'a'\r\nfiltered = filtered.filter(lambda x: x['col'][0] > 'a', load_from_cache_file=False)\r\nprint(list(filtered))\r\n# > Prints: [{'col': 'c'}, {'col': 'df'}] as expected\r\n\r\n# get all items that are shorter than 2\r\nfiltered = filtered.filter(lambda x: len(x['col']) < 2, load_from_cache_file=False)\r\nprint(list(filtered))\r\n# > Prints: [{'col': 'ab'}] -> this is a leaked item from the first filter\r\n# > Should be: [{'col': 'c'}]\r\n```", "Thanks for reporting. I'm looking into it", "I just pushed a fix ! We'll do a new release soon.\r\nIn the meantime feel free to install `datasets` from source to play with it", "Thanks, I'm already using it from your branch!" ]
1,633,338,295,000
1,633,422,968,000
null
NONE
null
## Describe the bug As there's no support for lists within dataset fields, I convert my lists to json-string format. However, the bug described is occurring even when the data format is 'string'. These samples show that filtering behavior diverges from what's expected when chaining filterings. On sample 2 the second filtering leads to "leaking" of data that should've been filtered on the first filtering into the results. ## Steps to reproduce the bug Sample 1: ```python import datasets import json items = [[1, 2], [3], [4]] jsoned_items = map(json.dumps, [[1, 2], [3], [4]]) ds = datasets.Dataset.from_dict({'a': jsoned_items}) print(list(ds)) # > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}] as expected filtered = ds # get all lists that are shorter than 2 filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False) print(list(filtered)) # > Prints: [{'a': '[3]'}, {'a': '[4]'}] as expected # get all lists, which have a value bigger than 3 on its zero index filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False) print(list(filtered)) # > Should be: [{'a': [4]}] # > Prints: [{'a': [3]}] ``` Sample 2: ```python import datasets import json items = [[1, 2], [3], [4]] jsoned_items = map(json.dumps, [[1, 2], [3], [4]]) ds = datasets.Dataset.from_dict({'a': jsoned_items}) print(list(ds)) # > Prints: [{'a': '[1, 2]'}, {'a': '[3]'}, {'a': '[4]'}] filtered = ds # get all lists, which have a value bigger than 3 on its zero index filtered = filtered.filter(lambda x: json.loads(x['a'])[0] > 3, load_from_cache_file=False) print(list(filtered)) # > Prints: [{'a': '[4]'}] as expected # get all lists that are shorter than 2 filtered = filtered.filter(lambda x: len(json.loads(x['a'])) < 2, load_from_cache_file=False) print(list(filtered)) # > Prints: [{'a': '[1, 2]'}] # > Should be: [{'a': '[4]'}] (remain intact) ``` ## Expected results Expected and actual results are attached to the code snippets. ## Actual results Expected and actual results are attached to the code snippets. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3010/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3010/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3005/comments
https://api.github.com/repos/huggingface/datasets/issues/3005/events
https://github.com/huggingface/datasets/issues/3005
1,014,615,420
I_kwDODunzps48ec18
3,005
DatasetDict.filter and Dataset.filter crashes with any "fn_kwargs" argument
{ "login": "DrMatters", "id": 22641583, "node_id": "MDQ6VXNlcjIyNjQxNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/22641583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/DrMatters", "html_url": "https://github.com/DrMatters", "followers_url": "https://api.github.com/users/DrMatters/followers", "following_url": "https://api.github.com/users/DrMatters/following{/other_user}", "gists_url": "https://api.github.com/users/DrMatters/gists{/gist_id}", "starred_url": "https://api.github.com/users/DrMatters/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/DrMatters/subscriptions", "organizations_url": "https://api.github.com/users/DrMatters/orgs", "repos_url": "https://api.github.com/users/DrMatters/repos", "events_url": "https://api.github.com/users/DrMatters/events{/privacy}", "received_events_url": "https://api.github.com/users/DrMatters/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @DrMatters, thanks for reporting.\r\n\r\nThis issue was fixed 14 days ago: #2950.\r\n\r\nCurrently, the fix is only in the master branch and will be made available in our next library release.\r\n\r\nIn the meantime, you can incorporate the fix by installing datasets from the master branch:\r\n```shell\r\npip install -U git+ssh://git@github.com/huggingface/datasets.git@master#egg=datasets\r\n```\r\nor\r\n```shell\r\npip install -U git+https://github.com/huggingface/datasets.git@master#egg=datasets\r\n```", "Thanks, sorry for bothering" ]
1,633,308,569,000
1,633,947,481,000
1,633,337,173,000
NONE
null
## Describe the bug The ".filter" method of DatasetDict or Dataset objects fails when passing any "fn_kwargs" argument ## Steps to reproduce the bug ```python import datasets example_dataset = datasets.Dataset.from_dict({"a": {1, 2, 3, 4}}) def filter_value(example, value): return example['a'] == value filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3}) ``` ## Expected results `filtered` is a dataset containing {"a": {3}} ## Actual results > Traceback (most recent call last): > File "C:\Users\qsemi\Documents\git\nlp_experiments\gpt_celebrity\src\test_faulty_filter.py", line 8, in <module> > filtered = example_dataset.filter(filter_value, fn_kwargs={'value': 3}) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper > out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper > out = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2169, in filter > indices = self.map( > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1686, in map > return self._map_single( > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 185, in wrapper > out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\fingerprint.py", line 398, in wrapper > out = func(self, *args, **kwargs) > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 2048, in _map_single > batch = apply_function_on_filtered_inputs( > File "C:\Users\qsemi\miniconda3\envs\main\lib\site-packages\datasets\arrow_dataset.py", line 1939, in apply_function_on_filtered_inputs > function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) > TypeError: get_indices_from_mask_function() got an unexpected keyword argument 'value' ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/3005/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/3005/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2998
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2998/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2998/comments
https://api.github.com/repos/huggingface/datasets/issues/2998/events
https://github.com/huggingface/datasets/issues/2998
1,013,372,871
I_kwDODunzps48ZtfH
2,998
cannot shuffle dataset loaded from disk
{ "login": "pya25", "id": 54274249, "node_id": "MDQ6VXNlcjU0Mjc0MjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/54274249?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pya25", "html_url": "https://github.com/pya25", "followers_url": "https://api.github.com/users/pya25/followers", "following_url": "https://api.github.com/users/pya25/following{/other_user}", "gists_url": "https://api.github.com/users/pya25/gists{/gist_id}", "starred_url": "https://api.github.com/users/pya25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pya25/subscriptions", "organizations_url": "https://api.github.com/users/pya25/orgs", "repos_url": "https://api.github.com/users/pya25/repos", "events_url": "https://api.github.com/users/pya25/events{/privacy}", "received_events_url": "https://api.github.com/users/pya25/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,633,096,192,000
1,633,096,192,000
null
NONE
null
## Describe the bug dataset loaded from disk cannot be shuffled. ## Steps to reproduce the bug ``` my_dataset = load_from_disk('s3://my_file/validate', fs=s3) sample = my_dataset.select(range(100)).shuffle(seed=1234) ``` ## Actual results ``` sample = my_dataset .select(range(100)).shuffle(seed=1234) File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2494, in shuffle new_fingerprint=new_fingerprint, File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2303, in select tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(indices_cache_file_name), delete=False) File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 547, in NamedTemporaryFile (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type) File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/tempfile.py", line 258, in _mkstemp_inner fd = _os.open(file, flags, 0o600) FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpnnu5uhnx/my_file/validate/tmpy76d70g4' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Python version: 3.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2998/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2998/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2997/comments
https://api.github.com/repos/huggingface/datasets/issues/2997/events
https://github.com/huggingface/datasets/issues/2997
1,013,270,069
I_kwDODunzps48ZUY1
2,997
Dataset has incorrect labels
{ "login": "marshmellow77", "id": 63367770, "node_id": "MDQ6VXNlcjYzMzY3Nzcw", "avatar_url": "https://avatars.githubusercontent.com/u/63367770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marshmellow77", "html_url": "https://github.com/marshmellow77", "followers_url": "https://api.github.com/users/marshmellow77/followers", "following_url": "https://api.github.com/users/marshmellow77/following{/other_user}", "gists_url": "https://api.github.com/users/marshmellow77/gists{/gist_id}", "starred_url": "https://api.github.com/users/marshmellow77/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marshmellow77/subscriptions", "organizations_url": "https://api.github.com/users/marshmellow77/orgs", "repos_url": "https://api.github.com/users/marshmellow77/repos", "events_url": "https://api.github.com/users/marshmellow77/events{/privacy}", "received_events_url": "https://api.github.com/users/marshmellow77/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi @marshmellow77, thanks for reporting.\r\n\r\nThat issue is fixed since `datasets` version 1.9.0 (see 16bc665f2753677c765011ef79c84e55486d4347).\r\n\r\nPlease, update `datasets` with: `pip install -U datasets`", "Thanks. Please note that the dataset explorer (https://huggingface.co/datasets/viewer/?dataset=turkish_product_reviews) still shows the incorrect state. The sentiment for the first few customer reviews is actually negative and should be labelled with \"0\", see screenshot:\r\n\r\n![Capture1](https://user-images.githubusercontent.com/63367770/135637150-93d9b09b-f1dd-4701-97a5-5cb2672ec0c7.PNG)\r\n\r\n\r\n", "Thanks @marshmellow77, good catch! I'm transferring this issue to https://github.com/huggingface/datasets-viewer. " ]
1,633,090,146,000
1,633,102,320,000
1,633,096,474,000
NONE
null
The dataset https://huggingface.co/datasets/turkish_product_reviews has incorrect labels - all reviews are labelled with "1" (positive sentiment). None of the reviews is labelled with "0". See screenshot attached: ![Capture](https://user-images.githubusercontent.com/63367770/135617428-14ce0b27-5208-4e66-a3ee-71542e3257b4.PNG)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2997/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2997/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2993/comments
https://api.github.com/repos/huggingface/datasets/issues/2993/events
https://github.com/huggingface/datasets/issues/2993
1,012,702,665
I_kwDODunzps48XJ3J
2,993
Can't download `trivia_qa/unfiltered`
{ "login": "VictorSanh", "id": 16107619, "node_id": "MDQ6VXNlcjE2MTA3NjE5", "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/VictorSanh", "html_url": "https://github.com/VictorSanh", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "repos_url": "https://api.github.com/users/VictorSanh/repos", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "wooo that was fast! thank you @lhoestq !\r\nit is able to process now, though it's ignoring all files and ending up with 0 examples now haha :/\r\n\r\nFor subset \"unfiltered\":\r\n```python\r\n>>> load_dataset(\"trivia_qa\", \"unfiltered\")\r\nDownloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/910043a609bb2bdf62b4874f68e0c24fb648cf81e40a358f4bd54c919d72c9ab...\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1354.53it/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 40.60it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py\", line 1198, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 647, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 748, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=2906575347, num_examples=10832, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='validation', num_bytes=3038966234, num_examples=11313, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='validation', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}]\r\n```\r\nFor subset \"rc\":\r\n```python\r\n>>> load_dataset(\"trivia_qa\", \"rc\")\r\nDownloading and preparing dataset trivia_qa/rc (download: 2.48 GiB, generated: 14.92 GiB, post-processed: Unknown size, total: 17.40 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/rc/1.1.0/910043a609bb2bdf62b4874f68e0c24fb648cf81e40a358f4bd54c919d72c9ab...\r\n100%|███████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 3806.08it/s]\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 51.57it/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py\", line 1198, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 647, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py\", line 748, in _download_and_prepare\r\n verify_splits(self.info.splits, split_dict)\r\n File \"/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 74, in verify_splits\r\n raise 
NonMatchingSplitsSizesError(str(bad_splits))\r\ndatasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=1577814583, num_examples=17210, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='test', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='train', num_bytes=12750976012, num_examples=138384, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}, {'expected': SplitInfo(name='validation', num_bytes=1688535379, num_examples=18669, dataset_name='trivia_qa'), 'recorded': SplitInfo(name='validation', num_bytes=0, num_examples=0, dataset_name='trivia_qa')}]\r\n```\r\n\r\nCould you look into that when you get a chance?\r\nI wonder if it's not something they changed on the file to download? i couldn't find any information", "@VictorSanh have you tried passing `download_mode=\"force_redownload\"`?\r\n```python\r\nds = load_dataset(\"trivia_qa\", \"unfiltered\", download_mode=\"force_redownload\")\r\n```", "I aggressively rmed caches, especially rming the `datasets/downloads/extracted/c3d265fa20d99a147a76e4f5e...` solved the issue.\r\nthank you both!\r\n" ]
1,633,042,818,000
1,633,115,243,000
1,633,115,242,000
MEMBER
null
## Describe the bug For some reason, I can't download `trivia_qa/unfiltered`. A file seems to be missing... I am able to see it fine through the viewer, though... ## Steps to reproduce the bug ```python >>> from datasets import load_dataset >>> load_dataset("trivia_qa", "unfiltered") Downloading and preparing dataset trivia_qa/unfiltered (download: 3.07 GiB, generated: 27.23 GiB, post-processed: Unknown size, total: 30.30 GiB) to /gpfsscratch/rech/six/commun/datasets/trivia_qa/unfiltered/1.1.0/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6... Traceback (most recent call last): File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 251, in _add_context with open(os.path.join(file_dir, fname), encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/gpfsscratch/rech/six/commun/datasets/downloads/extracted/9fcb7eddc6afd46fd074af3c5128931dfe4b548f933c925a23847faf4c1995ad/evidence/wikipedia/Peanuts.txt' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/load.py", line 852, in load_dataset use_auth_token=use_auth_token, File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 616, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split disable=bool(logging.get_verbosity() == logging.NOTSET), File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 303, in _generate_examples example = parse_example(article) File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 274, in parse_example _add_context(article.get("EntityPages", []), "WikiContext", wiki_dir), File "/gpfswork/rech/six/commun/modules/datasets_modules/datasets/trivia_qa/9977a5d6f72acfd92f587de052403e8138b43bb0d1ce595016c3baf7e14deba6/trivia_qa.py", line 253, in _add_context except (IOError, datasets.Value("errors").NotFoundError): File "<string>", line 5, in __init__ File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 265, in __post_init__ self.pa_type = string_to_arrow(self.dtype) File "/gpfswork/rech/six/commun/conda/victor/lib/python3.7/site-packages/datasets/features.py", line 134, in string_to_arrow f"Neither {datasets_dtype} nor {datasets_dtype + '_'} seems to be a pyarrow data type. " ValueError: Neither errors nor errors_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions ``` ## Expected results I am able to load another subset (`rc`), but unable to load this one. I am not sure why the try/except doesn't catch it... 
https://github.com/huggingface/datasets/blob/9675a5a1e7b99a86f9c250f6ea5fa5d1e6d5cc7d/datasets/trivia_qa/trivia_qa.py#L253 ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Linux-4.18.0-147.51.2.el8_1.x86_64-x86_64-with-redhat-8.1-Ootpa - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2993/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2993/timeline
null
null
null
false
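Side note on the trivia_qa traceback in #2993 above: the except clause in the dataset script references `datasets.Value("errors").NotFoundError`, which builds an invalid feature type and raises a new error instead of catching the missing evidence file. A minimal sketch of what a fixed helper could look like (the real `_add_context` in trivia_qa.py has a different signature; this only illustrates catching the file error directly):

```python
import logging
import os


def _add_context(entity_pages, context_field, file_dir):
    # Sketch only: catch the missing evidence file directly instead of
    # referencing datasets.Value("errors").NotFoundError, which is what
    # actually raises the ValueError seen in the traceback.
    for page in entity_pages:
        fname = page.get("Filename", "")
        try:
            with open(os.path.join(file_dir, fname), encoding="utf-8") as f:
                page[context_field] = f.read()
        except (IOError, FileNotFoundError):
            logging.info("File does not exist, skipping: %s", fname)
```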
https://api.github.com/repos/huggingface/datasets/issues/2991
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2991/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2991/comments
https://api.github.com/repos/huggingface/datasets/issues/2991/events
https://github.com/huggingface/datasets/issues/2991
1,012,174,823
I_kwDODunzps48VI_n
2,991
add documentation for the `Unix style pattern` matching feature that can be leveraged for `data_files` in `load_dataset`
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,633,008,121,000
1,633,008,121,000
null
NONE
null
Unless I'm mistaken, it seems that in the new documentation it is no longer mentioned that you can use Unix style pattern matching in the `data_files` argument of the `load_dataset` method. This feature was mentioned [here](https://huggingface.co/docs/datasets/loading_datasets.html#from-a-community-dataset-on-the-hugging-face-hub) in the previous documentation. I'd love to hear your opinion @lhoestq , @albertvillanova and @stevhliu
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2991/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2991/timeline
null
null
null
false
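For reference while the docs are updated, a minimal sketch of the Unix-style pattern matching that #2991 above refers to (the file paths are made up for illustration):

```python
from datasets import load_dataset

# A glob pattern expands to every matching file, so sharded local data can
# be loaded without listing each shard explicitly.
dataset = load_dataset("json", data_files="data/train-*.jsonl")

# Patterns can also be given per split.
dataset = load_dataset(
    "csv",
    data_files={"train": "data/train_*.csv", "validation": "data/valid_*.csv"},
)
```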
https://api.github.com/repos/huggingface/datasets/issues/2988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2988/comments
https://api.github.com/repos/huggingface/datasets/issues/2988/events
https://github.com/huggingface/datasets/issues/2988
1,011,148,017
I_kwDODunzps48ROTx
2,988
IndexError: Invalid key: 14 is out of bounds for size 0
{ "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "repos_url": "https://api.github.com/users/dorost1234/repos", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! Could you check the length of the `self.dataset` object (i.e. the Dataset object passed to the data loader) ? It looks like the dataset is empty.\r\nNot sure why the SWA optimizer would cause this though.", "Any updates on this? \r\nThe same error occurred to me too when running `cardiffnlp/twitter-roberta-base-sentiment` on a custom dataset. This happened when I tried to do `model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3])` without using sagemaker distribution. \r\nPython: 3.6.13\r\ndatasets: 1.6.2", "Hi @ruisi-su, do you have this issue while using SWA as well, or just data parallel ?\r\n\r\nIf you have a code example to reproduce this issue it would also be helpful", "@lhoestq I had this issue without SWA. I followed [this](https://github.com/huggingface/notebooks/blob/master/sagemaker/03_distributed_training_data_parallelism/sagemaker-notebook.ipynb) notebook to utilize multiple gpus on the [roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) model. This tutorial could only work if I am on `ml.p3.16xlarge`, which I don't have access to. So I tried using just `model = torch.nn.DataParallel(model, device_ids=[0, 1, 2, 3]` before calling `trainer.fit()`. But maybe this is not the right way to do distributed training. I can provide a code example if that will be more helpful.", "It might be an issue with old versions of `datasets`, can you try updating `datasets` ?" ]
1,632,931,464,000
1,639,416,247,000
null
NONE
null
## Describe the bug A clear and concise description of what the bug is. Hi. I am trying to implement stochastic weighted averaging optimizer with transformer library as described here https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/ , for this I am using a run_clm.py codes which is working fine before adding SWA optimizer, the moment I modify the model with `swa_model = AveragedModel(model)` in this script, I am getting the below error, since I am NOT touching the dataloader part, I am confused why this is occurring, I very much appreciate your opinion on this @lhoestq ## Steps to reproduce the bug ``` Traceback (most recent call last): File "run_clm.py", line 723, in <module> main() File "run_clm.py", line 669, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/transformers/trainer.py", line 1258, in train for step, inputs in enumerate(epoch_iterator): File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 475, in _next_data data = self._dataset_fetcher.fetch(index) # may raise StopIteration File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1530, in __getitem__ format_kwargs=self._format_kwargs, File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1517, in _getitem pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 368, in query_table _check_valid_index_key(key, size) File "/user/dara/libs/anaconda3/envs/success/lib/python3.7/site-packages/datasets/formatting/formatting.py", line 311, in _check_valid_index_key raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") IndexError: Invalid key: 14 is out of bounds for size 0 ``` ## Expected results not getting the index error ## Actual results Please see the above ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets 1.12.1 - Platform: linux - Python version: 3.7.11 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2988/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2988/timeline
null
null
null
false
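For context on the SWA report in #2988 above: the usual recipe from the linked PyTorch 1.6 post only wraps the model and never touches the dataloader, so an empty dataset is surprising. A minimal sketch of that recipe, assuming `model`, `optimizer`, `train_loader`, `num_epochs` and `swa_start` already exist:

```python
import torch
from torch.optim.swa_utils import SWALR, AveragedModel

swa_model = AveragedModel(model)               # running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.05)  # SWA learning-rate schedule

for epoch in range(num_epochs):
    for batch in train_loader:
        loss = model(**batch).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    if epoch >= swa_start:
        swa_model.update_parameters(model)     # accumulate the average
        swa_scheduler.step()

# Recompute batch-norm statistics for the averaged weights before evaluation.
torch.optim.swa_utils.update_bn(train_loader, swa_model)
```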
https://api.github.com/repos/huggingface/datasets/issues/2987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2987/comments
https://api.github.com/repos/huggingface/datasets/issues/2987/events
https://github.com/huggingface/datasets/issues/2987
1,011,026,141
I_kwDODunzps48Qwjd
2,987
ArrowInvalid: Can only convert 1-dimensional array values
{ "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/users/NielsRogge/followers", "following_url": "https://api.github.com/users/NielsRogge/following{/other_user}", "gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}", "starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions", "organizations_url": "https://api.github.com/users/NielsRogge/orgs", "repos_url": "https://api.github.com/users/NielsRogge/repos", "events_url": "https://api.github.com/users/NielsRogge/events{/privacy}", "received_events_url": "https://api.github.com/users/NielsRogge/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @NielsRogge, thanks for reporting!\r\n\r\nIn `datasets`, we were handling N-dimensional arrays only when passed as an instance of `np.array`, not when passed as a list of `np.array`s.\r\n\r\nI'm fixing it." ]
1,632,925,132,000
1,633,096,665,000
1,633,096,665,000
NONE
null
## Describe the bug For the ViT and LayoutLMv2 demo notebooks in my [Transformers-Tutorials repo](https://github.com/NielsRogge/Transformers-Tutorials), people reported an ArrowInvalid issue after applying the following function to a Dataset: ``` def preprocess_data(examples): images = [Image.open(path).convert("RGB") for path in examples['image_path']] words = examples['words'] boxes = examples['bboxes'] word_labels = examples['ner_tags'] encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels, padding="max_length", truncation=True) return encoded_inputs ``` ``` Full trace: --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-8-0fc3efc6f0c2> in <module>() 27 28 train_dataset = datasets['train'].map(preprocess_data, batched=True, remove_columns=datasets['train'].column_names, ---> 29 features=features) 30 test_dataset = datasets['test'].map(preprocess_data, batched=True, remove_columns=datasets['test'].column_names, 31 features=features) 13 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1701 new_fingerprint=new_fingerprint, 1702 disable_tqdm=disable_tqdm, -> 1703 desc=desc, 1704 ) 1705 else: /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 183 } 184 # apply actual function --> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 187 # re-apply format to the output /usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 396 # Call actual function 397 --> 398 out = func(self, *args, **kwargs) 399 400 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2063 writer.write_table(batch) 2064 else: -> 2065 writer.write_batch(batch) 2066 if update_data and writer is not None: 2067 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file /usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size) 409 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col) 410 typed_sequence_examples[col] = typed_sequence --> 411 pa_table = pa.Table.from_pydict(typed_sequence_examples) 412 self.write_table(pa_table, writer_batch_size) 413 /usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol() /usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 106 storage = 
numpy_to_pyarrow_listarray(self.data, type=type.value_type) 107 else: --> 108 storage = pa.array(self.data, type.storage_dtype) 109 out = pa.ExtensionArray.from_storage(type, storage) 110 elif isinstance(self.data, np.ndarray): /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array() /usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Can only convert 1-dimensional array values ``` It can be fixed by adding the following line: ```diff def preprocess_data(examples): images = [Image.open(path).convert("RGB") for path in examples['image_path']] words = examples['words'] boxes = examples['bboxes'] word_labels = examples['ner_tags'] encoded_inputs = processor(images, words, boxes=boxes, word_labels=word_labels, padding="max_length", truncation=True) + encoded_inputs["image"] = np.array(encoded_inputs["image"]) return encoded_inputs ``` However, would be great if this can be fixed within Datasets itself.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2987/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2987/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2984
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2984/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2984/comments
https://api.github.com/repos/huggingface/datasets/issues/2984/events
https://github.com/huggingface/datasets/issues/2984
1,010,484,326
I_kwDODunzps48OsRm
2,984
Exceeded maximum rows when reading large files
{ "login": "zijwang", "id": 25057983, "node_id": "MDQ6VXNlcjI1MDU3OTgz", "avatar_url": "https://avatars.githubusercontent.com/u/25057983?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zijwang", "html_url": "https://github.com/zijwang", "followers_url": "https://api.github.com/users/zijwang/followers", "following_url": "https://api.github.com/users/zijwang/following{/other_user}", "gists_url": "https://api.github.com/users/zijwang/gists{/gist_id}", "starred_url": "https://api.github.com/users/zijwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zijwang/subscriptions", "organizations_url": "https://api.github.com/users/zijwang/orgs", "repos_url": "https://api.github.com/users/zijwang/repos", "events_url": "https://api.github.com/users/zijwang/events{/privacy}", "received_events_url": "https://api.github.com/users/zijwang/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @zijwang, thanks for reporting this issue.\r\n\r\nYou did not mention which `datasets` version you are using, but looking at the code in the stack trace, it seems you are using an old version.\r\n\r\nCould you please update `datasets` (`pip install -U datasets`) and check if the problem persists?" ]
1,632,890,962,000
1,634,018,742,000
1,634,018,742,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. When using `load_dataset` with json files, if the files are too large, there will be "Exceeded maximum rows" error. ## Steps to reproduce the bug ```python dataset = load_dataset('json', data_files=data_files) # data files have 3M rows in a single file ``` ## Expected results No error ## Actual results ``` ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 134 with open(file, encoding="utf-8") as f: --> 135 dataset = json.load(f) 136 except json.JSONDecodeError: ~/anaconda3/envs/python/lib/python3.9/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 292 """ --> 293 return loads(fp.read(), 294 cls=cls, object_hook=object_hook, ~/anaconda3/envs/python/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw) 345 parse_constant is None and object_pairs_hook is None and not kw): --> 346 return _default_decoder.decode(s) 347 if cls is None: ~/anaconda3/envs/python/lib/python3.9/json/decoder.py in decode(self, s, _w) 339 if end != len(s): --> 340 raise JSONDecodeError("Extra data", s, end) 341 return obj JSONDecodeError: Extra data: line 2 column 1 (char 20321) During handling of the above exception, another exception occurred: ArrowInvalid Traceback (most recent call last) <ipython-input-20-ab3718a6482f> in <module> ----> 1 dataset = load_dataset('json', data_files=data_files) ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 841 842 # Download and prepare data --> 843 builder_instance.download_and_prepare( 844 download_config=download_config, 845 download_mode=download_mode, ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 606 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 607 if not downloaded_from_gcs: --> 608 self._download_and_prepare( 609 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 610 ) ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 684 try: 685 # Prepare split will record examples associated to the split --> 686 self._prepare_split(split_generator, **prepare_split_kwargs) 687 except OSError as e: 688 raise OSError( ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/builder.py in _prepare_split(self, split_generator) 1153 generator = self._generate_tables(**split_generator.gen_kwargs) 1154 with ArrowWriter(features=self.info.features, path=fpath) as writer: -> 1155 for key, table in utils.tqdm( 1156 generator, unit=" tables", leave=False, disable=bool(logging.get_verbosity() == logging.NOTSET) 1157 ): ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 135 dataset = json.load(f) 136 except json.JSONDecodeError: --> 137 raise e 138 raise ValueError( 139 f"Not able to read records in the JSON file at {file}. 
" ~/anaconda3/envs/python/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py in _generate_tables(self, files) 114 while True: 115 try: --> 116 pa_table = paj.read_json( 117 BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) 118 ) ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/_json.pyx in pyarrow._json.read_json() ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda3/envs/python/lib/python3.9/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Exceeded maximum rows ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: 3.9 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2984/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2984/timeline
null
null
null
false
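As a stopgap for the large-JSON report in #2984 above, while upgrading `datasets` (the fix suggested in the comments), a chunked pandas read is one way to sidestep the Arrow JSON reader entirely. A rough sketch, assuming the file is newline-delimited JSON and the filename is illustrative:

```python
import pandas as pd
from datasets import Dataset

# Read the JSON Lines file in chunks so a multi-million-row file never has
# to be parsed in one pass, then build the Dataset from the merged frame.
reader = pd.read_json("data.jsonl", lines=True, chunksize=100_000)
df = pd.concat(reader, ignore_index=True)
dataset = Dataset.from_pandas(df)
```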
https://api.github.com/repos/huggingface/datasets/issues/2980
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2980/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2980/comments
https://api.github.com/repos/huggingface/datasets/issues/2980/events
https://github.com/huggingface/datasets/issues/2980
1,009,873,482
I_kwDODunzps48MXJK
2,980
OpenSLR 25: ASR data for Amharic, Swahili and Wolof
{ "login": "cdleong", "id": 4109253, "node_id": "MDQ6VXNlcjQxMDkyNTM=", "avatar_url": "https://avatars.githubusercontent.com/u/4109253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cdleong", "html_url": "https://github.com/cdleong", "followers_url": "https://api.github.com/users/cdleong/followers", "following_url": "https://api.github.com/users/cdleong/following{/other_user}", "gists_url": "https://api.github.com/users/cdleong/gists{/gist_id}", "starred_url": "https://api.github.com/users/cdleong/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cdleong/subscriptions", "organizations_url": "https://api.github.com/users/cdleong/orgs", "repos_url": "https://api.github.com/users/cdleong/repos", "events_url": "https://api.github.com/users/cdleong/events{/privacy}", "received_events_url": "https://api.github.com/users/cdleong/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Whoever handles this just needs to: \r\n\r\n- [ ] fork the HuggingFace Datasets repo\r\n- [ ] update the [existing dataset script](https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py) to add SLR25. Lots of copypasting from other sections of the script should make that easy. \r\nAmharic URL: https://www.openslr.org/resources/25/data_readspeech_am.tar.bz2. \r\nSwahili URL: https://www.openslr.org/resources/25/data_broadcastnews_sw.tar.bz2, \r\nWolof URL: https://www.openslr.org/resources/25/data_readspeech_wo.tar.bz2\r\n- [ ] update the [data card](https://github.com/huggingface/datasets/blob/master/datasets/openslr/README.md) to include information about SLR25. There's lots of other examples to draw from. \r\n- [ ] add the appropriate language tags to the data card as well. https://www.w3.org/International/questions/qa-choosing-language-tags, or just use `sw`, `am`, and `wo` for consistency. \r\n- [ ] make a pull request to merge your changes back into HuggingFace's repo", "... also the example in \"use in datasets library\" should be updated. It currently says \r\n![image](https://user-images.githubusercontent.com/4109253/135115980-8583a44a-cae6-4121-b699-00667020849f.png)\r\nBut you actually have to specify a subset, e.g. \r\n```python\r\ndataset = load_dataset(\"openslr\", \"SLR32\")\r\n```", "![image](https://user-images.githubusercontent.com/4109253/135116070-82d4e732-b7b3-4c5b-bd4e-a40d8ccabb0e.png)\r\n\r\n" ]
1,632,841,476,000
1,632,936,314,000
null
CONTRIBUTOR
null
## Adding a Dataset - **Name:** *SLR25* - **Description:** *Subset 25 from OpenSLR. Other subsets have been added to https://huggingface.co/datasets/openslr, 25 covers Amharic, Swahili and Wolof data* - **Paper:** *https://www.openslr.org/25/ has citations for each of the three sub-subsets.* - **Data:** *Currently the three links to the .tar.bz2 files can be found at https://www.openslr.org/25/* - **Motivation:** *Increase ASR data for underrepresented African languages. Also, other subsets of OpenSLR speech recognition have been uploaded, so this would be easy.* https://github.com/huggingface/datasets/blob/master/datasets/openslr/openslr.py has already been created for various other OpenSLR subsets, so this should be relatively straightforward to do.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2980/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2980/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2979
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2979/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2979/comments
https://api.github.com/repos/huggingface/datasets/issues/2979/events
https://github.com/huggingface/datasets/issues/2979
1,009,634,147
I_kwDODunzps48Lctj
2,979
ValueError when computing f1 metric with average None
{ "login": "asofiaoliveira", "id": 74454835, "node_id": "MDQ6VXNlcjc0NDU0ODM1", "avatar_url": "https://avatars.githubusercontent.com/u/74454835?v=4", "gravatar_id": "", "url": "https://api.github.com/users/asofiaoliveira", "html_url": "https://github.com/asofiaoliveira", "followers_url": "https://api.github.com/users/asofiaoliveira/followers", "following_url": "https://api.github.com/users/asofiaoliveira/following{/other_user}", "gists_url": "https://api.github.com/users/asofiaoliveira/gists{/gist_id}", "starred_url": "https://api.github.com/users/asofiaoliveira/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/asofiaoliveira/subscriptions", "organizations_url": "https://api.github.com/users/asofiaoliveira/orgs", "repos_url": "https://api.github.com/users/asofiaoliveira/repos", "events_url": "https://api.github.com/users/asofiaoliveira/events{/privacy}", "received_events_url": "https://api.github.com/users/asofiaoliveira/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @asofiaoliveira, thanks for reporting.\r\n\r\nI'm fixing it." ]
1,632,828,893,000
1,633,097,858,000
1,633,097,858,000
NONE
null
## Describe the bug When I try to compute the f1 score for each class in a multiclass classification problem, I get a ValueError. The same happens with recall and precision. I traced the error to the `.item()` in these scripts, which is probably there for the other averages. E.g. from f1.py: ```python return { "f1": f1_score( references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight, ).item(), } ``` Since the result is an array with more than one item, the `.item()` throws the error. I didn't submit a PR because this might be needed for the other averages, I'm not very familiar with the library ## Steps to reproduce the bug ```python from datasets import load_metric metric = load_metric("f1") metric.add_batch(predictions=[2,34,1,34,1,2,3], references=[23,52,1,3,523,5,8]) metric.compute(average=None) ``` ## Expected results `array([0.66666667, 0. , 0. , 0. , 0. , 0. , 0. , 0. , 0. ])` ## Actual results ValueError: can only convert an array of size 1 to a Python scalar ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.5 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2979/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2979/timeline
null
null
null
false
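The fix hinted at in #2979 above is small: `.item()` only works on size-1 arrays, so the unwrap has to be conditional. A minimal sketch of the idea (not the exact patch that shipped in the metric script):

```python
import numpy as np
from sklearn.metrics import f1_score


def compute_f1(predictions, references, average="binary", **kwargs):
    score = f1_score(references, predictions, average=average, **kwargs)
    if isinstance(score, np.ndarray):
        return {"f1": score}      # per-class scores when average=None
    return {"f1": float(score)}   # a single scalar for the other averages
```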
https://api.github.com/repos/huggingface/datasets/issues/2978
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2978/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2978/comments
https://api.github.com/repos/huggingface/datasets/issues/2978/events
https://github.com/huggingface/datasets/issues/2978
1,009,521,419
I_kwDODunzps48LBML
2,978
Run CI tests against non-production server
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hey @albertvillanova could you provide more context, including extracts from the discussion we had ?\r\n\r\nLet's ping @Pierrci @julien-c and @n1t0 for their opinion about that", "@julien-c increased the huggingface.co production workers in order to see if it solve [the 502 you had this morning](https://app.circleci.com/pipelines/github/huggingface/datasets/7843/workflows/fc83fa32-18f5-4dc3-9e2f-ba277ae1af74)\r\n\r\nFor the decision process: be aware that moon-staging does not have persistent repos (they are deleted regularly). as a consequence, **if the moon-staging solution is validated**, you should consider a way to keep the repository that are loaded in tests. These are the ones I found : https://github.com/huggingface/datasets/blob/d488db2f64f312f88f72bbc57a09b7eddb329182/tests/test_load.py and https://github.com/huggingface/datasets/blob/40773111c3e7db8a992fa1c48af32d900a1018d6/tests/test_streaming_download_manager." ]
1,632,822,086,000
1,632,842,630,000
null
MEMBER
null
Currently, the CI test suite performs requests to the HF production server. As discussed with @elishowk, we should refactor our tests to use the HF staging server instead, like `huggingface_hub` and `transformers`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2978/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2978/timeline
null
null
null
false
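One possible shape for the change discussed in #2978 above, assuming the Hub endpoint used by `datasets` can be overridden; the attribute and environment-variable names below are assumptions for illustration, not the final implementation:

```python
import pytest

CI_HUB_ENDPOINT = "https://moon-staging.huggingface.co"  # staging Hub, see comments above


@pytest.fixture(autouse=True)
def use_staging_endpoint(monkeypatch):
    # Point every Hub request made during the test session at staging so CI
    # never reads from or writes to the production server.
    monkeypatch.setenv("HF_ENDPOINT", CI_HUB_ENDPOINT)
    import datasets.config

    monkeypatch.setattr(datasets.config, "HF_ENDPOINT", CI_HUB_ENDPOINT, raising=False)
```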
https://api.github.com/repos/huggingface/datasets/issues/2977
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2977/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2977/comments
https://api.github.com/repos/huggingface/datasets/issues/2977/events
https://github.com/huggingface/datasets/issues/2977
1,009,378,692
I_kwDODunzps48KeWE
2,977
Impossible to load compressed csv
{ "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Valahaar/followers", "following_url": "https://api.github.com/users/Valahaar/following{/other_user}", "gists_url": "https://api.github.com/users/Valahaar/gists{/gist_id}", "starred_url": "https://api.github.com/users/Valahaar/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Valahaar/subscriptions", "organizations_url": "https://api.github.com/users/Valahaar/orgs", "repos_url": "https://api.github.com/users/Valahaar/repos", "events_url": "https://api.github.com/users/Valahaar/events{/privacy}", "received_events_url": "https://api.github.com/users/Valahaar/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @Valahaar, thanks for reporting and for your investigation about the source cause.\r\n\r\nYou are right and that commit prevents `pandas` from inferring the compression. On the other hand, @lhoestq did that change to support loading that dataset in streaming mode. \r\n\r\nI'm fixing it." ]
1,632,813,534,000
1,633,103,596,000
1,633,103,595,000
CONTRIBUTOR
null
## Describe the bug It is not possible to load from a compressed csv anymore. ## Steps to reproduce the bug ```python load_dataset('csv', data_files=['/path/to/csv.bz2']) ``` ## Problem and possible solution This used to work, but the commit that broke it is [this one](https://github.com/huggingface/datasets/commit/ad489d4597381fc2d12c77841642cbeaecf7a2e0#diff-6f60f8d0552b75be8b3bfd09994480fd60dcd4e7eb08d02f721218c3acdd2782). `pandas` usually gets the compression information from the filename itself (which was previously directly passed). Now, since it gets a file descriptor, it might be good to auto-infer the compression or let the user pass the `compression` kwarg to `load_dataset` (or maybe warn the user if the file ends with a commonly known compression scheme?). ## Environment info - `datasets` version: 1.10.0 (and over) - Platform: Linux-5.8.0-45-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2977/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2977/timeline
null
null
null
false
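A rough sketch of the auto-inference idea proposed in #2977 above: since pandas now receives a file object rather than a path, the compression can be derived from the original filename (or accepted as an explicit keyword) and forwarded to `read_csv`. The helper name is hypothetical:

```python
import pandas as pd

COMPRESSION_BY_EXTENSION = {".bz2": "bz2", ".gz": "gzip", ".xz": "xz", ".zip": "zip"}


def read_csv_file(file_obj, original_name, **read_csv_kwargs):
    # Sketch only: infer the compression from the filename unless the caller
    # already passed one, then let pandas decompress the stream itself.
    if "compression" not in read_csv_kwargs:
        for ext, compression in COMPRESSION_BY_EXTENSION.items():
            if original_name.endswith(ext):
                read_csv_kwargs["compression"] = compression
                break
    return pd.read_csv(file_obj, **read_csv_kwargs)
```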
https://api.github.com/repos/huggingface/datasets/issues/2976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2976/comments
https://api.github.com/repos/huggingface/datasets/issues/2976/events
https://github.com/huggingface/datasets/issues/2976
1,008,647,889
I_kwDODunzps48Hr7R
2,976
Can't load dataset
{ "login": "mskovalova", "id": 77006774, "node_id": "MDQ6VXNlcjc3MDA2Nzc0", "avatar_url": "https://avatars.githubusercontent.com/u/77006774?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mskovalova", "html_url": "https://github.com/mskovalova", "followers_url": "https://api.github.com/users/mskovalova/followers", "following_url": "https://api.github.com/users/mskovalova/following{/other_user}", "gists_url": "https://api.github.com/users/mskovalova/gists{/gist_id}", "starred_url": "https://api.github.com/users/mskovalova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mskovalova/subscriptions", "organizations_url": "https://api.github.com/users/mskovalova/orgs", "repos_url": "https://api.github.com/users/mskovalova/repos", "events_url": "https://api.github.com/users/mskovalova/events{/privacy}", "received_events_url": "https://api.github.com/users/mskovalova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @mskovalova, \r\n\r\nSome datasets have multiple configurations. Therefore, in order to load them, you have to specify both the *dataset name* and the *configuration name*.\r\n\r\nIn the error message you got, you have a usage example:\r\n- To load the 'wikitext-103-raw-v1' configuration of the 'wikitext' dataset, you should use: \r\n ```python\r\n load_dataset('wikitext', 'wikitext-103-raw-v1')\r\n ```\r\n\r\nIn your case, if you would like to load the 'wikitext-2-v1' configuration of the 'wikitext' dataset, please use:\r\n```python\r\nraw_datasets = load_dataset(\"wikitext\", \"wikitext-2-v1\")\r\n```" ]
1,632,778,694,000
1,632,811,981,000
1,632,811,981,000
NONE
null
I'm trying to load a wikitext dataset ``` from datasets import load_dataset raw_datasets = load_dataset("wikitext") ``` ValueError: Config name is missing. Please pick one among the available configs: ['wikitext-103-raw-v1', 'wikitext-2-raw-v1', 'wikitext-103-v1', 'wikitext-2-v1'] Example of usage: `load_dataset('wikitext', 'wikitext-103-raw-v1')`. If I try ``` from datasets import load_dataset raw_datasets = load_dataset("wikitext-2-v1") ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.12.1/datasets/wikitext-2-v1/wikitext-2-v1.py #### Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic (colab) - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2976/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2976/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2972
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2972/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2972/comments
https://api.github.com/repos/huggingface/datasets/issues/2972/events
https://github.com/huggingface/datasets/issues/2972
1,007,808,714
I_kwDODunzps48EfDK
2,972
OSError: Not enough disk space.
{ "login": "qqaatw", "id": 24835382, "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "gravatar_id": "", "url": "https://api.github.com/users/qqaatw", "html_url": "https://github.com/qqaatw", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "repos_url": "https://api.github.com/users/qqaatw/repos", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Maybe we can change the disk space calculating API from `shutil.disk_usage` to `os.statvfs` in UNIX-like system, which can provide correct results.\r\n```\r\nstatvfs = os.statvfs('path')\r\navail_space_bytes = statvfs.f_frsize * statvfs.f_bavail\r\n```", "Hi @qqaatw, thanks for reporting.\r\n\r\nCould you please try:\r\n```python\r\ndataset = load_dataset(\"natural_questions\", cache_dir=os.path.abspath(args.dataset_cache_dir))\r\n```", "@albertvillanova it works! Thanks for your suggestion. Is that a bug of `DownloadConfig`?", "`DownloadConfig` only sets the location to download the files. On the other hand, `cache_dir` sets the location for both downloading and caching the data. You can find more information here: https://huggingface.co/docs/datasets/loading_datasets.html#cache-directory" ]
1,632,728,482,000
1,632,811,527,000
1,632,811,395,000
CONTRIBUTOR
null
## Describe the bug I'm trying to download the `natural_questions` dataset from the Internet, and I've specified a cache_dir that is located on a mounted disk with enough disk space. However, even though the space is sufficient, the disk space check still reports that the root `/` disk does not have enough space. The file system structure is shown below: the root `/` has `115G` of disk space available, and `sda1` is mounted to `/mnt`, which has `1.2T` of disk space available: ``` / /mnt/sda1/path/to/args.dataset_cache_dir ``` ## Steps to reproduce the bug ```python dataset_config = DownloadConfig( cache_dir=os.path.abspath(args.dataset_cache_dir), resume_download=True, ) dataset = load_dataset("natural_questions", download_config=dataset_config) ``` ## Expected results The dataset can be downloaded without an error. ## Actual results The following error is raised: ``` OSError: Not enough disk space. Needed: 134.92 GiB (download: 41.97 GiB, generated: 92.95 GiB, post-processed: Unknown size) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Ubuntu 18.04 - Python version: 3.8.10 - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2972/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2972/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2971
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2971/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2971/comments
https://api.github.com/repos/huggingface/datasets/issues/2971/events
https://github.com/huggingface/datasets/issues/2971
1,007,696,522
I_kwDODunzps48EDqK
2,971
masakhaner dataset load problem
{ "login": "ontocord", "id": 8900094, "node_id": "MDQ6VXNlcjg5MDAwOTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8900094?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ontocord", "html_url": "https://github.com/ontocord", "followers_url": "https://api.github.com/users/ontocord/followers", "following_url": "https://api.github.com/users/ontocord/following{/other_user}", "gists_url": "https://api.github.com/users/ontocord/gists{/gist_id}", "starred_url": "https://api.github.com/users/ontocord/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ontocord/subscriptions", "organizations_url": "https://api.github.com/users/ontocord/orgs", "repos_url": "https://api.github.com/users/ontocord/repos", "events_url": "https://api.github.com/users/ontocord/events{/privacy}", "received_events_url": "https://api.github.com/users/ontocord/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, @ontocord. We are fixing the wrong metadata." ]
1,632,718,747,000
1,632,747,599,000
1,632,747,599,000
CONTRIBUTOR
null
## Describe the bug Masakhaner dataset is not loading ## Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("masakhaner",'amh') ``` ## Expected results Expected the return of a dataset ## Actual results ``` NonMatchingSplitsSizesError Traceback (most recent call last) <ipython-input-3-a6abc1161d4c> in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("masakhaner",'amh') 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits) 72 ] 73 if len(bad_splits) > 0: ---> 74 raise NonMatchingSplitsSizesError(str(bad_splits)) 75 logger.info("All the splits matched successfully.") 76 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=639927, num_examples=1751, dataset_name='masakhaner'), 'recorded': SplitInfo(name='train', num_bytes=639911, num_examples=1750, dataset_name='masakhaner')}, {'expected': SplitInfo(name='validation', num_bytes=92768, num_examples=251, dataset_name='masakhaner'), 'recorded': SplitInfo(name='validation', num_bytes=92753, num_examples=250, dataset_name='masakhaner')}, {'expected': SplitInfo(name='test', num_bytes=184286, num_examples=501, dataset_name='masakhaner'), 'recorded': SplitInfo(name='test', num_bytes=184271, num_examples=500, dataset_name='masakhaner')}] ``` ## Environment info Google Colab
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2971/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2971/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2970
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2970/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2970/comments
https://api.github.com/repos/huggingface/datasets/issues/2970/events
https://github.com/huggingface/datasets/issues/2970
1,007,340,089
I_kwDODunzps48Cso5
2,970
Magnet’s
{ "login": "rcacho172", "id": 90449239, "node_id": "MDQ6VXNlcjkwNDQ5MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcacho172", "html_url": "https://github.com/rcacho172", "followers_url": "https://api.github.com/users/rcacho172/followers", "following_url": "https://api.github.com/users/rcacho172/following{/other_user}", "gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions", "organizations_url": "https://api.github.com/users/rcacho172/orgs", "repos_url": "https://api.github.com/users/rcacho172/repos", "events_url": "https://api.github.com/users/rcacho172/events{/privacy}", "received_events_url": "https://api.github.com/users/rcacho172/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,632,649,829,000
1,632,652,739,000
1,632,652,739,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2970/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2970/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2969
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2969/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2969/comments
https://api.github.com/repos/huggingface/datasets/issues/2969/events
https://github.com/huggingface/datasets/issues/2969
1,007,217,867
I_kwDODunzps48COzL
2,969
medical-dialog error
{ "login": "smeyerhot", "id": 43877130, "node_id": "MDQ6VXNlcjQzODc3MTMw", "avatar_url": "https://avatars.githubusercontent.com/u/43877130?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smeyerhot", "html_url": "https://github.com/smeyerhot", "followers_url": "https://api.github.com/users/smeyerhot/followers", "following_url": "https://api.github.com/users/smeyerhot/following{/other_user}", "gists_url": "https://api.github.com/users/smeyerhot/gists{/gist_id}", "starred_url": "https://api.github.com/users/smeyerhot/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/smeyerhot/subscriptions", "organizations_url": "https://api.github.com/users/smeyerhot/orgs", "repos_url": "https://api.github.com/users/smeyerhot/repos", "events_url": "https://api.github.com/users/smeyerhot/events{/privacy}", "received_events_url": "https://api.github.com/users/smeyerhot/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @smeyerhot, thanks for reporting.\r\n\r\nYou are right: there is an issue with the dataset metadata. I'm fixing it.\r\n\r\nIn the meantime, you can circumvent the issue by passing `ignore_verifications=True`:\r\n```python\r\nraw_datasets = load_dataset(\"medical_dialog\", \"en\", split=\"train\", download_mode=\"force_redownload\", data_dir=\"./Medical-Dialogue-Dataset-English\", ignore_verifications=True)\r\n```" ]
1,632,611,324,000
1,633,938,402,000
1,633,938,402,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. When I attempt to download the Hugging Face dataset medical_dialog, it errors out midway through. ## Steps to reproduce the bug ```python raw_datasets = load_dataset("medical_dialog", "en", split="train", download_mode="force_redownload", data_dir="./Medical-Dialogue-Dataset-English") ``` ## Expected results A clear and concise description of the expected results. No error ## Actual results ``` 3 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_splits(expected_splits, recorded_splits) 72 ] 73 if len(bad_splits) > 0: ---> 74 raise NonMatchingSplitsSizesError(str(bad_splits)) 75 logger.info("All the splits matched successfully.") 76 NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='medical_dialog'), 'recorded': SplitInfo(name='train', num_bytes=295097913, num_examples=229674, dataset_name='medical_dialog')}] ``` Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.21.1 - Platform: colab - Python version: colab 3.7 - PyArrow version: N/A
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2969/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2969/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2968
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2968/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2968/comments
https://api.github.com/repos/huggingface/datasets/issues/2968/events
https://github.com/huggingface/datasets/issues/2968
1,007,209,488
I_kwDODunzps48CMwQ
2,968
`DatasetDict` cannot be exported to parquet if the splits have different features
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "repos_url": "https://api.github.com/users/LysandreJik/repos", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This is because you have to specify which split corresponds to what file:\r\n```python\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```\r\n\r\nOtherwise it tries to concatenate the two splits, and it fails because they don't have the same features.\r\n\r\nIt works with save_to_disk/load_from_disk because it also stores json files that contain the information about which files goes into which split", "Wonderful, thanks for the help!", "I may be mistaken but I think the following doesn't work either:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"lhoestq/custom_squad\")\r\n\r\n\r\ndef identical_answers(e):\r\n e['identical_answers'] = len(set(e['answers']['text'])) == 1\r\n return e\r\n\r\n\r\nds['validation'] = ds['validation'].map(identical_answers)\r\nds['train'].to_parquet(\"./ds/train/split.parquet\")\r\nds['validation'].to_parquet(\"./ds/validation/split.parquet\")\r\n\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```", "It works on my side as soon as the directories named `ds/train` and `ds/validation` exist (otherwise it returns a FileNotFoundError). What error are you getting ?", "Also we may introduce a default mapping for the data files:\r\n```python\r\n{\r\n \"train\": [\"*train*\"],\r\n \"test\": [\"*test*\"],\r\n \"validation\": [\"*dev*\", \"valid\"],\r\n}\r\n```\r\nthis way if you name your files according to the splits you won't have to specify the data_files parameter. What do you think ?\r\n\r\nI moved this discussion to #3027 ", "I'm getting the following error:\r\n\r\n```\r\nDownloading and preparing dataset custom_squad/plain_text to /home/lysandre/.cache/huggingface/datasets/lhoestq___custom_squad)/plain_text/1.0.0/397916d1ae99584877e0fb4f5b8b6f01e66fcbbeff4d178afb30c933a8d0d93a...\r\n100%|██████████| 2/2 [00:00<00:00, 7760.04it/s]\r\n100%|██████████| 2/2 [00:00<00:00, 2020.38it/s]\r\n 0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last):\r\n File \"<input>\", line 1, in <module>\r\n File \"/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_bundle/pydev_umd.py\", line 198, in runfile\r\n pydev_imports.execfile(filename, global_vars, local_vars) # execute the script\r\n File \"/opt/pycharm-professional/plugins/python/helpers/pydev/_pydev_imps/_pydev_execfile.py\", line 18, in execfile\r\n exec(compile(contents+\"\\n\", file, 'exec'), glob, loc)\r\n File \"/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py\", line 12, in <module>\r\n ds = load_dataset(\"lhoestq/custom_squad\")\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py\", line 1207, in load_dataset\r\n ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 823, in as_dataset\r\n datasets = utils.map_nested(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 207, in map_nested\r\n mapped = [\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 208, in <listcomp>\r\n _single_map_nested((function, obj, types, None, True))\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py\", line 143, in 
_single_map_nested\r\n return function(data_struct)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 854, in _build_single_dataset\r\n ds = self._as_dataset(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py\", line 924, in _as_dataset\r\n dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 217, in read\r\n return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 238, in read_files\r\n pa_table = self._read_files(files, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 173, in _read_files\r\n pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 308, in _get_table_from_filename\r\n table = ArrowReader.read_table(filename, in_memory=in_memory)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py\", line 327, in read_table\r\n return table_cls.from_file(filename)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py\", line 458, in from_file\r\n table = _memory_mapped_arrow_table_from_file(filename)\r\n File \"/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py\", line 45, in _memory_mapped_arrow_table_from_file\r\n pa_table = opened_stream.read_all()\r\n File \"pyarrow/ipc.pxi\", line 563, in pyarrow.lib.RecordBatchReader.read_all\r\n File \"pyarrow/error.pxi\", line 114, in pyarrow.lib.check_status\r\nOSError: Header-type of flatbuffer-encoded Message is not RecordBatch.\r\n```\r\n\r\nTried on current master, after updating latest dependencies and obtained the same result", "The proposal in #3027 sounds good to me!", "I just tried again on colab by installing `datasets` from source with pyarrow 3.0.0 and didn't get any error.\r\n\r\nYou error seems to happen when doing\r\n```python\r\nds = load_dataset(\"lhoestq/custom_squad\")\r\n```\r\n\r\nMore specifically it fails when trying to read the arrow file that just got generated. I haven't issues like this before. Can you make sure you have a recent version of `pyarrow` ? Maybe it was an old version that wrote the arrow file and some header was missing.", "Thank you for your pointer! This seems to have been linked to Python 3.9.7: it works flawlessly with Python 3.8.6. This can be closed, thanks a lot for your help." ]
1,632,608,319,000
1,633,646,862,000
1,633,646,846,000
MEMBER
null
## Describe the bug I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly. For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folders representing individual splits. This works too, as long as the splits have identical features. If a split has different features to neighboring splits, then loading the dataset will fail: a single schema is used to load both splits, resulting in a failure to load the second parquet file. ## Steps to reproduce the bug The following works as expected: ```python from datasets import load_dataset ds = load_dataset("lhoestq/custom_squad") ds['train'].to_parquet("./ds/train/split.parquet") ds['validation'].to_parquet("./ds/validation/split.parquet") brand_new_dataset = load_dataset("ds") ``` Modifying a single split to add a new feature ends up in a crash: ```python from datasets import load_dataset ds = load_dataset("lhoestq/custom_squad") def identical_answers(e): e['identical_answers'] = len(set(e['answers']['text'])) == 1 return e ds['validation'] = ds['validation'].map(identical_answers) ds['train'].to_parquet("./ds/train/split.parquet") ds['validation'].to_parquet("./ds/validation/split.parquet") brand_new_dataset = load_dataset("ds") ``` ``` File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 26, in <module> brand_new_dataset = load_dataset("ds") File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1151, in load_dataset builder_instance.download_and_prepare( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 642, in download_and_prepare self._download_and_prepare( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 732, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 1194, in _prepare_split writer.write_table(table) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_writer.py", line 428, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1257, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1833, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1808, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "identical_answers" does not exist in table schema' ``` It does work, however, to use the `save_to_disk` and `load_from_disk` methods: ```py from datasets import load_from_disk ds = load_dataset("lhoestq/custom_squad") def identical_answers(e): e['identical_answers'] = len(set(e['answers']['text'])) == 1 return e ds['validation'] = ds['validation'].map(identical_answers) ds.save_to_disk("local_path") brand_new_dataset = load_from_disk("local_path") ``` ## Expected results The saving works correctly - but the loading fails. I would expect either an error when saving or an error-less instantiation of the dataset through the parquet files. 
If it's helpful, I've traced a possible patch to the `write_table` method here: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L424-L425 The writer is built only if the parquet writer is `None`, but I expect we would want to build a new writer as the table schema has changed. Furthermore, it relies on having the property `update_features` set to `True` in order to update the features: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/arrow_writer.py#L254-L255 but the `ArrowWriter` is instantiated without that option in the `_prepare_split` method of the `ArrowBasedBuilder`: https://github.com/huggingface/datasets/blob/26ff41aa3a642e46489db9e95be1e9a8c4e64bea/src/datasets/builder.py#L1190 Updating these two parts to recreate a schema on each split results in an error that is, unfortunately, out of my expertise: ``` File "/home/lysandre/.config/JetBrains/PyCharm2021.2/scratches/datasets/upload_dataset.py", line 27, in <module> brand_new_dataset = load_dataset("ds") File "/home/lysandre/Workspaces/Python/datasets/src/datasets/load.py", line 1163, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 819, in as_dataset datasets = utils.map_nested( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 207, in map_nested mapped = [ File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 208, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/utils/py_utils.py", line 143, in _single_map_nested return function(data_struct) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 850, in _build_single_dataset ds = self._as_dataset( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/builder.py", line 920, in _as_dataset dataset_kwargs = ArrowReader(self._cache_dir, self.info).read( File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 217, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 238, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 173, in _read_files pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 308, in _get_table_from_filename table = ArrowReader.read_table(filename, in_memory=in_memory) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/arrow_reader.py", line 327, in read_table return table_cls.from_file(filename) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 458, in from_file table = _memory_mapped_arrow_table_from_file(filename) File "/home/lysandre/Workspaces/Python/datasets/src/datasets/table.py", line 45, in _memory_mapped_arrow_table_from_file pa_table = opened_stream.read_all() File "pyarrow/ipc.pxi", line 563, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 114, in pyarrow.lib.check_status OSError: Header-type of flatbuffer-encoded Message is not RecordBatch. 
``` ## Environment info - `datasets` version: 1.12.2.dev0 - Platform: Linux-5.14.7-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2968/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2968/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2967
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2967/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2967/comments
https://api.github.com/repos/huggingface/datasets/issues/2967/events
https://github.com/huggingface/datasets/issues/2967
1,007,194,837
I_kwDODunzps48CJLV
2,967
Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets
{ "login": "WadeYin9712", "id": 42200725, "node_id": "MDQ6VXNlcjQyMjAwNzI1", "avatar_url": "https://avatars.githubusercontent.com/u/42200725?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WadeYin9712", "html_url": "https://github.com/WadeYin9712", "followers_url": "https://api.github.com/users/WadeYin9712/followers", "following_url": "https://api.github.com/users/WadeYin9712/following{/other_user}", "gists_url": "https://api.github.com/users/WadeYin9712/gists{/gist_id}", "starred_url": "https://api.github.com/users/WadeYin9712/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WadeYin9712/subscriptions", "organizations_url": "https://api.github.com/users/WadeYin9712/orgs", "repos_url": "https://api.github.com/users/WadeYin9712/repos", "events_url": "https://api.github.com/users/WadeYin9712/events{/privacy}", "received_events_url": "https://api.github.com/users/WadeYin9712/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[]
1,632,603,495,000
1,633,293,262,000
1,633,293,262,000
NONE
null
**Is your feature request related to a problem? Please describe.** Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets? **Describe the solution you'd like** N/A **Describe alternatives you've considered** N/A **Additional context** This is Da Yin at UCLA. Recently, we have published an EMNLP 2021 paper about geo-diverse visual commonsense reasoning (https://arxiv.org/abs/2109.06860). We propose a new dataset called GD-VCR, a vision-and-language dataset to evaluate how well V&L models perform on scenarios involving geo-location-specific commonsense. We hope to have our V&L dataset incorporated into Huggingface to further promote our project, but I haven't seen many V&L datasets in the current package. Is it possible to add V&L datasets, and if so, how should we prepare the data for loading? Thank you very much!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2967/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2967/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2965
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2965/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2965/comments
https://api.github.com/repos/huggingface/datasets/issues/2965/events
https://github.com/huggingface/datasets/issues/2965
1,007,084,153
I_kwDODunzps48BuJ5
2,965
Invalid download URL of WMT17 `zh-en` data
{ "login": "Ririkoo", "id": 3339950, "node_id": "MDQ6VXNlcjMzMzk5NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/3339950?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Ririkoo", "html_url": "https://github.com/Ririkoo", "followers_url": "https://api.github.com/users/Ririkoo/followers", "following_url": "https://api.github.com/users/Ririkoo/following{/other_user}", "gists_url": "https://api.github.com/users/Ririkoo/gists{/gist_id}", "starred_url": "https://api.github.com/users/Ririkoo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ririkoo/subscriptions", "organizations_url": "https://api.github.com/users/Ririkoo/orgs", "repos_url": "https://api.github.com/users/Ririkoo/repos", "events_url": "https://api.github.com/users/Ririkoo/events{/privacy}", "received_events_url": "https://api.github.com/users/Ririkoo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,632,575,852,000
1,632,575,852,000
null
NONE
null
## Describe the bug Partial data (wmt17 zh-en) cannot be downloaded due to an invalid URL. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wmt17','zh-en') ``` ## Expected results The dataset downloads without errors. ## Actual results ConnectionError: Couldn't reach ftp://cwmt-wmt:cwmt-wmt@datasets.nju.edu.cn/parallel/casia2015.zip
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2965/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2965/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2964
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2964/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2964/comments
https://api.github.com/repos/huggingface/datasets/issues/2964/events
https://github.com/huggingface/datasets/issues/2964
1,006,605,904
I_kwDODunzps47_5ZQ
2,964
Error when calculating Matthews Correlation Coefficient loaded with `load_metric`
{ "login": "alvarobartt", "id": 36760800, "node_id": "MDQ6VXNlcjM2NzYwODAw", "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alvarobartt", "html_url": "https://github.com/alvarobartt", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "repos_url": "https://api.github.com/users/alvarobartt/repos", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "After some more tests I've realized that this \"issue\" is due to the `numpy.float64` to `float` conversion, but when defining a function named `compute_metrics` as it follows:\r\n\r\n```python\r\ndef compute_metrics(eval_preds):\r\n metric = load_metric(\"matthews_correlation\")\r\n logits, labels = eval_preds\r\n predictions = np.argmax(logits, axis=1)\r\n return metric.compute(predictions=predictions, references=labels)\r\n```\r\n\r\nIt fails when the evaluation metrics are computed in the `Trainer` with the same error code `AttributeError: 'float' object has no attribute 'item'` as the output is not a `numpy.float64`... Maybe I'm doing something wrong, not sure!", "Ok after some more experiments I've realized that it's an issue from my side, at first I thought it was due to `fp16=True` in `TrainingArguments`, but in the end that may not be the issue, so I'll close this for now and check later, since the mistake is on my side :weary: Sorry for the inconvenience!" ]
1,632,498,921,000
1,632,557,167,000
1,632,557,167,000
NONE
null
## Describe the bug After loading the metric named "[Matthews Correlation Coefficient](https://huggingface.co/metrics/matthews_correlation)" from `🤗datasets`, the `.compute` method fails with the following exception `AttributeError: 'float' object has no attribute 'item'` (complete stack trace can be provided if required). ## Steps to reproduce the bug ```python import torch predictions = torch.ones((10,)) references = torch.zeros((10,)) from datasets import load_metric METRIC = load_metric("matthews_correlation") result = METRIC.compute(predictions=predictions, references=references) ``` ## Expected results We should expect a Python `dict` as it follows: ``` { "matthews_correlation": float() } ``` as defined in https://github.com/huggingface/datasets/blob/master/metrics/matthews_correlation/matthews_correlation.py, so the fix will imply removing `.item()`, since the value returned by the `scikit-learn` function is not a `torch.Tensor` but a `float`, which means that the `.item()` will fail. ## Actual results ``` Traceback (most recent call last): File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 59, in main app() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 214, in __call__ return get_command(self)(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1137, in __call__ return self.main(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1062, in main rv = self.invoke(ctx) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1668, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 1404, in invoke return ctx.invoke(self.callback, **ctx.params) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/click/core.py", line 763, in invoke return __callback(*args, **kwargs) File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/typer/main.py", line 500, in wrapper return callback(**use_params) # type: ignore File "/home/alvaro.bartolome/XXX/xxx/cli.py", line 43, in train metrics = trainer.evaluate() File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2051, in evaluate output = eval_loop( File "/home/alvaro.bartolome/miniconda3/envs/xxx/lib/python3.9/site-packages/transformers/trainer.py", line 2292, in evaluation_loop metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels)) File "/home/alvaro.bartolome/XXX/xxx/metrics.py", line 20, in compute_metrics res = METRIC.compute(predictions=predictions, references=eval_preds.label_ids) File "/home/alvaro.bartolome/miniconda3/envs/lang/lib/python3.9/site-packages/datasets/metric.py", line 402, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/alvaro.bartolome/.cache/huggingface/modules/datasets_modules/metrics/matthews_correlation/0275f1e9a4d318e3ea8cdd87547ee0d58d894966616052e3d18444ac8ddd2357/matthews_correlation.py", line 88, in _compute "matthews_correlation": matthews_corrcoef(references, predictions, sample_weight=sample_weight).item(), AttributeError: 'float' object has no attribute 'item' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-4.15.0-1113-azure-x86_64-with-glibc2.23 - Python version: 3.9.7 - PyArrow 
version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2964/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2964/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2963
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2963/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2963/comments
https://api.github.com/repos/huggingface/datasets/issues/2963/events
https://github.com/huggingface/datasets/issues/2963
1,006,588,605
I_kwDODunzps47_1K9
2,963
raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
{ "login": "keloemma", "id": 40454218, "node_id": "MDQ6VXNlcjQwNDU0MjE4", "avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4", "gravatar_id": "", "url": "https://api.github.com/users/keloemma", "html_url": "https://github.com/keloemma", "followers_url": "https://api.github.com/users/keloemma/followers", "following_url": "https://api.github.com/users/keloemma/following{/other_user}", "gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}", "starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/keloemma/subscriptions", "organizations_url": "https://api.github.com/users/keloemma/orgs", "repos_url": "https://api.github.com/users/keloemma/repos", "events_url": "https://api.github.com/users/keloemma/events{/privacy}", "received_events_url": "https://api.github.com/users/keloemma/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[]
1,632,497,711,000
1,632,497,904,000
1,632,497,904,000
NONE
null
## Describe the bug A clear and concise description of what the bug is. I am trying to use Dataset to load my file in order to use a Bert embeddings model, but when I finish loading with Dataset and want to pass it to the tokenizer using the map function, I get the following error: raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects. I was able to load my file using Dataset before, but since this morning I keep getting this error. ## Steps to reproduce the bug ```python # Xtrain, ytrain, filename, len_labels = read_file_2(fic) # Xtrain, lge_size = get_flaubert_layer(Xtrain, path_to_model_lge) data_preprocessed = make_new_traindata(Xtrain) my_dict = {"verbatim": data_preprocessed[1], "label": ytrain} # lemma with conjunction dataset = Dataset.from_dict(my_dict) ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2963/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2963/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2962
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2962/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2962/comments
https://api.github.com/repos/huggingface/datasets/issues/2962/events
https://github.com/huggingface/datasets/issues/2962
1,006,557,666
I_kwDODunzps47_tni
2,962
Enable splits during streaming the dataset
{ "login": "merveenoyan", "id": 53175384, "node_id": "MDQ6VXNlcjUzMTc1Mzg0", "avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4", "gravatar_id": "", "url": "https://api.github.com/users/merveenoyan", "html_url": "https://github.com/merveenoyan", "followers_url": "https://api.github.com/users/merveenoyan/followers", "following_url": "https://api.github.com/users/merveenoyan/following{/other_user}", "gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}", "starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions", "organizations_url": "https://api.github.com/users/merveenoyan/orgs", "repos_url": "https://api.github.com/users/merveenoyan/repos", "events_url": "https://api.github.com/users/merveenoyan/events{/privacy}", "received_events_url": "https://api.github.com/users/merveenoyan/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
1,632,495,689,000
1,632,495,689,000
null
CONTRIBUTOR
null
## Describe the Problem I'd like to stream only a specific percentage or part of the dataset, i.e. I want to be able to apply split slicing when streaming a dataset as well. ## Solution Enable split slicing when `streaming = True` as well, e.g. `dataset = load_dataset('dataset', split='train[:100]', streaming = True)` ## Alternatives Below is the current alternative way of doing it: `dataset = load_dataset("dataset", split='train', streaming = True).take(100)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2962/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2962/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2957
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2957/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2957/comments
https://api.github.com/repos/huggingface/datasets/issues/2957/events
https://github.com/huggingface/datasets/issues/2957
1,004,868,337
I_kwDODunzps475RLx
2,957
MultiWOZ Dataset NonMatchingChecksumError
{ "login": "bradyneal", "id": 8754873, "node_id": "MDQ6VXNlcjg3NTQ4NzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8754873?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bradyneal", "html_url": "https://github.com/bradyneal", "followers_url": "https://api.github.com/users/bradyneal/followers", "following_url": "https://api.github.com/users/bradyneal/following{/other_user}", "gists_url": "https://api.github.com/users/bradyneal/gists{/gist_id}", "starred_url": "https://api.github.com/users/bradyneal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bradyneal/subscriptions", "organizations_url": "https://api.github.com/users/bradyneal/orgs", "repos_url": "https://api.github.com/users/bradyneal/repos", "events_url": "https://api.github.com/users/bradyneal/events{/privacy}", "received_events_url": "https://api.github.com/users/bradyneal/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi Brady! I met the similar issue, it stuck in the downloading stage instead of download anything, maybe it is broken. After I change the downloading from URLs to one url of the [Multiwoz project](https://github.com/budzianowski/multiwoz/archive/44f0f8479f11721831c5591b839ad78827da197b.zip) and use dirs to get separate files, the problems gone." ]
1,632,354,300,000
1,633,069,412,000
null
NONE
null
## Describe the bug The checksums for the downloaded MultiWOZ dataset and source MultiWOZ dataset aren't matching. ## Steps to reproduce the bug Both of the below dataset versions yield the checksum error: ```python from datasets import load_dataset dataset = load_dataset('multi_woz_v22', 'v2.2') dataset = load_dataset('multi_woz_v22', 'v2.2_active_only') ``` ## Expected results For the above calls to `load_dataset` to work. ## Actual results NonMatchingChecksumError. Traceback: > Traceback (most recent call last): File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/IPython/core/interactiveshell.py", line 3441, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-15-4e91280e112e>", line 1, in <module> dataset = load_dataset('multi_woz_v22', 'v2.2') File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/load.py", line 847, in load_dataset builder_instance.download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 615, in download_and_prepare self._download_and_prepare( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare verify_checksums( File "/Users/brady/anaconda3/envs/elysium/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json'] ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2957/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2957/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2956
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2956/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2956/comments
https://api.github.com/repos/huggingface/datasets/issues/2956/events
https://github.com/huggingface/datasets/issues/2956
1,004,306,367
I_kwDODunzps473H-_
2,956
Cache problem in the `load_dataset` method for local compressed file(s)
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[]
1,632,317,672,000
1,632,317,672,000
null
NONE
null
## Describe the bug Cache problem in the `load_dataset` method: when modifying a compressed file in a local folder, `load_dataset` doesn't detect the change and loads the previous version. ## Steps to reproduce the bug To test it directly, I have prepared a [Google Colaboratory notebook](https://colab.research.google.com/drive/11Em_Amoc-aPGhSBIkSHU2AvEh24nVayy?usp=sharing) that shows this behavior. For this example, I have created a toy dataset at: https://huggingface.co/datasets/SaulLu/toy_struc_dataset This dataset is composed of two versions: - v1 on commit `a6beb46` which has a single example `{'id': 1, 'value': {'tag': 'a', 'value': 1}}` in file `train.jsonl.gz` - v2 on commit `e7935f4` (`main` head) which has a single example `{'attr': 1, 'id': 1, 'value': 'a'}` in file `train.jsonl.gz` With a terminal, we can start by getting the v1 version of the dataset ```bash git lfs install git clone https://huggingface.co/datasets/SaulLu/toy_struc_dataset cd toy_struc_dataset git checkout a6beb46 ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 ``` With a terminal, we can now get the v2 version of the dataset ```bash git checkout main ``` Then we can load it with python and look at the content: ```python from datasets import load_dataset path = "/content/toy_struc_dataset" dataset = load_dataset(path, data_files={"train": "*.jsonl.gz"}) print(dataset["train"][0]) ``` Output ``` {'id': 1, 'value': {'tag': 'a', 'value': 1}} # This is the example in v1 (not v2) ``` ## Expected results The last output should have been ``` {"id":1, "value": "a", "attr": 1} # This is the example in v2 ``` ## Ideas As discussed offline with Quentin, if the cache hash were sensitive to changes in a compressed file we would probably not have the problem anymore. This situation leads me to suggest 2 other features: - to also have a `load_from_cache_file` argument in the "load_dataset" method - to reorganize the cache so that we can delete the caches related to a dataset (cf issue #ToBeFilledSoon) And thanks again for this great library :hugs: ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2956/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2956/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2953
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2953/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2953/comments
https://api.github.com/repos/huggingface/datasets/issues/2953/events
https://github.com/huggingface/datasets/issues/2953
1,002,766,517
I_kwDODunzps47xQC1
2,953
Trying to get in touch regarding a security issue
{ "login": "JamieSlome", "id": 55323451, "node_id": "MDQ6VXNlcjU1MzIzNDUx", "avatar_url": "https://avatars.githubusercontent.com/u/55323451?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JamieSlome", "html_url": "https://github.com/JamieSlome", "followers_url": "https://api.github.com/users/JamieSlome/followers", "following_url": "https://api.github.com/users/JamieSlome/following{/other_user}", "gists_url": "https://api.github.com/users/JamieSlome/gists{/gist_id}", "starred_url": "https://api.github.com/users/JamieSlome/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JamieSlome/subscriptions", "organizations_url": "https://api.github.com/users/JamieSlome/orgs", "repos_url": "https://api.github.com/users/JamieSlome/repos", "events_url": "https://api.github.com/users/JamieSlome/events{/privacy}", "received_events_url": "https://api.github.com/users/JamieSlome/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @JamieSlome,\r\n\r\nThanks for reaching out. Yes, you are right: I'm opening a PR to add the `SECURITY.md` file and a contact method.\r\n\r\nIn the meantime, please feel free to report the security issue to: feedback@huggingface.co" ]
1,632,239,893,000
1,634,829,403,000
1,634,829,403,000
NONE
null
Hey there! I'd like to report a security issue but cannot find contact instructions on your repository. If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future. Thank you for your consideration, and I look forward to hearing from you! (cc @huntr-helper)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2953/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2953/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2945
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2945/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2945/comments
https://api.github.com/repos/huggingface/datasets/issues/2945/events
https://github.com/huggingface/datasets/issues/2945
1,000,624,883
I_kwDODunzps47pFLz
2,945
Protect master branch
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Cool, I think we can do both :)", "@lhoestq now the 2 are implemented.\r\n\r\nPlease note that for the the second protection, finally I have chosen to protect the master branch only from **merge commits** (see update comment above), so no need to disable/re-enable the protection on each release (direct commits, different from merge commits, can be pushed to the remote master branch; and eventually reverted without messing up the repo history)." ]
1,632,120,421,000
1,632,139,287,000
1,632,139,216,000
MEMBER
null
After an accidental merge commit (91c55355b634d0dc73350a7ddee1a6776dbbdd69) into `datasets` master branch, all commits present in the feature branch were permanently added to `datasets` master branch history, as e.g.: - 00cc036fea7c7745cfe722360036ed306796a3f2 - 13ae8c98602bbad8197de3b9b425f4c78f582af1 - ... I propose to protect our master branch, so that we avoid accidentally making this kind of mistake in the future: - [x] For Pull Requests using GitHub, allow only squash merging, so that only a single commit per Pull Request is merged into the master branch - Currently, simple merge commits are already disabled - I propose to disable rebase merging as well - ~~Protect the master branch from direct pushes (to avoid accidental pushing of merge commits)~~ - ~~This protection would reject direct pushes to the master branch~~ - ~~If so, for each release (when we need to commit directly to the master branch), we should first disable the protection and re-enable it again after the release~~ - [x] Protect the master branch only from direct pushing of **merge commits** - GitHub offers the possibility to protect the master branch only from merge commits (which are the ones that introduce all the commits from the feature branch into the master branch). - No need to disable/re-enable this protection on each release The purpose of this Issue is to open a discussion about this problem and to agree on a solution.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2945/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2945/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2944
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2944/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2944/comments
https://api.github.com/repos/huggingface/datasets/issues/2944/events
https://github.com/huggingface/datasets/issues/2944
1,000,544,370
I_kwDODunzps47oxhy
2,944
Add `remove_columns` to `IterableDataset `
{ "login": "cccntu", "id": 31893406, "node_id": "MDQ6VXNlcjMxODkzNDA2", "avatar_url": "https://avatars.githubusercontent.com/u/31893406?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cccntu", "html_url": "https://github.com/cccntu", "followers_url": "https://api.github.com/users/cccntu/followers", "following_url": "https://api.github.com/users/cccntu/following{/other_user}", "gists_url": "https://api.github.com/users/cccntu/gists{/gist_id}", "starred_url": "https://api.github.com/users/cccntu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cccntu/subscriptions", "organizations_url": "https://api.github.com/users/cccntu/orgs", "repos_url": "https://api.github.com/users/cccntu/repos", "events_url": "https://api.github.com/users/cccntu/events{/privacy}", "received_events_url": "https://api.github.com/users/cccntu/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi ! Good idea :)\r\nIf you are interested in contributing, feel free to give it a try and open a Pull Request. Also let me know if I can help you with this or if you have questions" ]
1,632,110,460,000
1,633,707,113,000
1,633,707,113,000
CONTRIBUTOR
null
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. ```python from datasets import load_dataset dataset = load_dataset("c4", 'realnewslike', streaming =True, split='train') dataset = dataset.remove_columns('url') ``` ``` AttributeError: 'IterableDataset' object has no attribute 'remove_columns' ``` **Describe the solution you'd like** It would be nice to have `.remove_columns()` to match the `Datasets` api. **Describe alternatives you've considered** This can be done with a single call to `.map()`, I can try to help add this. 🤗
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2944/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2944/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2943
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2943/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2943/comments
https://api.github.com/repos/huggingface/datasets/issues/2943/events
https://github.com/huggingface/datasets/issues/2943
1,000,355,115
I_kwDODunzps47oDUr
2,943
Backwards compatibility broken for cached datasets that use `.filter()`
{ "login": "anton-l", "id": 26864830, "node_id": "MDQ6VXNlcjI2ODY0ODMw", "avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anton-l", "html_url": "https://github.com/anton-l", "followers_url": "https://api.github.com/users/anton-l/followers", "following_url": "https://api.github.com/users/anton-l/following{/other_user}", "gists_url": "https://api.github.com/users/anton-l/gists{/gist_id}", "starred_url": "https://api.github.com/users/anton-l/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anton-l/subscriptions", "organizations_url": "https://api.github.com/users/anton-l/orgs", "repos_url": "https://api.github.com/users/anton-l/repos", "events_url": "https://api.github.com/users/anton-l/events{/privacy}", "received_events_url": "https://api.github.com/users/anton-l/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi ! I guess the caching mechanism should have considered the new `filter` to be different from the old one, and don't use cached results from the old `filter`.\r\nTo avoid other users from having this issue we could make the caching differentiate the two, what do you think ?", "If it's easy enough to implement, then yes please 😄 But this issue can be low-priority, since I've only encountered it in a couple of `transformers` CI tests.", "Well it can cause issue with anyone that updates `datasets` and re-run some code that uses filter, so I'm creating a PR", "I just merged a fix, let me know if you're still having this kind of issues :)\r\n\r\nWe'll do a release soon to make this fix available", "Definitely works on several manual cases with our dummy datasets, thank you @lhoestq !", "Fixed by #2947." ]
1,632,068,197,000
1,632,155,143,000
1,632,155,142,000
CONTRIBUTOR
null
## Describe the bug After upgrading to datasets `1.12.0`, some cached `.filter()` steps from `1.11.0` started failing with `ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)}` Related feature: https://github.com/huggingface/datasets/pull/2836 :question: This is probably a `wontfix` bug, since it can be solved by simply cleaning the related cache dirs, but the workaround could be useful for someone googling the error :) ## Workaround Remove the cache for the given dataset, e.g. `rm -rf ~/.cache/huggingface/datasets/librispeech_asr`. ## Steps to reproduce the bug 1. Delete `~/.cache/huggingface/datasets/librispeech_asr` if it exists. 2. `pip install datasets==1.11.0` and run the following snippet: ```python from datasets import load_dataset ids = ["1272-141231-0000"] ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.filter(lambda x: x["id"] in ids) ``` 3. `pip install datasets==1.12.1` and re-run the code again ## Expected results Same result as with the previous `datasets` version. ## Actual results ```bash Reusing dataset librispeech_asr (./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1) Loading cached processed dataset at ./.cache/huggingface/datasets/librispeech_asr/clean/2.1.0/468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1/cache-cd1c29844fdbc87a.arrow Traceback (most recent call last): File "./repos/transformers/src/transformers/models/wav2vec2/try_dataset.py", line 5, in <module> ds = ds.filter(lambda x: x["id"] in ids) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2169, in filter indices = self.map( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1686, in map return self._map_single( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/fingerprint.py", line 398, in wrapper out = func(self, *args, **kwargs) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1896, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 343, in from_file return cls( File "./envs/transformers/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 282, in __init__ self.info.features = self.info.features.reorder_fields_as(inferred_features) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1151, in reorder_fields_as return Features(recursive_reorder(self, other)) File "./envs/transformers/lib/python3.8/site-packages/datasets/features.py", line 1140, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between 
{'indices': Value(dtype='uint64', id=None)} and {'file': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None), 'speaker_id': Value(dtype='int64', id=None), 'chapter_id': Value(dtype='int64', id=None), 'id': Value(dtype='string', id=None)} Process finished with exit code 1 ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-34-generic-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2943/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2943/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2941
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2941/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2941/comments
https://api.github.com/repos/huggingface/datasets/issues/2941/events
https://github.com/huggingface/datasets/issues/2941
1,000,000,711
I_kwDODunzps47mszH
2,941
OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
{ "login": "ayaka14732", "id": 68557794, "node_id": "MDQ6VXNlcjY4NTU3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/68557794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayaka14732", "html_url": "https://github.com/ayaka14732", "followers_url": "https://api.github.com/users/ayaka14732/followers", "following_url": "https://api.github.com/users/ayaka14732/following{/other_user}", "gists_url": "https://api.github.com/users/ayaka14732/gists{/gist_id}", "starred_url": "https://api.github.com/users/ayaka14732/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ayaka14732/subscriptions", "organizations_url": "https://api.github.com/users/ayaka14732/orgs", "repos_url": "https://api.github.com/users/ayaka14732/repos", "events_url": "https://api.github.com/users/ayaka14732/events{/privacy}", "received_events_url": "https://api.github.com/users/ayaka14732/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "I tried `unshuffled_original_da` and it is also not working" ]
1,631,961,553,000
1,631,982,333,000
null
NONE
null
## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}] ``` ## Expected results Loading is successful. ## Actual results Loading throws above error. ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2941/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2941/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2937
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2937/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2937/comments
https://api.github.com/repos/huggingface/datasets/issues/2937/events
https://github.com/huggingface/datasets/issues/2937
999,548,277
I_kwDODunzps47k-V1
2,937
load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
{ "login": "daqieq", "id": 40532020, "node_id": "MDQ6VXNlcjQwNTMyMDIw", "avatar_url": "https://avatars.githubusercontent.com/u/40532020?v=4", "gravatar_id": "", "url": "https://api.github.com/users/daqieq", "html_url": "https://github.com/daqieq", "followers_url": "https://api.github.com/users/daqieq/followers", "following_url": "https://api.github.com/users/daqieq/following{/other_user}", "gists_url": "https://api.github.com/users/daqieq/gists{/gist_id}", "starred_url": "https://api.github.com/users/daqieq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/daqieq/subscriptions", "organizations_url": "https://api.github.com/users/daqieq/orgs", "repos_url": "https://api.github.com/users/daqieq/repos", "events_url": "https://api.github.com/users/daqieq/events{/privacy}", "received_events_url": "https://api.github.com/users/daqieq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB/s]\r\nDownloading: 2.71kB [00:00, ?B/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\\r\n1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9...\r\nDownloading: 334MB [01:17, 4.32MB/s]\r\nDataset wiki_bio downloaded and prepared to C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9. Subsequent calls will reuse thi\r\ns data.\r\n```\r\n\r\nThis kind of error messages usually happen because:\r\n- Your running Python script hasn't write access to that directory\r\n- You have another program (the File Explorer?) already browsing inside that directory", "Thanks @albertvillanova for looking at it! I tried on my personal Windows machine and it downloaded just fine.\r\n\r\nRunning on my work machine and on a colleague's machine it is consistently hitting this error. It's not a write access issue because the `.incomplete` directory is written just fine. It just won't rename and then it deletes the directory in the `finally` step. Also the zip file is written and extracted fine in the downloads directory.\r\n\r\nThat leaves another program that might be interfering, and there are plenty of those in my work machine ... (full antivirus, data loss prevention, etc.). So the question remains, why not extend the `try` block to allow catching the error and circle back to the rename after the unknown program is finished doing its 'stuff'. This is the approach that I read about in the linked repo (see my comments above).\r\n\r\nIf it's not high priority, that's fine. However, if someone were to write an PR that solved this issue in our environment in an `except` clause, would it be reviewed for inclusion in a future release? Just wondering whether I should spend any more time on this issue." ]
1,631,897,530,000
1,632,189,875,000
null
NONE
null
## Describe the bug The standard process to download and load the wiki_bio dataset causes a PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any errors. ## Actual results PermissionError, see trace below: ``` Using custom data configuration default Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.94 MiB, post-processed: Unknown size, total: 1.03 GiB) to C:\Users\username\.cache\huggingface\datasets\wiki_bio\default\1.1.0\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 644, in download_and_prepare self._save_info() File "C:\Users\username\.conda\envs\hf\lib\contextlib.py", line 120, in __exit__ next(self.gen) File "C:\Users\username\.conda\envs\hf\lib\site-packages\datasets\builder.py", line 598, in incomplete_dir os.rename(tmp_dir, dirname) PermissionError: [WinError 5] Access is denied: 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9.incomplete' -> 'C:\\Users\\username\\.cache\\huggingface\\datasets\\wiki_bio\\default\\1.1.0\\5293ce565954ba965dada626f1e79684e98172d950371d266bf3caaf87e911c9' ``` By commenting out the os.rename() [L604](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L604) and the shutil.rmtree() [L607](https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L607) lines, in my virtual environment, I was able to get the load process to complete, rename the directory manually and then rerun `load_dataset('wiki_bio')` to get what I needed. It seems that os.rename() in the `incomplete_dir` context manager is the culprit. Here's another project [Conan](https://github.com/conan-io/conan/issues/6560) with a similar os.rename() issue, if it helps debug this one. ## Environment info - `datasets` version: 1.12.1 - Platform: Windows-10-10.0.22449-SP0 - Python version: 3.8.12 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2937/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2937/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2934
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2934/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2934/comments
https://api.github.com/repos/huggingface/datasets/issues/2934/events
https://github.com/huggingface/datasets/issues/2934
999,477,413
I_kwDODunzps47ktCl
2,934
to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) solution involves wrapping the linked dataset with `weakref.proxy` and adding a custom `__del__` to `tf.python.data.ops.dataset_ops.TensorSliceDataset` (this is the type of a dataset that is returned by `tf.data.Dataset.from_tensor_slices`; this works for TF 2.x, but I'm not sure `tf.python.data.ops.dataset_ops` is a valid path for TF 1.x) that deletes the linked dataset, which is assigned to the dataset object as a property. Will open a draft PR soon!", "Thanks a lot for investigating !" ]
1,631,892,413,000
1,634,115,803,000
1,634,115,803,000
MEMBER
null
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one reference left" ``` This causes issues because the table holds a reference to an open arrow file that should be closed. So on windows it's not possible to delete or move the arrow file afterwards. Moreover the CI test of the `to_tf_dataset` method isn't able to clean up the temporary arrow files because of this. cc @Rocketknight1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2934/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2934/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2932
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2932/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2932/comments
https://api.github.com/repos/huggingface/datasets/issues/2932/events
https://github.com/huggingface/datasets/issues/2932
999,317,750
I_kwDODunzps47kGD2
2,932
Conda build fails
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Why 1.9 ?\r\n\r\nhttps://anaconda.org/HuggingFace/datasets currently says 1.11", "Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 " ]
1,631,882,962,000
1,632,238,270,000
1,632,238,270,000
MEMBER
null
## Describe the bug Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2932/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2932/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2930
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2930/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2930/comments
https://api.github.com/repos/huggingface/datasets/issues/2930/events
https://github.com/huggingface/datasets/issues/2930
998,154,311
I_kwDODunzps47fqBH
2,930
Mutable columns argument breaks set_format
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }
[ { "login": "Rocketknight1", "id": 12866554, "node_id": "MDQ6VXNlcjEyODY2NTU0", "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Rocketknight1", "html_url": "https://github.com/Rocketknight1", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Pushed a fix to my branch #2731 " ]
1,631,795,242,000
1,631,800,253,000
1,631,800,253,000
CONTRIBUTOR
null
## Describe the bug If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("glue", "cola") column_list = ["idx", "label"] dataset.set_format("python", columns=column_list) column_list[1] = "foo" # Change the list after we call `set_format` dataset['train'][:4].keys() ``` ## Expected results ```python dict_keys(['idx', 'label']) ``` ## Actual results ```python dict_keys(['idx']) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2930/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2930/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2927
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2927/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2927/comments
https://api.github.com/repos/huggingface/datasets/issues/2927/events
https://github.com/huggingface/datasets/issues/2927
997,654,680
I_kwDODunzps47dwCY
2,927
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.github.com/users/timothyjlaurent/followers", "following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}", "gists_url": "https://api.github.com/users/timothyjlaurent/gists{/gist_id}", "starred_url": "https://api.github.com/users/timothyjlaurent/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/timothyjlaurent/subscriptions", "organizations_url": "https://api.github.com/users/timothyjlaurent/orgs", "repos_url": "https://api.github.com/users/timothyjlaurent/repos", "events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}", "received_events_url": "https://api.github.com/users/timothyjlaurent/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Thanks for reporting, I'm looking into it :)", "Fixed by #2950." ]
1,631,754,842,000
1,632,155,002,000
1,632,155,001,000
NONE
null
## Describe the bug Upgrading to 1.12 caused `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```pythondef filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[str], tokenizer: PreTrainedTokenizerFast, ) -> bool: """Get the good rows""" encoding = get_encoding_for_text(text=ex["text"], tokenizer=tokenizer) ex["encoding"] = encoding for relation in ex["relations"]: if not is_valid_relation(relation, valid_rel_labels): return False for span in ex["spans"]: if not is_valid_span(span, valid_ner_labels, encoding): return False return True def get_dataset(): loader_path = str(Path(__file__).parent / "prodigy_dataset_builder.py") ds = load_dataset( loader_path, name="prodigy-dataset", data_files=sorted(file_paths), cache_dir=cache_dir, )["train"] valid_ner_labels = set(vocab.ner_category) valid_relations = set(vocab.relation_types.keys()) ds = ds.filter( filter_good_rows, fn_kwargs=dict( valid_rel_labels=valid_relations, valid_ner_labels=valid_ner_labels, tokenizer=vocab.tokenizer, ), keep_in_memory=True, num_proc=num_proc, ) ``` `ds` is a `DatasetDict` produced by a jsonl dataset. This runs fine on 1.11 but fails on 1.12 **Stack Trace** ## Expected results I expect 1.12 datasets filter to filter the dataset without raising as it does on 1.11 ## Actual results ``` tf_ner_rel_lib/dataset.py:695: in load_prodigy_arrow_datasets_from_jsonl ds = ds.filter( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2169: in filter indices = self.map( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1686: in map return self._map_single( ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:185: in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/fingerprint.py:398: in wrapper out = func(self, *args, **kwargs) ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:2048: in _map_single batch = apply_function_on_filtered_inputs( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ inputs = {'_input_hash': [2108817714, 1477695082, -1021597032, 2130671338, -1260483858, -1203431639, ...], '_task_hash': [18070...ons', 'relations', 'relations', ...], 'answer': ['accept', 'accept', 'accept', 'accept', 'accept', 'accept', ...], ...} indices = [0, 1, 2, 3, 4, 5, ...], check_same_num_examples = False, offset = 0 def apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples=False, offset=0): """Utility to apply the function on a selection of columns.""" nonlocal update_data fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] if offset == 0: effective_indices = indices else: effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset processed_inputs = ( > function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) ) E TypeError: 
get_indices_from_mask_function() got an unexpected keyword argument 'valid_rel_labels' ../../../../.pyenv/versions/tf_ner_rel_lib/lib/python3.8/site-packages/datasets/arrow_dataset.py:1939: TypeError ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Mac - Python version: 3.8.9 - PyArrow version: pyarrow==5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2927/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2927/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2926
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2926/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2926/comments
https://api.github.com/repos/huggingface/datasets/issues/2926/events
https://github.com/huggingface/datasets/issues/2926
997,463,277
I_kwDODunzps47dBTt
2,926
Error when downloading datasets to non-traditional cache directories
{ "login": "dar-tau", "id": 45885627, "node_id": "MDQ6VXNlcjQ1ODg1NjI3", "avatar_url": "https://avatars.githubusercontent.com/u/45885627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dar-tau", "html_url": "https://github.com/dar-tau", "followers_url": "https://api.github.com/users/dar-tau/followers", "following_url": "https://api.github.com/users/dar-tau/following{/other_user}", "gists_url": "https://api.github.com/users/dar-tau/gists{/gist_id}", "starred_url": "https://api.github.com/users/dar-tau/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dar-tau/subscriptions", "organizations_url": "https://api.github.com/users/dar-tau/orgs", "repos_url": "https://api.github.com/users/dar-tau/repos", "events_url": "https://api.github.com/users/dar-tau/events{/privacy}", "received_events_url": "https://api.github.com/users/dar-tau/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Same here !" ]
1,631,735,986,000
1,637,790,151,000
null
NONE
null
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results The IMDB dataset loads successfully ## Actual results ``` datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.1.2 - Platform: Ubuntu - Python version: 3.8 ## Extra notes Stranger yet, while trying to debug the phenomenon, I found the results to vary a lot without clear direction: - With `cache_dir="/path/to/netapp/.cache"` the same thing happens. - However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"` - it does work - On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it doesn't work anymore. While I could only test it with a NetApp device, it might affect any other mounted FS as well. Thanks :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2926/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2926/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2924/comments
https://api.github.com/repos/huggingface/datasets/issues/2924/events
https://github.com/huggingface/datasets/issues/2924
997,378,113
I_kwDODunzps47cshB
2,924
"File name too long" error for file locks
{ "login": "gar1t", "id": 184949, "node_id": "MDQ6VXNlcjE4NDk0OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/184949?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gar1t", "html_url": "https://github.com/gar1t", "followers_url": "https://api.github.com/users/gar1t/followers", "following_url": "https://api.github.com/users/gar1t/following{/other_user}", "gists_url": "https://api.github.com/users/gar1t/gists{/gist_id}", "starred_url": "https://api.github.com/users/gar1t/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gar1t/subscriptions", "organizations_url": "https://api.github.com/users/gar1t/orgs", "repos_url": "https://api.github.com/users/gar1t/repos", "events_url": "https://api.github.com/users/gar1t/events{/privacy}", "received_events_url": "https://api.github.com/users/gar1t/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note that the lock files we use always have smaller filenames than 255)\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135", "Yes, you're right! I need to get you more info here. Either there's something going with the name itself that the file system doesn't like (an encoding that blows up the name length??) or perhaps there's something with the path that's causing the entire string to be used as a name. I haven't seen this on any system before and the Internet's not forthcoming with any info.", "Snap, encountered when trying to run [this example from PyTorch Lightning Flash](https://lightning-flash.readthedocs.io/en/latest/reference/speech_recognition.html):\r\n\r\n```py\r\nimport torch\r\n\r\nimport flash\r\nfrom flash.audio import SpeechRecognition, SpeechRecognitionData\r\nfrom flash.core.data.utils import download_data\r\n\r\n# 1. Create the DataModule\r\ndownload_data(\"https://pl-flash-data.s3.amazonaws.com/timit_data.zip\", \"./data\")\r\n\r\ndatamodule = SpeechRecognitionData.from_json(\r\n input_fields=\"file\",\r\n target_fields=\"text\",\r\n train_file=\"data/timit/train.json\",\r\n test_file=\"data/timit/test.json\",\r\n)\r\n```\r\n\r\nGave this traceback:\r\n\r\n```py\r\nTraceback (most recent call last):\r\n File \"lf_ft.py\", line 10, in <module>\r\n datamodule = SpeechRecognitionData.from_json(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 1005, in from_json\r\n return cls.from_data_source(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_module.py\", line 571, in from_data_source\r\n train_dataset, val_dataset, test_dataset, predict_dataset = data_source.to_datasets(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 307, in to_datasets\r\n train_dataset = self.generate_dataset(train_data, RunningStage.TRAINING)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/core/data/data_source.py\", line 344, in generate_dataset\r\n data = load_data(data, mock_dataset)\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/flash/audio/speech_recognition/data.py\", line 103, in load_data\r\n dataset_dict = load_dataset(self.filetype, data_files={stage: str(file)})\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1599, in load_dataset\r\n builder_instance = load_dataset_builder(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/load.py\", line 1457, in load_dataset_builder\r\n builder_instance: DatasetBuilder = builder_cls(\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/builder.py\", line 285, in __init__\r\n with FileLock(lock_path):\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 323, in __enter__\r\n self.acquire()\r\n File \"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 272, in acquire\r\n self._acquire()\r\n File 
\"/home/louis/miniconda3/envs/w2vlf/lib/python3.8/site-packages/datasets/utils/filelock.py\", line 403, in _acquire\r\n fd = os.open(self._lock_file, open_mode)\r\nOSError: [Errno 36] File name too long: '/home/louis/.cache/huggingface/datasets/_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock'\r\n```\r\n\r\nMy home directory is encrypted, therefore the maximum length is 143 ([source 1](https://github.com/ray-project/ray/issues/1463#issuecomment-425674521), [source 2](https://stackoverflow.com/a/6571568/2668831))\r\n\r\nFrom what I've read I think the error is in reference to the file name (just the final part of the path) which is 145 chars long:\r\n\r\n```py\r\n>>> len(\"_home_louis_.cache_huggingface_datasets_json_default-98e6813a547f72fa_0.0.0_c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426.lock\")\r\n145\r\n```\r\n\r\nI also have a file in this directory (i.e. whose length is not a problem):\r\n\r\n```py\r\n>>> len(\"_home_louis_.cache_huggingface_datasets_librispeech_asr_clean_2.1.0_468ec03677f46a8714ac6b5b64dba02d246a228d92cbbad7f3dc190fa039eab1.lock\")\r\n137\r\n```", "Perhaps this could be exposed as a config setting so you could change it manually?\r\n\r\nhttps://github.com/huggingface/datasets/blob/5d1a9f1e3c6c495dc0610b459e39d2eb8893f152/src/datasets/utils/filelock.py#L135-L135\r\n\r\nRather than hard-code 255, default it to 255, and allow it to be changed, the same way is done for `datasets.config.IN_MEMORY_MAX_SIZE`:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L171-L173\r\n\r\nIn fact there already appears to be an existing variable to do so:\r\n\r\nhttps://github.com/huggingface/datasets/blob/12b7e13bc568b9f92705f64b249e148f3bc9a9ea/src/datasets/config.py#L187\r\n\r\nIt's used here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/efe89edd36e4ffa562fc3eebaf07a5fec26e6dac/src/datasets/builder.py#L163-L165\r\n\r\nPerhaps it could be set based on a test (trying to create a 255 char length named lock file and seeing if it fails)", "Just fixed it, sending a PR :smile:", "Hi @lmmx @gar1t ,\r\n\r\nit would be helpful if you could run the following code and copy-paste the output here:\r\n```python\r\nimport datasets\r\nimport os\r\nos.statvfs(datasets.config.HF_DATASETS_CACHE)\r\n```", "`os.statvfs_result(f_bsize=4096, f_frsize=4096, f_blocks=240046344, f_bfree=96427610, f_bavail=84216487, f_files=61038592, f_ffree=58216027, f_favail=58216027, f_flag=4102, f_namemax=143)`", "Hi @lmmx,\r\n\r\nThanks for providing the result of the command. I've opened a PR, and it would be great if you could verify that the fix works on your system. To install the version of the datasets with the fix, please run the following command:\r\n```\r\npip install git+https://github.com/huggingface/datasets.git@fix-2924\r\n```\r\n\r\nBtw, I saw your PR, and I appreciate your effort. However, my approach is a bit simpler for the end-user, so that's why I decided to fix the issue myself.", "No problem Mario I didn't know that was where that value was recorded so I learnt something :smiley: I just wanted to get a local version working, of course you should implement whatever fix is best for HF. Yes can confirm this fixes it too. Thanks!" ]
1,631,729,810,000
1,635,500,544,000
1,635,500,544,000
NONE
null
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Steps to reproduce the bug Where the user cache dir (e.g. `~/.cache`) is on a file system that limits filenames to 255 chars (e.g. ext4): ```python from datasets import load_dataset load_dataset("gar1t/test") ``` ## Expected results Expect the function to return without an error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "<python_venv>/lib/python3.9/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 644, in download_and_prepare self._save_info() File "<python_venv>/lib/python3.9/site-packages/datasets/builder.py", line 765, in _save_info with FileLock(lock_path): File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 323, in __enter__ self.acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 272, in acquire self._acquire() File "<python_venv>/lib/python3.9/site-packages/datasets/utils/filelock.py", line 403, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock' ``` ## Environment info - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-27-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 5.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2924/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2924/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2923/comments
https://api.github.com/repos/huggingface/datasets/issues/2923/events
https://github.com/huggingface/datasets/issues/2923
997,351,590
I_kwDODunzps47cmCm
2,923
Loading an autonlp dataset raises in normal mode but not in streaming mode
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3470211881, "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer", "name": "dataset-viewer", "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co" } ]
open
false
null
[]
null
[]
1,631,727,878,000
1,634,895,369,000
null
CONTRIBUTOR
null
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an error load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=True) ## does not raise an error ``` ## Expected results Both calls should raise the same error ## Actual results Call with streaming=False: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 5825.42it/s] Using custom data configuration autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b Downloading and preparing dataset json/autonlp-data-sentiment_detection-3c8bcd36 to /home/slesage/.cache/huggingface/datasets/json/autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b/0.0.0/d75ead8d5cfcbe67495df0f89bd262f0023257fbbbd94a730313295f3d756d50... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 15923.71it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 3346.88it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1187, in _prepare_split writer.write_table(table) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/arrow_writer.py", line 418, in <listcomp> pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1249, in pyarrow.lib.Table.__getitem__ File "pyarrow/table.pxi", line 1825, in pyarrow.lib.Table.column File "pyarrow/table.pxi", line 1800, in pyarrow.lib.Table._ensure_integer_index KeyError: 'Field "splits" does not exist in table schema' ``` Call with `streaming=False`: ``` 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6000.43it/s] Using custom data configuration 
autonlp-data-sentiment_detection-3c8bcd36-fe30267462d1d42b 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 46916.15it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 148734.18it/s] ``` ## Environment info - `datasets` version: 1.12.1.dev0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2923/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2923/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2921
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2921/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2921/comments
https://api.github.com/repos/huggingface/datasets/issues/2921/events
https://github.com/huggingface/datasets/issues/2921
997,325,424
I_kwDODunzps47cfpw
2,921
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[]
1,631,725,931,000
1,631,726,505,000
1,631,726,505,000
MEMBER
null
This error has been introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <module> d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch") File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict pa_table = InMemoryTable.from_pydict(mapping=mapping) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict return cls(pa.Table.from_pydict(*args, **kwargs)) File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray File "pyarrow/array.pxi", line 223, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__ out = pa.array(self.data, type=type) File "pyarrow/array.pxi", line 306, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2921/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2921/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2919
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2919/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2919/comments
https://api.github.com/repos/huggingface/datasets/issues/2919/events
https://github.com/huggingface/datasets/issues/2919
997,127,487
I_kwDODunzps47bvU_
2,919
Unwanted progress bars when accessing examples
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "doing a patch release now :)" ]
1,631,714,710,000
1,631,726,509,000
1,631,726,303,000
MEMBER
null
When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") In [3]: d[0] 100%|████████████████████████████████| 1/1 [00:00<00:00, 3172.70it/s] Out[3]: {'a': tensor(0)} ``` This is because the pytorch formatter calls `map_nested` that uses progress bars cc @sgugger
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2919/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2919/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2918
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2918/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2918/comments
https://api.github.com/repos/huggingface/datasets/issues/2918/events
https://github.com/huggingface/datasets/issues/2918
997,063,347
I_kwDODunzps47bfqz
2,918
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "repos_url": "https://api.github.com/users/SBrandeis/repos", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" }, { "id": 3287858981, "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming", "name": "streaming", "color": "fef2c0", "default": false, "description": "" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...", "Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```python\r\nIn [1]: import fsspec\r\n\r\nIn [2]: import json\r\n\r\nIn [3]: with fsspec.open('https://raw.githubusercontent.com/allenai/scitldr/master/SciTLDR-Data/SciTLDR-FullText/test.jsonl', encoding=\"utf-8\") as f:\r\n ...: for row in f:\r\n ...: data = json.loads(row)\r\n ...:\r\n---------------------------------------------------------------------------\r\nClientPayloadError Traceback (most recent call last)\r\n```", "Thanks for investigating @albertvillanova ! 🤗 " ]
1,631,711,167,000
1,638,346,500,000
1,638,346,500,000
CONTRIBUTOR
null
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_dataset iter_dset = iter( load_dataset("scitldr", name="FullText", split="test", streaming=True) ) next(iter_dset) ``` ## Expected results Returns the first sample of the dataset ## Actual results Calling `__next__` crashes with the following Traceback: ```python ----> 1 next(dset_iter) ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 339 340 def __iter__(self): --> 341 for key, example in self._iter(): 342 if self.features: 343 # we encode the example for ClassLabel feature types for example ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in _iter(self) 336 else: 337 ex_iterable = self._ex_iterable --> 338 yield from ex_iterable 339 340 def __iter__(self): ~\miniconda3\envs\datasets\lib\site-packages\datasets\iterable_dataset.py in __iter__(self) 76 77 def __iter__(self): ---> 78 for key, example in self.generate_examples_fn(**self.kwargs): 79 yield key, example 80 ~\.cache\huggingface\modules\datasets_modules\datasets\scitldr\72d6e2195786c57e1d343066fb2cc4f93ea39c5e381e53e6ae7c44bbfd1f05ef\scitldr.py in _generate_examples(self, filepath, split) 162 163 with open(filepath, encoding="utf-8") as f: --> 164 for id_, row in enumerate(f): 165 data = json.loads(row) 166 if self.config.name == "AIC": ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in read(self, length) 496 else: 497 length = min(self.size - self.loc, length) --> 498 return super().read(length) 499 500 async def async_fetch_all(self): ~\miniconda3\envs\datasets\lib\site-packages\fsspec\spec.py in read(self, length) 1481 # don't even bother calling fetch 1482 return b"" -> 1483 out = self.cache._fetch(self.loc, self.loc + length) 1484 self.loc += len(out) 1485 return out ~\miniconda3\envs\datasets\lib\site-packages\fsspec\caching.py in _fetch(self, start, end) 378 elif start < self.start: 379 if self.end - end > self.blocksize: --> 380 self.cache = self.fetcher(start, bend) 381 self.start = start 382 else: ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in wrapper(*args, **kwargs) 86 def wrapper(*args, **kwargs): 87 self = obj or args[0] ---> 88 return sync(self.loop, func, *args, **kwargs) 89 90 return wrapper ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in sync(loop, func, timeout, *args, **kwargs) 67 raise FSTimeoutError 68 if isinstance(result[0], BaseException): ---> 69 raise result[0] 70 return result[0] 71 ~\miniconda3\envs\datasets\lib\site-packages\fsspec\asyn.py in _runner(event, coro, result, timeout) 23 coro = asyncio.wait_for(coro, timeout=timeout) 24 try: ---> 25 result[0] = await coro 26 except Exception as ex: 27 result[0] = ex ~\miniconda3\envs\datasets\lib\site-packages\fsspec\implementations\http.py in async_fetch_range(self, start, end) 538 if r.status == 206: 539 # partial content, as expected --> 540 out = await r.read() 541 elif "Content-Length" in r.headers: 542 cl = int(r.headers["Content-Length"]) ~\miniconda3\envs\datasets\lib\site-packages\aiohttp\client_reqrep.py in read(self) 1030 if self._body is None: 1031 try: -> 1032 self._body = await self.content.read() 1033 for trace in self._traces: 1034 await trace.send_response_chunk_received( 
~\miniconda3\envs\datasets\lib\site-packages\aiohttp\streams.py in read(self, n) 342 async def read(self, n: int = -1) -> bytes: 343 if self._exception is not None: --> 344 raise self._exception 345 346 # migration problem; with DataQueue you have to catch ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.5 - PyArrow version: 2.0.0 - aiohttp version: 3.7.4.post0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2918/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2918/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2917
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2917/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2917/comments
https://api.github.com/repos/huggingface/datasets/issues/2917/events
https://github.com/huggingface/datasets/issues/2917
997,041,658
I_kwDODunzps47baX6
2,917
windows download abnormal
{ "login": "wei1826676931", "id": 52347799, "node_id": "MDQ6VXNlcjUyMzQ3Nzk5", "avatar_url": "https://avatars.githubusercontent.com/u/52347799?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wei1826676931", "html_url": "https://github.com/wei1826676931", "followers_url": "https://api.github.com/users/wei1826676931/followers", "following_url": "https://api.github.com/users/wei1826676931/following{/other_user}", "gists_url": "https://api.github.com/users/wei1826676931/gists{/gist_id}", "starred_url": "https://api.github.com/users/wei1826676931/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wei1826676931/subscriptions", "organizations_url": "https://api.github.com/users/wei1826676931/orgs", "repos_url": "https://api.github.com/users/wei1826676931/repos", "events_url": "https://api.github.com/users/wei1826676931/events{/privacy}", "received_events_url": "https://api.github.com/users/wei1826676931/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used", "It is indeed an agency problem, thank you very, very much", "Let me know if you have other questions :)\r\n\r\nClosing this issue now" ]
1,631,709,935,000
1,631,812,668,000
1,631,812,668,000
NONE
null
## Describe the bug The script clearly exists (accessible from the browser), but the script download fails on windows. Then I tried it again and it can be downloaded normally on linux. why?? ## Steps to reproduce the bug ```python3.7 + windows ![image](https://user-images.githubusercontent.com/52347799/133436174-4303f847-55d5-434f-a749-08da3bb9b654.png) # Sample code to reproduce the bug ``` ## Expected results It can be downloaded normally. ## Actual results it can't ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.11.0 - Platform:windows - Python version:3.7 - PyArrow version:
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2917/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2917/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2914
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2914/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2914/comments
https://api.github.com/repos/huggingface/datasets/issues/2914/events
https://github.com/huggingface/datasets/issues/2914
996,770,168
I_kwDODunzps47aYF4
2,914
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
{ "login": "pierre-godard", "id": 3969168, "node_id": "MDQ6VXNlcjM5NjkxNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3969168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pierre-godard", "html_url": "https://github.com/pierre-godard", "followers_url": "https://api.github.com/users/pierre-godard/followers", "following_url": "https://api.github.com/users/pierre-godard/following{/other_user}", "gists_url": "https://api.github.com/users/pierre-godard/gists{/gist_id}", "starred_url": "https://api.github.com/users/pierre-godard/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pierre-godard/subscriptions", "organizations_url": "https://api.github.com/users/pierre-godard/orgs", "repos_url": "https://api.github.com/users/pierre-godard/repos", "events_url": "https://api.github.com/users/pierre-godard/events{/privacy}", "received_events_url": "https://api.github.com/users/pierre-godard/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Closed by #2915." ]
1,631,692,446,000
1,631,724,557,000
1,631,724,556,000
CONTRIBUTOR
null
## Describe the bug In one of my project, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https://github.com/intake/filesystem_spec/blob/0589358d8a029ed6b60d031018f52be2eb721291/fsspec/__init__.py#L55)). So that `fsspec.spec`, that was previously referring to the `spec` submodule, is now referring to that `spec` variable. This make the import of datasets failing as it is using that `fsspec.spec`. ## Steps to reproduce the bug I could reproduce the bug with a dummy poetry project. Here is the pyproject.toml: ```toml [tool.poetry] name = "debug-datasets" version = "0.1.0" description = "" authors = ["Pierre Godard"] [tool.poetry.dependencies] python = "^3.8" datasets = "^1.11.0" [tool.poetry.dev-dependencies] [build-system] requires = ["poetry-core>=1.0.0"] build-backend = "poetry.core.masonry.api" [tool.poetry.plugins."fsspec.specs"] "file2" = "fsspec.implementations.local.LocalFileSystem" ``` The only other file being a `debug_datasets/__init__.py` empty file. The overall structure of the project is as follows: ``` . ├── pyproject.toml └── debug_datasets └── __init__.py ``` Then, within the project folder run: ``` poetry install poetry run python ``` And in the python interpreter, try to import `datasets`: ``` import datasets ``` ## Expected results The import should run successfully. ## Actual results Here is the trace of the error I get: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .filesystems import extract_path_from_uri, is_remote_filesystem File "/home/godarpi/.cache/pypoetry/virtualenvs/debug-datasets-JuFzTKL--py3.8/lib/python3.8/site-packages/datasets/filesystems/__init__.py", line 30, in <module> def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: AttributeError: 'EntryPoint' object has no attribute 'AbstractFileSystem' ``` ## Suggested fix `datasets/filesystems/__init__.py`, line 30, replace: ``` def is_remote_filesystem(fs: fsspec.spec.AbstractFileSystem) -> bool: ``` by: ``` def is_remote_filesystem(fs: fsspec.AbstractFileSystem) -> bool: ``` I will come up with a PR soon if this effectively solves the issue. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: WSL2 (Ubuntu 20.04.1 LTS) - Python version: 3.8.5 - PyArrow version: 5.0.0 - `fsspec` version: 2021.8.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2914/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2914/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2913
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2913/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2913/comments
https://api.github.com/repos/huggingface/datasets/issues/2913/events
https://github.com/huggingface/datasets/issues/2913
996,436,368
I_kwDODunzps47ZGmQ
2,913
timit_asr dataset only includes one text phrase
{ "login": "margotwagner", "id": 39107794, "node_id": "MDQ6VXNlcjM5MTA3Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/39107794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/margotwagner", "html_url": "https://github.com/margotwagner", "followers_url": "https://api.github.com/users/margotwagner/followers", "following_url": "https://api.github.com/users/margotwagner/following{/other_user}", "gists_url": "https://api.github.com/users/margotwagner/gists{/gist_id}", "starred_url": "https://api.github.com/users/margotwagner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/margotwagner/subscriptions", "organizations_url": "https://api.github.com/users/margotwagner/orgs", "repos_url": "https://api.github.com/users/margotwagner/repos", "events_url": "https://api.github.com/users/margotwagner/events{/privacy}", "received_events_url": "https://api.github.com/users/margotwagner/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "Hi @margotwagner, \r\nThis bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally)", "Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `datasets` version: 1.4.1" ]
1,631,653,567,000
1,631,693,119,000
1,631,693,118,000
NONE
null
## Describe the bug The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-english 1. Install the dataset and other packages ```python !pip install datasets>=1.5.0 !pip install transformers==4.4.0 !pip install soundfile !pip install jiwer ``` 2. Load the dataset ```python from datasets import load_dataset, load_metric timit = load_dataset("timit_asr") ``` 3. Remove columns that we don't want ```python timit = timit.remove_columns(["phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]) ``` 4. Write a short function to display some random samples of the dataset. ```python from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset[picks]) display(HTML(df.to_html())) show_random_elements(timit["train"].remove_columns(["file"])) ``` ## Expected results 10 random different transcription phrases. ## Actual results 10 of the same transcription phrase "Would such an act of refusal be useful?" ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.4.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: not listed
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2913/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2913/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2904
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2904/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2904/comments
https://api.github.com/repos/huggingface/datasets/issues/2904/events
https://github.com/huggingface/datasets/issues/2904
995,814,222
I_kwDODunzps47WutO
2,904
FORCE_REDOWNLOAD does not work
{ "login": "anoopkatti", "id": 5278299, "node_id": "MDQ6VXNlcjUyNzgyOTk=", "avatar_url": "https://avatars.githubusercontent.com/u/5278299?v=4", "gravatar_id": "", "url": "https://api.github.com/users/anoopkatti", "html_url": "https://github.com/anoopkatti", "followers_url": "https://api.github.com/users/anoopkatti/followers", "following_url": "https://api.github.com/users/anoopkatti/following{/other_user}", "gists_url": "https://api.github.com/users/anoopkatti/gists{/gist_id}", "starred_url": "https://api.github.com/users/anoopkatti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anoopkatti/subscriptions", "organizations_url": "https://api.github.com/users/anoopkatti/orgs", "repos_url": "https://api.github.com/users/anoopkatti/repos", "events_url": "https://api.github.com/users/anoopkatti/events{/privacy}", "received_events_url": "https://api.github.com/users/anoopkatti/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompressed data in the extraction cache directory.\r\n\r\nIf we fix the extraction cache mechanism to uncompress a local file if it changed then it should fix the issue.\r\nCurrently the extraction cache mechanism only takes into account the path of the compressed file, which is an issue.", "Facing the same issue, is there any way to work around it until it is fixed? ", "You can clear your extraction cache in the meantime (by default at `~/.cache/huggingface/datasets/downloads/extracted`)" ]
1,631,612,726,000
1,633,513,039,000
null
NONE
null
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says +------------------------------------+-----------+---------+ | | Downloads | Dataset | +====================================+===========+=========+ | `REUSE_DATASET_IF_EXISTS` (default)| Reuse | Reuse | +------------------------------------+-----------+---------+ | `REUSE_CACHE_IF_EXISTS` | Reuse | Fresh | +------------------------------------+-----------+---------+ | `FORCE_REDOWNLOAD` | Fresh | Fresh | +------------------------------------+-----------+---------+ However, the old dataset is loaded even when FORCE_REDOWNLOAD is chosen. ## Steps to reproduce the bug ```python import pandas as pd from datasets import load_dataset, GenerateMode pd.DataFrame(range(5), columns=['numbers']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) pd.DataFrame(range(10), columns=['numerals']).to_csv('/tmp/test.tsv.gz', index=False) ee = load_dataset('csv', data_files=['/tmp/test.tsv.gz'], delimiter='\t', split='train', download_mode=GenerateMode.FORCE_REDOWNLOAD) print(ee) ``` ## Expected results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numerals'], num_rows: 10 }) ## Actual results Dataset({ features: ['numbers'], num_rows: 5 }) Dataset({ features: ['numbers'], num_rows: 5 }) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-4.14.181-108.257.amzn1.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.10 - PyArrow version: 3.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2904/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2904/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2902
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2902/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2902/comments
https://api.github.com/repos/huggingface/datasets/issues/2902/events
https://github.com/huggingface/datasets/issues/2902
995,254,216
MDU6SXNzdWU5OTUyNTQyMTY=
2,902
Add WIT Dataset
{ "login": "nateraw", "id": 32437151, "node_id": "MDQ6VXNlcjMyNDM3MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nateraw", "html_url": "https://github.com/nateraw", "followers_url": "https://api.github.com/users/nateraw/followers", "following_url": "https://api.github.com/users/nateraw/following{/other_user}", "gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}", "starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nateraw/subscriptions", "organizations_url": "https://api.github.com/users/nateraw/orgs", "repos_url": "https://api.github.com/users/nateraw/repos", "events_url": "https://api.github.com/users/nateraw/events{/privacy}", "received_events_url": "https://api.github.com/users/nateraw/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "@hassiahk is working on it #2810 ", "WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/\r\nhttps://analytics.wikimedia.org/published/datasets/one-off/caption_competition/training/image_pixels/", "> @hassiahk is working on it #2810\r\n\r\nThank you @bhavitvyamalik! Added this issue so we could track progress 😄 . Just linked the PR as well for visibility. ", "Hey folks, we are now hosting the merged pixel values + embeddings + metadata ourselves. I gave it a try - [nateraw/wit](https://huggingface.co/datasets/nateraw/wit)\r\n\r\n**⚠️ - Make sure you add `streaming=True` unless you're prepared to download 400GB of data!**\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset('nateraw/wit', streaming=True)\r\nexample = next(iter(ds))\r\n```\r\n\r\n```python\r\n>>> example = next(iter(ds['train']))\r\n>>> example.keys()\r\ndict_keys(['b64_bytes', 'original_width', 'image_url', 'wit_features', 'original_height', 'metadata_url', 'mime_type', 'caption_attribution_description', 'embedding'])\r\n>>> example['wit_features'].keys()\r\ndict_keys(['hierarchical_section_title', 'language', 'attribution_passes_lang_id', 'context_section_description', 'is_main_image', 'page_title', 'caption_title_and_reference_description', 'caption_alt_text_description', 'caption_reference_description', 'page_url', 'context_page_description', 'section_title', 'page_changed_recently'])\r\n```" ]
1,631,561,929,000
1,632,764,815,000
null
CONTRIBUTOR
null
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (excerpt from their Github README.md) > - The largest multimodal dataset (publicly available at the time of this writing) by the number of image-text examples. > - A massively multilingual dataset (first of its kind) with coverage for over 100+ languages. > - A collection of diverse set of concepts and real world entities. > - Brings forth challenging real-world test sets. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2902/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2902/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2901
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2901/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2901/comments
https://api.github.com/repos/huggingface/datasets/issues/2901/events
https://github.com/huggingface/datasets/issues/2901
995,232,844
MDU6SXNzdWU5OTUyMzI4NDQ=
2,901
Incompatibility with pytest
{ "login": "severo", "id": 1676121, "node_id": "MDQ6VXNlcjE2NzYxMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "gravatar_id": "", "url": "https://api.github.com/users/severo", "html_url": "https://github.com/severo", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "organizations_url": "https://api.github.com/users/severo/orgs", "repos_url": "https://api.github.com/users/severo/repos", "events_url": "https://api.github.com/users/severo/events{/privacy}", "received_events_url": "https://api.github.com/users/severo/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }, { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false } ]
null
[ "Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it!" ]
1,631,560,337,000
1,631,608,847,000
1,631,608,847,000
CONTRIBUTOR
null
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pytest test.py ``` ## Expected results It should give something like: ``` collected 1 item test.py . [100%] ======= 1 passed in 3.15s ======= ``` ## Actual results ``` ============================================================================================================================= test session starts ============================================================================================================================== platform linux -- Python 3.8.11, pytest-6.2.5, py-1.10.0, pluggy-1.0.0 rootdir: /home/slesage/hf/datasets-preview-backend, configfile: pyproject.toml plugins: anyio-3.3.1 collected 1 item tests/queries/test_rows.py . [100%]Traceback (most recent call last): File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pytest/__main__.py", line 5, in <module> raise SystemExit(pytest.console_main()) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 185, in console_main code = main() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/config/__init__.py", line 162, in main ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 60, in _multicall return outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main return wrap_session(config, _main) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/main.py", line 304, in wrap_session config.hook.pytest_sessionfinish( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_hooks.py", line 265, in __call__ return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_manager.py", line 80, in _hookexec return self._inner_hookexec(hook_name, methods, kwargs, firstresult) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 55, in _multicall gen.send(outcome) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/terminal.py", line 803, in pytest_sessionfinish outcome.get_result() File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_result.py", line 60, in get_result raise ex[1].with_traceback(ex[2]) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/pluggy/_callers.py", line 39, in _multicall res = hook_impl.function(*args) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 428, in pytest_sessionfinish config.cache.set("cache/nodeids", sorted(self.cached_nodeids)) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/_pytest/cacheprovider.py", line 188, in set f = path.open("w") TypeError: xpathopen() takes 1 positional argument but 2 were given ``` ## Environment info - `datasets` version: 1.12.0 - Platform: Linux-5.11.0-1017-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2901/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2901/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2899
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2899/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2899/comments
https://api.github.com/repos/huggingface/datasets/issues/2899/events
https://github.com/huggingface/datasets/issues/2899
994,082,432
MDU6SXNzdWU5OTQwODI0MzI=
2,899
Dataset
{ "login": "rcacho172", "id": 90449239, "node_id": "MDQ6VXNlcjkwNDQ5MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcacho172", "html_url": "https://github.com/rcacho172", "followers_url": "https://api.github.com/users/rcacho172/followers", "following_url": "https://api.github.com/users/rcacho172/following{/other_user}", "gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions", "organizations_url": "https://api.github.com/users/rcacho172/orgs", "repos_url": "https://api.github.com/users/rcacho172/repos", "events_url": "https://api.github.com/users/rcacho172/events{/privacy}", "received_events_url": "https://api.github.com/users/rcacho172/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,432,333,000
1,631,463,135,000
1,631,463,135,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2899/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2899/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2898/comments
https://api.github.com/repos/huggingface/datasets/issues/2898/events
https://github.com/huggingface/datasets/issues/2898
994,032,814
MDU6SXNzdWU5OTQwMzI4MTQ=
2,898
Hug emoji
{ "login": "Jackg-08", "id": 90539794, "node_id": "MDQ6VXNlcjkwNTM5Nzk0", "avatar_url": "https://avatars.githubusercontent.com/u/90539794?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jackg-08", "html_url": "https://github.com/Jackg-08", "followers_url": "https://api.github.com/users/Jackg-08/followers", "following_url": "https://api.github.com/users/Jackg-08/following{/other_user}", "gists_url": "https://api.github.com/users/Jackg-08/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jackg-08/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jackg-08/subscriptions", "organizations_url": "https://api.github.com/users/Jackg-08/orgs", "repos_url": "https://api.github.com/users/Jackg-08/repos", "events_url": "https://api.github.com/users/Jackg-08/events{/privacy}", "received_events_url": "https://api.github.com/users/Jackg-08/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,417,271,000
1,631,463,193,000
1,631,463,193,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2898/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2898/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2892
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2892/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2892/comments
https://api.github.com/repos/huggingface/datasets/issues/2892/events
https://github.com/huggingface/datasets/issues/2892
993,274,572
MDU6SXNzdWU5OTMyNzQ1NzI=
2,892
Error when encoding a dataset with None objects with a Sequence feature
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
[ "This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)" ]
1,631,283,103,000
1,631,542,693,000
1,631,542,662,000
MEMBER
null
There is an error when encoding a dataset with None objects with a Sequence feature. To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-24-40add67f8751> in <module> 2 data = {"a": [[0], None]} 3 features = Features({"a": Sequence(Value("int32"))}) ----> 4 dataset = Dataset.from_dict(data, features=features) [...] ~/datasets/features.py in encode_nested_example(schema, obj) 888 if isinstance(obj, str): # don't interpret a string as a list 889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj)) --> 890 return [encode_nested_example(schema.feature, o) for o in obj] 891 # Object with special encoding: 892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks TypeError: 'NoneType' object is not iterable ``` Instead, it should run without error, as if the `features` were not passed.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2892/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2892/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2890
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2890/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2890/comments
https://api.github.com/repos/huggingface/datasets/issues/2890/events
https://github.com/huggingface/datasets/issues/2890
993,074,102
MDU6SXNzdWU5OTMwNzQxMDI=
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
{ "login": "rcacho172", "id": 90449239, "node_id": "MDQ6VXNlcjkwNDQ5MjM5", "avatar_url": "https://avatars.githubusercontent.com/u/90449239?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcacho172", "html_url": "https://github.com/rcacho172", "followers_url": "https://api.github.com/users/rcacho172/followers", "following_url": "https://api.github.com/users/rcacho172/following{/other_user}", "gists_url": "https://api.github.com/users/rcacho172/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcacho172/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcacho172/subscriptions", "organizations_url": "https://api.github.com/users/rcacho172/orgs", "repos_url": "https://api.github.com/users/rcacho172/repos", "events_url": "https://api.github.com/users/rcacho172/events{/privacy}", "received_events_url": "https://api.github.com/users/rcacho172/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,267,477,000
1,631,274,329,000
1,631,274,329,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2890/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2890/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2889/comments
https://api.github.com/repos/huggingface/datasets/issues/2889/events
https://github.com/huggingface/datasets/issues/2889
992,968,382
MDU6SXNzdWU5OTI5NjgzODI=
2,889
Coc
{ "login": "Bwiggity", "id": 90444264, "node_id": "MDQ6VXNlcjkwNDQ0MjY0", "avatar_url": "https://avatars.githubusercontent.com/u/90444264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Bwiggity", "html_url": "https://github.com/Bwiggity", "followers_url": "https://api.github.com/users/Bwiggity/followers", "following_url": "https://api.github.com/users/Bwiggity/following{/other_user}", "gists_url": "https://api.github.com/users/Bwiggity/gists{/gist_id}", "starred_url": "https://api.github.com/users/Bwiggity/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Bwiggity/subscriptions", "organizations_url": "https://api.github.com/users/Bwiggity/orgs", "repos_url": "https://api.github.com/users/Bwiggity/repos", "events_url": "https://api.github.com/users/Bwiggity/events{/privacy}", "received_events_url": "https://api.github.com/users/Bwiggity/received_events", "type": "User", "site_admin": false }
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,631,259,127,000
1,631,274,354,000
1,631,274,354,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2889/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/2889/timeline
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/2888
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2888/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2888/comments
https://api.github.com/repos/huggingface/datasets/issues/2888/events
https://github.com/huggingface/datasets/issues/2888
992,676,535
MDU6SXNzdWU5OTI2NzY1MzU=
2,888
v1.11.1 release date
{ "login": "fcakyon", "id": 34196005, "node_id": "MDQ6VXNlcjM0MTk2MDA1", "avatar_url": "https://avatars.githubusercontent.com/u/34196005?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fcakyon", "html_url": "https://github.com/fcakyon", "followers_url": "https://api.github.com/users/fcakyon/followers", "following_url": "https://api.github.com/users/fcakyon/following{/other_user}", "gists_url": "https://api.github.com/users/fcakyon/gists{/gist_id}", "starred_url": "https://api.github.com/users/fcakyon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fcakyon/subscriptions", "organizations_url": "https://api.github.com/users/fcakyon/orgs", "repos_url": "https://api.github.com/users/fcakyon/repos", "events_url": "https://api.github.com/users/fcakyon/events{/privacy}", "received_events_url": "https://api.github.com/users/fcakyon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "Hi ! Probably 1.12 on monday :)\r\n", "@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)" ]
1,631,224,395,000
1,631,477,915,000
1,631,463,339,000
NONE
null
Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release for 2 months. When do you plan to publish the v1.11.1 release?
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/2888/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/2888/timeline
null
null
null
false