| column | dtype | range |
| --- | --- | --- |
| id | int64 | 599M – 3.29B |
| url | string | lengths 58–61 |
| html_url | string | lengths 46–51 |
| number | int64 | 1 – 7.72k |
| title | string | lengths 1–290 |
| state | string | 2 values |
| comments | int64 | 0 – 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| user_login | string | lengths 3–26 |
| labels | list | lengths 0–4 |
| body | string | lengths 0–228k |
| is_pull_request | bool | 2 classes |
2,075,645,042
https://api.github.com/repos/huggingface/datasets/issues/6580
https://github.com/huggingface/datasets/issues/6580
6,580
Dataset cache only stores one config of the dataset in the parquet dir and uses it for all other configs, resulting in the same data being shown for all configs.
closed
0
2024-01-11T03:14:18
2024-01-20T12:46:16
2024-01-20T12:46:16
kartikgupta321
[]
### Describe the bug
`ds = load_dataset("ai2_arc", "ARC-Easy")` returns the same data for every config. I have tried forcing a redownload, deleting the cache, and changing the cache dir.
### Steps to reproduce the bug
```python
from datasets import load_dataset

dataset = []
dataset_name = "ai2_arc"
possible_configs = [
    'ARC-Challenge',
    'ARC-Easy'
]
for config in possible_configs:
    dataset_slice = load_dataset(dataset_name, config, ignore_verifications=True, cache_dir='ai2_arc_files')
    dataset.append(dataset_slice)
```
### Expected behavior
All configs should be saved in the cache under their respective names.
### Environment info
ai2_arc
false
2,075,407,473
https://api.github.com/repos/huggingface/datasets/issues/6579
https://github.com/huggingface/datasets/issues/6579
6,579
Unable to load `eli5` dataset with streaming
closed
1
2024-01-10T23:44:20
2024-01-11T09:19:18
2024-01-11T09:19:17
haok1402
[]
### Describe the bug Unable to load `eli5` dataset with streaming. ### Steps to reproduce the bug This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions ``` from datasets import load_dataset load_dataset("eli5", streaming=True) ``` This works correctly. ``` from datasets import load_dataset load_dataset("eli5") ``` ### Expected behavior - Loading `eli5` dataset should not raise an error under the streaming mode. - Or at the very least, show a warning that streaming mode is not supported with `eli5` dataset. ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-6.2.0-39-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 - `fsspec` version: 2023.6.0
false
2,074,923,321
https://api.github.com/repos/huggingface/datasets/issues/6578
https://github.com/huggingface/datasets/pull/6578
6,578
Faster webdataset streaming
closed
3
2024-01-10T18:18:09
2024-01-30T18:46:02
2024-01-30T18:39:51
lhoestq
[]
`requests.get(..., stream=True)` is faster than using HTTP range requests when streaming large TAR files. It can be enabled by passing `block_size=0` in `fsspec`. cc @rwightman
true
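A minimal sketch of the streaming mode this PR describes, assuming a placeholder TAR URL; with `block_size=0`, fsspec's HTTP filesystem reads the file as one sequential stream instead of issuing ranged requests:

```python
import fsspec

# block_size=0 makes fsspec's HTTP filesystem open the file as a plain
# stream (a single requests.get(..., stream=True)) instead of fetching
# byte ranges block by block; sequential TAR reads benefit the most.
# The URL below is a placeholder, not a real shard.
with fsspec.open("https://example.com/shards/data-0000.tar", "rb", block_size=0) as f:
    while chunk := f.read(1 << 20):  # 1 MiB at a time
        pass  # hand the bytes to a tar/webdataset reader here
```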
2,074,790,848
https://api.github.com/repos/huggingface/datasets/issues/6577
https://github.com/huggingface/datasets/issues/6577
6,577
502 Server Errors when streaming large dataset
closed
6
2024-01-10T16:59:36
2024-02-12T11:46:03
2024-01-15T16:05:44
sanchit-gandhi
[ "streaming" ]
### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hub (~3TB) I often encounter 502 Server Errors, seemingly randomly, during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet
```
This is despite the parquet file definitely existing on the Hub: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/blob/main/train/train-00228-of-07135.parquet
And having the correct commit id: [7d2acc5c59de848e456e951a76e805304d6fb350](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/commits/main/train)
I’m wondering whether this is coming from datasets? Or from the Hub side?
### Steps to reproduce the bug
Reproducer:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from tqdm import tqdm

NUM_EPOCHS = 20

dataset = load_dataset("sanchit-gandhi/concatenated-train-set", "train", streaming=True)
dataset = dataset.with_format("torch")
dataloader = DataLoader(dataset["train"], batch_size=256, drop_last=True, pin_memory=True, num_workers=16)

for epoch in tqdm(range(NUM_EPOCHS), desc="Epoch", position=0):
    for batch in tqdm(dataloader, desc="Batch", position=1):
        continue
```
Running the above script tends to fail within about 2 hours with a traceback like the following:
<details>
<summary> Traceback: </summary>

```python
1029 for batch in train_loader: 1030 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 630, in __next__ 1031 data = self._next_data() 1032 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1325, in _next_data 1033 return self._process_data(data) 1034 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data 1035 data.reraise() 1036 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/_utils.py", line 694, in reraise 1037 raise exception 1038 huggingface_hub.utils._errors.HfHubHTTPError: Caught HfHubHTTPError in DataLoader worker process 10. 
1039 Original Traceback (most recent call last): 1040 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 286, in hf_raise_for_status 1041 response.raise_for_status() 1042 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/requests/models.py", line 1021, in raise_for_status 1043 raise HTTPError(http_error_msg, response=self) 1044 requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet 1045 The above exception was the direct cause of the following exception: 1046 Traceback (most recent call last): 1047 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop 1048 data = fetcher.fetch(index) 1049 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch 1050 data.append(next(self.dataset_iter)) 1051 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1363, in __iter__ 1052 yield from self._iter_pytorch() 1053 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1298, in _iter_pytorch 1054 for key, example in ex_iterable: 1055 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 983, in __iter__ 1056 for x in self.ex_iterable: 1057 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__ 1058 yield from self._iter() 1059 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter 1060 for key, example in iterator: 1061 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__ 1062 yield from self._iter() 1063 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter 1064 for key, example in iterator: 1065 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 863, in __iter__ 1066 yield from self._iter() 1067 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 900, in _iter 1068 for key, example in iterator: 1069 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__ 1070 for key, example in self.ex_iterable: 1071 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 679, in __iter__ 1072 yield from self._iter() 1073 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 741, in _iter 1074 for key, example in iterator: 1075 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 1115, in __iter__ 1076 for key, example in self.ex_iterable: 1077 File "/home/sanchitgandhi/datasets/src/datasets/iterable_dataset.py", line 282, in __iter__ 1078 for key, pa_table in self.generate_tables_fn(**self.kwargs): 1079 File "/home/sanchitgandhi/datasets/src/datasets/packaged_modules/parquet/parquet.py", line 87, in _generate_tables 1080 for batch_idx, record_batch in enumerate( 1081 File "pyarrow/_parquet.pyx", line 1367, in iter_batches 1082 File "pyarrow/types.pxi", line 88, in pyarrow.lib._datatype_to_pep3118 1083 File "/home/sanchitgandhi/datasets/src/datasets/download/streaming_download_manager.py", line 341, in read_with_retries 1084 out = read(*args, **kwargs) 1085 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/spec.py", line 1856, in read 1086 out = self.cache._fetch(self.loc, self.loc + length) 1087 File 
"/home/sanchitgandhi/hf/lib/python3.8/site-packages/fsspec/caching.py", line 189, in _fetch 1088 self.cache = self.fetcher(start, end) # new block replaces old 1089 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/hf_file_system.py", line 626, in _fetch_range 1090 hf_raise_for_status(r) 1091 File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/huggingface_hub/utils/_errors.py", line 333, in hf_raise_for_status 1092 raise HfHubHTTPError(str(e), response=response) from e 1093 huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set/resolve/7d2acc5c59de848e456e951a76e805304d6fb350/train/train-00288-of-07135.parquet ``` </details> ### Expected behavior Should be able to stream the dataset without any 502 error. ### Environment info - `datasets` version: 2.16.2.dev0 - Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - `huggingface_hub` version: 0.20.1 - PyArrow version: 14.0.2 - Pandas version: 2.0.3 - `fsspec` version: 2023.10.0
false
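As a hedged mitigation sketch (not from the issue itself): `read_with_retries`, visible in the traceback above, retries transient streaming errors, and its behavior is controlled by module-level knobs in `datasets.config`; whether a 502 raised as `HfHubHTTPError` is covered depends on the `datasets` version, so treat this as a partial mitigation only.

```python
import datasets.config

# Make streaming reads more tolerant of intermittent server errors by
# raising the retry knobs used by read_with_retries (values illustrative;
# check the defaults shipped with your datasets version).
datasets.config.STREAMING_READ_MAX_RETRIES = 30
datasets.config.STREAMING_READ_RETRY_INTERVAL = 10  # seconds between retries
```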
2,073,710,124
https://api.github.com/repos/huggingface/datasets/issues/6576
https://github.com/huggingface/datasets/issues/6576
6,576
Documentation page 404 not found after redirection
closed
1
2024-01-10T06:48:14
2024-01-17T14:01:31
2024-01-17T14:01:31
annahung31
[]
### Describe the bug
The redirected page encountered 404 not found.
### Steps to reproduce the bug
1. In this tutorial: https://huggingface.co/learn/nlp-course/chapter5/4?fw=pt original md: https://github.com/huggingface/course/blob/2c733c2246b8b7e0e6f19a9e5d15bb12df43b2a3/chapters/en/chapter5/4.mdx#L49
```
By default, 🤗 Datasets will decompress the files needed to load a dataset. If you want to preserve hard drive space, you can pass `DownloadConfig(delete_extracted=True)` to the `download_config` argument of `load_dataset()`. See the [documentation](https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig) for more details.
```
The documentation points to `https://huggingface.co/docs/datasets/package_reference/builder_classes.html?#datasets.utils.DownloadConfig` and it shows `The documentation page PACKAGE_REFERENCE/BUILDER_CLASSES.HTML doesn’t exist in v2.16.1, but exists on the main version. Click [here](https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html) to redirect to the main version of the documentation.`
But the redirected website `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes.html` is 404 not found.
### Expected behavior
I guess the redirected website should be `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes` (without `.html`) or `https://huggingface.co/docs/datasets/main/en/package_reference/builder_classes#datasets.DownloadConfig`.
### Environment info
Datasets main
false
2,072,617,406
https://api.github.com/repos/huggingface/datasets/issues/6575
https://github.com/huggingface/datasets/pull/6575
6,575
[IterableDataset] Fix `drop_last_batch` in map after shuffling or sharding
closed
2
2024-01-09T15:35:31
2024-01-11T16:16:54
2024-01-11T16:10:30
lhoestq
[]
It was not taken into account, e.g., when passing to a DataLoader with num_workers>0. Fix https://github.com/huggingface/datasets/issues/6565
true
2,072,579,549
https://api.github.com/repos/huggingface/datasets/issues/6574
https://github.com/huggingface/datasets/pull/6574
6,574
Fix tests based on datasets that used to have scripts
closed
2
2024-01-09T15:16:16
2024-01-09T16:11:33
2024-01-09T16:05:13
lhoestq
[]
...now that `squad` and `paws` don't have a script anymore
true
2,072,553,951
https://api.github.com/repos/huggingface/datasets/issues/6573
https://github.com/huggingface/datasets/pull/6573
6,573
[WebDataset] Audio support and bug fixes
closed
2
2024-01-09T15:03:04
2024-01-11T16:17:28
2024-01-11T16:11:04
lhoestq
[]
- Add audio support - Fix an issue where user-provided features with additional fields are not taken into account Close https://github.com/huggingface/datasets/issues/6569
true
2,072,384,281
https://api.github.com/repos/huggingface/datasets/issues/6572
https://github.com/huggingface/datasets/pull/6572
6,572
Adding option for multipart archive download
closed
1
2024-01-09T13:35:44
2024-02-25T08:13:01
2024-02-25T08:13:01
jpodivin
[]
Right now we can only download multiple separate archives or a single file archive, but not multipart archives, such as those produced by `tar --multi-volume`. This PR allows for downloading and extraction of archives split into multiple parts. With the new `multi_part` field of the `DownloadConfig` set, the downloader will first retrieve all the files and attempt to concatenate them before starting extraction. This will obviously fail if the retrieved files are actually multiple separate archives, so the option is set to `False` by default. Tests and docs incoming.
true
2,072,111,000
https://api.github.com/repos/huggingface/datasets/issues/6571
https://github.com/huggingface/datasets/issues/6571
6,571
Make DatasetDict.column_names return a list instead of dict
open
0
2024-01-09T10:45:17
2024-01-09T10:45:17
null
albertvillanova
[ "enhancement" ]
Currently, `DatasetDict.column_names` returns a dict, with each split name as keys and the corresponding list of column names as values. However, by construction, all splits have the same column names. I think it makes more sense to return a single list with the column names, which is the same for all the split keys.
false
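A small sketch of the current behavior versus the proposal (the dataset used here is illustrative):

```python
from datasets import load_dataset

ds = load_dataset("rajpurkar/squad")  # illustrative dataset with two splits

# Current behavior: one list per split, even though they are identical
print(ds.column_names)
# {'train': ['id', 'title', 'context', 'question', 'answers'],
#  'validation': ['id', 'title', 'context', 'question', 'answers']}

# Proposed behavior: a single list shared by all splits
# ['id', 'title', 'context', 'question', 'answers']
```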
2,071,805,265
https://api.github.com/repos/huggingface/datasets/issues/6570
https://github.com/huggingface/datasets/issues/6570
6,570
No online docs for 2.16 release
closed
7
2024-01-09T07:43:30
2024-01-09T16:45:50
2024-01-09T16:45:50
albertvillanova
[ "bug", "documentation" ]
We do not have the online docs for the latest minor release 2.16 (neither 2.16.0 nor 2.16.1). In the online docs, the latest version appearing is 2.15.0: https://huggingface.co/docs/datasets/index ![Screenshot from 2024-01-09 08-43-08](https://github.com/huggingface/datasets/assets/8515462/83613222-867f-41f4-8833-7a4a76582f44)
false
2,070,251,122
https://api.github.com/repos/huggingface/datasets/issues/6569
https://github.com/huggingface/datasets/issues/6569
6,569
WebDataset ignores features defined in YAML or passed to load_dataset
closed
0
2024-01-08T11:24:21
2024-01-11T16:11:06
2024-01-11T16:11:05
lhoestq
[]
We should not override the features if they already exist: https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/packaged_modules/webdataset/webdataset.py#L78-L85
false
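A sketch of the guard the issue asks for, using a hypothetical helper name; the idea is that features supplied via YAML or `load_dataset(..., features=...)` win over the ones inferred from the first TAR samples:

```python
from typing import Optional

from datasets import Features

def resolve_features(user_features: Optional[Features], inferred_features: Features) -> Features:
    # Hypothetical helper: keep the user/YAML-provided schema when present
    # and only fall back to inference when nothing was provided.
    return user_features if user_features is not None else inferred_features
```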
2,069,922,151
https://api.github.com/repos/huggingface/datasets/issues/6568
https://github.com/huggingface/datasets/issues/6568
6,568
keep_in_memory=True does not seem to work
open
6
2024-01-08T08:03:58
2024-01-13T04:53:04
null
kopyl
[]
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794). But a new issue came up :(
false
2,069,808,842
https://api.github.com/repos/huggingface/datasets/issues/6567
https://github.com/huggingface/datasets/issues/6567
6,567
AttributeError: 'str' object has no attribute 'to'
closed
3
2024-01-08T06:40:21
2024-01-08T11:56:19
2024-01-08T10:03:17
andysingal
[]
### Describe the bug ``` -------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>() 8 report_to="wandb") 9 ---> 10 trainer = Trainer( 11 model=model, 12 args=training_args, 1 frames [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _move_model_to_device(self, model, device) 688 689 def _move_model_to_device(self, model, device): --> 690 model = model.to(device) 691 # Moving a model to an XLA device disconnects the tied weights, so we have to retie them. 692 if self.args.parallel_mode == ParallelMode.TPU and hasattr(model, "tie_weights"): AttributeError: 'str' object has no attribute 'to' ``` ### Steps to reproduce the bug here is the notebook: ``` https://colab.research.google.com/drive/10JDBNsLlYrQdnI2FWfDK3F5M8wvVUDXG?usp=sharing ``` ### Expected behavior run the Training ### Environment info Colab Notebook , T4
false
2,069,495,429
https://api.github.com/repos/huggingface/datasets/issues/6566
https://github.com/huggingface/datasets/issues/6566
6,566
Training controlnet_sdxl in bf16 datatype raises an unsupported ScalarType BFloat16 error in datasets
closed
1
2024-01-08T02:37:03
2024-06-02T14:24:39
2024-05-17T09:40:14
HelloWorldBeginner
[ "bug" ]
### Describe the bug
```
Traceback (most recent call last):
  File "train_controlnet_sdxl.py", line 1252, in <module>
    main(args)
  File "train_controlnet_sdxl.py", line 1013, in main
    train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 592, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3093, in map
    for rank, done, content in Dataset._map_single(**dataset_kwargs):
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3489, in _map_single
    writer.write_batch(batch)
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 557, in write_batch
    arrays.append(pa.array(typed_sequence))
  File "pyarrow/array.pxi", line 248, in pyarrow.lib.array
  File "pyarrow/array.pxi", line 113, in pyarrow.lib._handle_arrow_array_protocol
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/arrow_writer.py", line 191, in __arrow_array__
    out = pa.array(cast_to_python_objects(data, only_1d_for_numpy=True))
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 447, in cast_to_python_objects
    return _cast_to_python_objects(
  File "/home/miniconda3/envs/mhh_df/lib/python3.8/site-packages/datasets/features/features.py", line 324, in _cast_to_python_objects
    for x in obj.detach().cpu().numpy()
TypeError: Got unsupported ScalarType BFloat16
```
### Steps to reproduce the bug
Here is my training script. I use the BF16 type and train my model with diffusers.
```
export MODEL_DIR="/home/mhh/sd_models/stable-diffusion-xl-base-1.0"
export OUTPUT_DIR="./control_net"
export VAE_NAME="/home/mhh/sd_models/sdxl-vae-fp16-fix"

accelerate launch train_controlnet_sdxl.py \
 --pretrained_model_name_or_path=$MODEL_DIR \
 --output_dir=$OUTPUT_DIR \
 --pretrained_vae_model_name_or_path=$VAE_NAME \
 --dataset_name=/home/mhh/sd_datasets/fusing/fill50k \
 --mixed_precision="bf16" \
 --resolution=1024 \
 --learning_rate=1e-5 \
 --max_train_steps=200 \
 --validation_image "/home/mhh/sd_datasets/controlnet_image/conditioning_image_1.png" "/home/mhh/sd_datasets/controlnet_image/conditioning_image_2.png" \
 --validation_prompt "red circle with blue background" "cyan circle with brown floral background" \
 --validation_steps=50 \
 --train_batch_size=1 \
 --gradient_accumulation_steps=4 \
 --report_to="wandb" \
 --seed=42 \
```
### Expected behavior
When I changed the data type to fp16, it worked.
### Environment info
datasets 2.16.1
numpy 1.24.4
false
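Arrow has no bfloat16 type, which is why the write fails. A hedged workaround sketch: cast any bf16 tensors to a supported dtype inside the function passed to `Dataset.map` before `datasets` serializes the batch.

```python
import torch

def make_arrow_friendly(batch: dict) -> dict:
    # Arrow cannot store torch.bfloat16, so cast such tensors to float32
    # (or float16) before returning the batch from Dataset.map(batched=True).
    return {
        key: value.to(torch.float32)
        if isinstance(value, torch.Tensor) and value.dtype == torch.bfloat16
        else value
        for key, value in batch.items()
    }
```

For the script above, this would wrap the output of `compute_embeddings_fn` before it is returned.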
2,068,939,670
https://api.github.com/repos/huggingface/datasets/issues/6565
https://github.com/huggingface/datasets/issues/6565
6,565
`drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
closed
2
2024-01-07T02:46:50
2025-03-08T09:46:05
2024-01-11T16:10:31
naba89
[]
### Describe the bug Scenario: - Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't have two samples. What works: - Using DataLoader with `num_workers=0` What does not work: - Using DataLoader with `num_workers=1`, errors in the last batch. Basically, `drop_last_batch=True` is ignored when using multiple dataloading workers. Please take a look at the minimal repro script below. ### Steps to reproduce the bug ```python from datasets import Dataset, interleave_datasets from torch.utils.data import DataLoader def merge_samples(batch): assert len(batch['a']) == 2, "Batch size must be 2" batch['c'] = [batch['a'][0]] batch['d'] = [batch['a'][1]] return batch def gen1(): for ii in range(1, 8385): yield {"a": ii} def gen2(): for ii in range(1, 5302): yield {"a": ii} if __name__ == '__main__': dataset1 = Dataset.from_generator(gen1).to_iterable_dataset(num_shards=1024) dataset2 = Dataset.from_generator(gen2).to_iterable_dataset(num_shards=1024) interleaved = interleave_datasets([dataset1, dataset2], stopping_strategy="all_exhausted") mapped = interleaved.map(merge_samples, batched=True, batch_size=2, remove_columns=interleaved.column_names, drop_last_batch=True) # Works loader = DataLoader(mapped, batch_size=32, num_workers=0) i = 0 for b in loader: print(i, b['c'].shape, b['d'].shape) i += 1 print("DataLoader with num_workers=0 works") # Doesn't work loader = DataLoader(mapped, batch_size=32, num_workers=1) i = 0 for b in loader: print(i, b['c'].shape, b['d'].shape) i += 1 ``` ### Expected behavior `drop_last_batch=True` should have same behaviour for `num_workers=0` and `num_workers>=1` ### Environment info - `datasets` version: 2.16.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.12 - `huggingface_hub` version: 0.20.2 - PyArrow version: 12.0.1 - Pandas version: 2.0.3 - `fsspec` version: 2023.6.0 I have also tested on Linux and got the same behavior.
false
2,068,893,194
https://api.github.com/repos/huggingface/datasets/issues/6564
https://github.com/huggingface/datasets/issues/6564
6,564
`Dataset.filter` missing `with_rank` parameter
closed
2
2024-01-06T23:48:13
2024-01-29T16:36:55
2024-01-29T16:36:54
kopyl
[]
### Describe the bug
The following issue should be reopened: https://github.com/huggingface/datasets/issues/6435
When I try to pass `with_rank` to `Dataset.filter()`, I get this: `Dataset.filter() got an unexpected keyword argument 'with_rank'`
### Steps to reproduce the bug
Run the notebook: https://colab.research.google.com/drive/1WUNKph8BdP0on5ve3gQnh_PE0cFLQqTn?usp=sharing
### Expected behavior
It should work.
### Environment info
NVIDIA RTX 4090
false
2,068,302,402
https://api.github.com/repos/huggingface/datasets/issues/6563
https://github.com/huggingface/datasets/issues/6563
6,563
`ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py)
closed
7
2024-01-06T02:28:54
2024-03-14T02:59:42
2024-01-06T16:13:27
wasertech
[]
### Describe the bug Yep its not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore. ```text + python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_size 4 --num_train_epochs 1 --learning_rate 1.41e-5 --gradient_accumulation_steps 8 --seq_length 4096 --output_dir output --log_with wandb Traceback (most recent call last): File "/home/trainer/sft_train.py", line 22, in <module> from datasets import load_dataset File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/__init__.py", line 22, in <module> from .arrow_dataset import Dataset File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 66, in <module> from .arrow_reader import ArrowReader File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/arrow_reader.py", line 30, in <module> from .download.download_config import DownloadConfig File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/__init__.py", line 9, in <module> from .download_manager import DownloadManager, DownloadMode File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/download/download_manager.py", line 31, in <module> from ..utils import tqdm as hf_tqdm File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/__init__.py", line 19, in <module> from .info_utils import VerificationMode File "/home/trainer/llm-train/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 5, in <module> from huggingface_hub.utils import insecure_hashlib ImportError: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (/home/trainer/llm-train/lib/python3.8/site-packages/huggingface_hub/utils/__init__.py) ``` ### Steps to reproduce the bug Using `datasets==2.16.1` and `huggingface_hub== 0.17.3`, load a dataset with `load_dataset`. ### Expected behavior The dataset should be (downloaded - if needed - and) returned. ### Environment info ```text trainer@a311ae86939e:/mnt$ pip show datasets Name: datasets Version: 2.16.1 Summary: HuggingFace community-driven open-source library of datasets Home-page: https://github.com/huggingface/datasets Author: HuggingFace Inc. Author-email: thomas@huggingface.co License: Apache 2.0 Location: /home/trainer/llm-train/lib/python3.8/site-packages Requires: packaging, pyyaml, multiprocess, pyarrow-hotfix, pandas, pyarrow, xxhash, dill, numpy, aiohttp, tqdm, fsspec, requests, filelock, huggingface-hub Required-by: trl, lm-eval, evaluate trainer@a311ae86939e:/mnt$ pip show huggingface_hub Name: huggingface-hub Version: 0.17.3 Summary: Client library to download and publish models, datasets and other repos on the huggingface.co hub Home-page: https://github.com/huggingface/huggingface_hub Author: Hugging Face, Inc. Author-email: julien@huggingface.co License: Apache Location: /home/trainer/llm-train/lib/python3.8/site-packages Requires: requests, pyyaml, packaging, typing-extensions, tqdm, filelock, fsspec Required-by: transformers, tokenizers, peft, evaluate, datasets, accelerate trainer@a311ae86939e:/mnt$ huggingface-cli env Copy-and-paste the text below in your GitHub issue. 
- huggingface_hub version: 0.17.3 - Platform: Linux-6.5.13-7-MANJARO-x86_64-with-glibc2.29 - Python version: 3.8.10 - Running in iPython ?: No - Running in notebook ?: No - Running in Google Colab ?: No - Token path ?: /home/trainer/.cache/huggingface/token - Has saved token ?: True - Who am I ?: wasertech - Configured git credential helpers: - FastAI: N/A - Tensorflow: N/A - Torch: 2.1.2 - Jinja2: 3.1.2 - Graphviz: N/A - Pydot: N/A - Pillow: 10.2.0 - hf_transfer: N/A - gradio: N/A - tensorboard: N/A - numpy: 1.24.4 - pydantic: N/A - aiohttp: 3.9.1 - ENDPOINT: https://huggingface.co - HUGGINGFACE_HUB_CACHE: /home/trainer/.cache/huggingface/hub - HUGGINGFACE_ASSETS_CACHE: /home/trainer/.cache/huggingface/assets - HF_TOKEN_PATH: /home/trainer/.cache/huggingface/token - HF_HUB_OFFLINE: False - HF_HUB_DISABLE_TELEMETRY: False - HF_HUB_DISABLE_PROGRESS_BARS: None - HF_HUB_DISABLE_SYMLINKS_WARNING: False - HF_HUB_DISABLE_EXPERIMENTAL_WARNING: False - HF_HUB_DISABLE_IMPLICIT_TOKEN: False - HF_HUB_ENABLE_HF_TRANSFER: False ```
false
2,067,904,504
https://api.github.com/repos/huggingface/datasets/issues/6562
https://github.com/huggingface/datasets/issues/6562
6,562
datasets.DownloadMode.FORCE_REDOWNLOAD still uses cached dataset features with the load_dataset function
open
0
2024-01-05T19:10:25
2024-01-05T19:10:25
null
LsTam91
[]
### Describe the bug
I have updated my dataset by adding a new feature and pushed it to the Hub. When I want to download it on my machine, which contains the old version, using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`, I get an error (pasted below). It seems that the load_dataset function still uses the old features schema instead of downloading everything new from the Hub. I found a way to work around this issue by manually deleting the old dataset cache, but from my understanding of the `datasets.DownloadMode.FORCE_REDOWNLOAD` option, the dataset cache should be ignored.
### Steps to reproduce the bug
1. Download your dataset on your machine using `datasets.load_dataset`
2. Create a new feature in your dataset and push it to the hub
3. On the same machine, redownload your dataset using `datasets.load_dataset("your_dataset_name", download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD)`
### Expected behavior
```
ValueError: Couldn't cast
id: string
level: string
context: list<element: string>
  child 0, element: string
type: string
answer: string
question: string
supporting_facts: list<element: string>
  child 0, element: string
fra_answer: string
fra_question: string
-- schema metadata --
huggingface: '{"info": {"features": {"id": {"dtype": "string", "_type": "' + 490
to
{'id': Value(dtype='string', id=None), 'level': Value(dtype='string', id=None), 'context': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'type': Value(dtype='string', id=None), 'answer': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'supporting_facts': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
because column names don't match

The above exception was the direct cause of the following exception:

DatasetGenerationError
...
DatasetGenerationError: An error occurred while generating the dataset
```
### Environment info
datasets-2.16.1 huggingface-hub-0.20.2
false
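A sketch of the manual workaround mentioned above, with an illustrative dataset name and the default cache layout as assumptions:

```python
import os
import shutil

from datasets import load_dataset

# Drop the stale cache entry by hand, then reload from the Hub.
# "your_dataset_name" and the cache layout below are illustrative.
cache_root = os.path.expanduser("~/.cache/huggingface/datasets")
shutil.rmtree(os.path.join(cache_root, "your_dataset_name"), ignore_errors=True)
dataset = load_dataset("your_dataset_name")
```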
2,067,404,951
https://api.github.com/repos/huggingface/datasets/issues/6561
https://github.com/huggingface/datasets/issues/6561
6,561
Document YAML configuration with "data_dir"
open
2
2024-01-05T14:03:33
2025-08-05T07:50:17
null
severo
[ "documentation" ]
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
false
2,065,637,625
https://api.github.com/repos/huggingface/datasets/issues/6560
https://github.com/huggingface/datasets/issues/6560
6,560
Support Video
closed
1
2024-01-04T13:10:58
2024-08-23T09:51:27
2024-08-23T09:51:27
yuvalkirstain
[ "duplicate", "enhancement" ]
### Feature request
HF datasets are awesome in supporting text and images. It would be great to see such support for videos :)
### Motivation
Video generation :)
### Your contribution
Will probably be limited to raising this feature request ;)
false
2,065,118,332
https://api.github.com/repos/huggingface/datasets/issues/6559
https://github.com/huggingface/datasets/issues/6559
6,559
Latest version 2.16.1: when loading a dataset, an error occurs: ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
closed
8
2024-01-04T07:04:48
2024-04-03T10:40:53
2024-01-05T01:26:25
zhulinJulia24
[]
### Describe the bug
The Python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
The script succeeds when the datasets version is 2.14.7. When using 2.16.1, an error occurs:
`ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']`
### Steps to reproduce the bug
1. pip install datasets==2.16.1
2. Run the Python script:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
### Expected behavior
The dataset should be loaded successfully in the latest version.
### Environment info
datasets 2.16.1
false
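Since `datasets` 2.16 resolves `allenai/c4` directly from the Hub, the script-era `allenai--c4` config name no longer exists there; a hedged sketch of the adjusted call simply drops that legacy name:

```python
from datasets import load_dataset

# Sketch of the likely fix, not a verbatim maintainer answer: the Hub repo
# exposes a "default" config when data_files is passed, so the old
# "allenai--c4" name must be removed.
dataset = load_dataset(
    "allenai/c4",
    data_files={"train": "en/c4-train.00000-of-01024.json.gz"},
    split="train",
)
```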
2,064,885,984
https://api.github.com/repos/huggingface/datasets/issues/6558
https://github.com/huggingface/datasets/issues/6558
6,558
OSError: image file is truncated (1 bytes not processed) #28323
closed
1
2024-01-04T02:15:13
2024-02-21T00:38:12
2024-02-21T00:38:12
andysingal
[]
### Describe the bug ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) Cell In[24], line 28 23 return example 25 # Filter the dataset 26 # filtered_dataset = dataset.filter(contains_number) 27 # Add the 'label' field in the dataset ---> 28 labeled_dataset = dataset.filter(contains_number).map(add_label) 29 # View the structure of the updated dataset 30 print(labeled_dataset) File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:975, in DatasetDict.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc) 972 if cache_file_names is None: 973 cache_file_names = {k: None for k in self} 974 return DatasetDict( --> 975 { 976 k: dataset.filter( 977 function=function, 978 with_indices=with_indices, 979 input_columns=input_columns, 980 batched=batched, 981 batch_size=batch_size, 982 keep_in_memory=keep_in_memory, 983 load_from_cache_file=load_from_cache_file, 984 cache_file_name=cache_file_names[k], 985 writer_batch_size=writer_batch_size, 986 fn_kwargs=fn_kwargs, 987 num_proc=num_proc, 988 desc=desc, 989 ) 990 for k, dataset in self.items() 991 } 992 ) File /usr/local/lib/python3.10/dist-packages/datasets/dataset_dict.py:976, in <dictcomp>(.0) 972 if cache_file_names is None: 973 cache_file_names = {k: None for k in self} 974 return DatasetDict( 975 { --> 976 k: dataset.filter( 977 function=function, 978 with_indices=with_indices, 979 input_columns=input_columns, 980 batched=batched, 981 batch_size=batch_size, 982 keep_in_memory=keep_in_memory, 983 load_from_cache_file=load_from_cache_file, 984 cache_file_name=cache_file_names[k], 985 writer_batch_size=writer_batch_size, 986 fn_kwargs=fn_kwargs, 987 num_proc=num_proc, 988 desc=desc, 989 ) 990 for k, dataset in self.items() 991 } 992 ) File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File /usr/local/lib/python3.10/dist-packages/datasets/fingerprint.py:481, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 477 validate_fingerprint(kwargs[fingerprint_name]) 479 # Call actual function --> 481 out = func(dataset, *args, **kwargs) 483 # Update fingerprint of in-place transforms + update in-place history of transforms 485 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3623, in Dataset.filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3620 if len(self) == 0: 3621 return self -> 3623 indices = self.map( 3624 function=partial( 3625 get_indices_from_mask_function, function, batched, with_indices, input_columns, self._indices 3626 ), 3627 with_indices=True, 3628 features=Features({"indices": Value("uint64")}), 3629 batched=True, 3630 batch_size=batch_size, 3631 
remove_columns=self.column_names, 3632 keep_in_memory=keep_in_memory, 3633 load_from_cache_file=load_from_cache_file, 3634 cache_file_name=cache_file_name, 3635 writer_batch_size=writer_batch_size, 3636 fn_kwargs=fn_kwargs, 3637 num_proc=num_proc, 3638 suffix_template=suffix_template, 3639 new_fingerprint=new_fingerprint, 3640 input_columns=input_columns, 3641 desc=desc or "Filter", 3642 ) 3643 new_dataset = copy.deepcopy(self) 3644 new_dataset._indices = indices.data File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:592, in transmit_tasks.<locals>.wrapper(*args, **kwargs) 590 self: "Dataset" = kwargs.pop("self") 591 # apply actual function --> 592 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 593 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 594 for dataset in datasets: 595 # Remove task templates if a column mapping of the template is no longer valid File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:557, in transmit_format.<locals>.wrapper(*args, **kwargs) 550 self_format = { 551 "type": self._format_type, 552 "format_kwargs": self._format_kwargs, 553 "columns": self._format_columns, 554 "output_all_columns": self._output_all_columns, 555 } 556 # apply actual function --> 557 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 558 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 559 # re-apply format to the output File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3093, in Dataset.map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 3087 if transformed_dataset is None: 3088 with hf_tqdm( 3089 unit=" examples", 3090 total=pbar_total, 3091 desc=desc or "Map", 3092 ) as pbar: -> 3093 for rank, done, content in Dataset._map_single(**dataset_kwargs): 3094 if done: 3095 shards_done += 1 File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3470, in Dataset._map_single(shard, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset) 3466 indices = list( 3467 range(*(slice(i, i + batch_size).indices(shard.num_rows))) 3468 ) # Something simpler? 3469 try: -> 3470 batch = apply_function_on_filtered_inputs( 3471 batch, 3472 indices, 3473 check_same_num_examples=len(shard.list_indexes()) > 0, 3474 offset=offset, 3475 ) 3476 except NumExamplesMismatchError: 3477 raise DatasetTransformationNotAllowedError( 3478 "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index() to remove your index and then re-add it." 
3479 ) from None File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:3349, in Dataset._map_single.<locals>.apply_function_on_filtered_inputs(pa_inputs, indices, check_same_num_examples, offset) 3347 if with_rank: 3348 additional_args += (rank,) -> 3349 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 3350 if isinstance(processed_inputs, LazyDict): 3351 processed_inputs = { 3352 k: v for k, v in processed_inputs.data.items() if k not in processed_inputs.keys_to_format 3353 } File /usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py:6212, in get_indices_from_mask_function(function, batched, with_indices, input_columns, indices_mapping, *args, **fn_kwargs) 6209 if input_columns is None: 6210 # inputs only contains a batch of examples 6211 batch: dict = inputs[0] -> 6212 num_examples = len(batch[next(iter(batch.keys()))]) 6213 for i in range(num_examples): 6214 example = {key: batch[key][i] for key in batch} File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:272, in LazyDict.__getitem__(self, key) 270 value = self.data[key] 271 if key in self.keys_to_format: --> 272 value = self.format(key) 273 self.data[key] = value 274 self.keys_to_format.remove(key) File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:375, in LazyBatch.format(self, key) 374 def format(self, key): --> 375 return self.formatter.format_column(self.pa_table.select([key])) File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:442, in PythonFormatter.format_column(self, pa_table) 440 def format_column(self, pa_table: pa.Table) -> list: 441 column = self.python_arrow_extractor().extract_column(pa_table) --> 442 column = self.python_features_decoder.decode_column(column, pa_table.column_names[0]) 443 return column File /usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py:218, in PythonFeaturesDecoder.decode_column(self, column, column_name) 217 def decode_column(self, column: list, column_name: str) -> list: --> 218 return self.features.decode_column(column, column_name) if self.features else column File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in Features.decode_column(self, column, column_name) 1938 def decode_column(self, column: list, column_name: str): 1939 """Decode column with custom feature decoding. 1940 1941 Args: (...) 1948 `list[Any]` 1949 """ 1950 return ( -> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column] 1952 if self._column_requires_decoding[column_name] 1953 else column 1954 ) File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1951, in <listcomp>(.0) 1938 def decode_column(self, column: list, column_name: str): 1939 """Decode column with custom feature decoding. 1940 1941 Args: (...) 
1948 `list[Any]` 1949 """ 1950 return ( -> 1951 [decode_nested_example(self[column_name], value) if value is not None else None for value in column] 1952 if self._column_requires_decoding[column_name] 1953 else column 1954 ) File /usr/local/lib/python3.10/dist-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id) 1336 elif isinstance(schema, (Audio, Image)): 1337 # we pass the token to read and decode files from private repositories in streaming mode 1338 if obj is not None and schema.decode: -> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1340 return obj File /usr/local/lib/python3.10/dist-packages/datasets/features/image.py:185, in Image.decode_example(self, value, token_per_repo_id) 183 else: 184 image = PIL.Image.open(BytesIO(bytes_)) --> 185 image.load() # to avoid "Too many open files" errors 186 return image File /usr/local/lib/python3.10/dist-packages/PIL/ImageFile.py:254, in ImageFile.load(self) 252 break 253 else: --> 254 raise OSError( 255 "image file is truncated " 256 f"({len(b)} bytes not processed)" 257 ) 259 b = b + s 260 n, err_code = decoder.decode(b) OSError: image file is truncated (1 bytes not processed) ``` ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset("mehul7/captioned_military_aircraft") from transformers import AutoImageProcessor checkpoint = "microsoft/resnet-50" image_processor = AutoImageProcessor.from_pretrained(checkpoint) import re from PIL import Image import io def contains_number(example): try: image = Image.open(io.BytesIO(example["image"]['bytes'])) t = image_processor(images=image, return_tensors="pt")['pixel_values'] except Exception as e: print(f"Error processing image:{example['text']}") return False return bool(re.search(r'\d', example['text'])) # Define a function to add the 'label' field def add_label(example): lab = example['text'].split() temp = 'NOT' for item in lab: if str(item[-1]).isdigit(): temp = item break example['label'] = temp return example # Filter the dataset # filtered_dataset = dataset.filter(contains_number) # Add the 'label' field in the dataset labeled_dataset = dataset.filter(contains_number).map(add_label) # View the structure of the updated dataset print(labeled_dataset) ``` ### Expected behavior needs to form labels same as : https://www.kaggle.com/code/jiabaowangts/dataset-air/notebook ### Environment info Kaggle notebook P100
false
2,064,341,965
https://api.github.com/repos/huggingface/datasets/issues/6557
https://github.com/huggingface/datasets/pull/6557
6,557
Support standalone yaml
closed
4
2024-01-03T16:47:35
2024-01-11T17:59:51
2024-01-11T17:53:42
lhoestq
[]
see (internal) https://huggingface.slack.com/archives/C02V51Q3800/p1703885853581679
true
2,064,018,208
https://api.github.com/repos/huggingface/datasets/issues/6556
https://github.com/huggingface/datasets/pull/6556
6,556
Fix imagefolder with one image
closed
2
2024-01-03T13:13:02
2024-02-12T21:57:34
2024-01-09T13:06:30
lhoestq
[]
A dataset repository with one image and one metadata file was considered a JSON dataset instead of an ImageFolder dataset. This is because we pick the dataset type with the most compatible data file extensions present in the repository, and in this case it results in a tie, e.g. for https://huggingface.co/datasets/multimodalart/repro_1_image. I fixed this by deprioritizing metadata files in the count. Fix https://github.com/huggingface/datasets/issues/6545
true
2,063,841,286
https://api.github.com/repos/huggingface/datasets/issues/6555
https://github.com/huggingface/datasets/pull/6555
6,555
Do not use Parquet exports if revision is passed
closed
4
2024-01-03T11:33:10
2024-02-02T10:41:33
2024-02-02T10:35:28
albertvillanova
[]
Fix #6554.
true
2,063,839,916
https://api.github.com/repos/huggingface/datasets/issues/6554
https://github.com/huggingface/datasets/issues/6554
6,554
Parquet exports are used even if revision is passed
closed
1
2024-01-03T11:32:26
2024-02-02T10:35:29
2024-02-02T10:35:29
albertvillanova
[ "bug" ]
We should not use Parquet exports if `revision` is passed. I think this is a regression.
false
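For context, a minimal illustration with placeholder names: `revision` pins a git ref of the dataset repository, so the loader should read the repo files at that ref rather than the Parquet export generated from `main`.

```python
from datasets import load_dataset

# revision pins a branch, tag, or commit of the dataset repo; per the bug
# above, 2.16 wrongly kept using the Parquet export of main instead.
ds = load_dataset("user/dataset", revision="my-branch")  # placeholder repo/ref
```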
2,063,474,183
https://api.github.com/repos/huggingface/datasets/issues/6553
https://github.com/huggingface/datasets/issues/6553
6,553
Cannot import name 'load_dataset' from module 'datasets'
closed
2
2024-01-03T08:18:21
2024-02-21T00:38:24
2024-02-21T00:38:24
ciaoyizhen
[]
### Describe the bug
Installed with `python -m pip install datasets`.
### Steps to reproduce the bug
`from datasets import load_dataset`
### Expected behavior
It doesn't work.
### Environment info
datasets version == 2.15.0
python == 3.10.12
Linux (version unknown)
false
2,063,157,187
https://api.github.com/repos/huggingface/datasets/issues/6552
https://github.com/huggingface/datasets/issues/6552
6,552
Loading a dataset from Google Colab hangs at "Resolving data files".
closed
2
2024-01-03T02:18:17
2024-01-08T10:09:04
2024-01-08T10:09:04
KelSolaar
[]
### Describe the bug Hello, I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`: ![image](https://github.com/huggingface/datasets/assets/99779/7175ad85-e571-46ed-9f87-92653985777d) It is happening when the `_get_origin_metadata` definition is invoked: ```python def _get_origin_metadata( data_files: List[str], max_workers=64, download_config: Optional[DownloadConfig] = None, ) -> Tuple[str]: return thread_map( partial(_get_single_origin_metadata, download_config=download_config), data_files, max_workers=max_workers, tqdm_class=hf_tqdm, desc="Resolving data files", disable=len(data_files) <= 16, ``` The thread is then stuck at `waiter.acquire()` in the builtin `threading.py` file. I can load the dataset just fine on my machine. Cheers, Thomas ### Steps to reproduce the bug In Google Colab: ```python !pip install datasets from datasets import load_dataset dataset = load_dataset("colour-science/color-checker-detection-dataset") ``` ### Expected behavior The dataset should be loaded. ### Environment info - `datasets` version: 2.16.1 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.20.1 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
false
2,062,768,400
https://api.github.com/repos/huggingface/datasets/issues/6551
https://github.com/huggingface/datasets/pull/6551
6,551
Fix parallel downloads for datasets without scripts
closed
4
2024-01-02T18:06:18
2024-01-06T20:14:57
2024-01-03T13:19:48
lhoestq
[]
Enable parallel downloads using multiprocessing when `num_proc` is passed to `load_dataset`. It was enabled for datasets with scripts already (if they passed lists to `dl_manager.download`) but not for no-script datasets (we pass dicts {split: [list of files]} to `dl_manager.download` for those). I fixed this by parallelising on the lists contained in the data files dicts when possible. I also added a context manager `stack_multiprocessing_download_progress_bars` in `DownloadManager` to stack the progress bars of the downloads (from `cached_path(...)` calls). Otherwise the progress bars overlap each other with an annoying flickering effect.
true
2,062,556,493
https://api.github.com/repos/huggingface/datasets/issues/6550
https://github.com/huggingface/datasets/pull/6550
6,550
Multi gpu docs
closed
4
2024-01-02T15:11:58
2024-01-31T13:45:15
2024-01-31T13:38:59
lhoestq
[]
after discussions in https://github.com/huggingface/datasets/pull/6415
true
2,062,420,259
https://api.github.com/repos/huggingface/datasets/issues/6549
https://github.com/huggingface/datasets/issues/6549
6,549
Loading from hf hub with clearer error message
open
1
2024-01-02T13:26:34
2024-01-02T14:06:49
null
thomwolf
[ "enhancement" ]
### Feature request Shouldn't this kinda work ? ``` Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json") ``` I got an error ``` File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, allowed_extensions, download_config) 378 if allowed_extensions is not None: 379 error_msg += f" with any supported extension {list(allowed_extensions)}" --> 380 raise FileNotFoundError(error_msg) 381 return out FileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json' (I'm logged in) ``` Fix: the correct path is ``` hf://datasets/HuggingFaceTB/eval_data/eval_data_context_and_answers.json ``` Proposal: raise a clearer error ### Motivation Clearer error message ### Your contribution Can open a PR
false
2,061,047,984
https://api.github.com/repos/huggingface/datasets/issues/6548
https://github.com/huggingface/datasets/issues/6548
6,548
Skip if a dataset has issues
open
1
2023-12-31T12:41:26
2024-01-02T10:33:17
null
hadianasliwa
[]
### Describe the bug
Hello everyone, I'm using **load_dataset** from **huggingface** to download datasets and I'm facing an issue: the download starts but it reaches some state and then fails with the following error:

```
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10f96f67d609c5d442980dc9/20231101.ext/train-00000-of-00001.parquet
Failed to resolve 'huggingface.co' ([Errno -3] Temporary failure in name resolution)"))')))
```

![image](https://github.com/huggingface/datasets/assets/143214684/8847d9cb-529e-4eda-9c76-282713dfa3af)

So I was wondering: is there a parameter that can be passed to load_dataset() to skip files that can't be downloaded?
### Steps to reproduce the bug
A parameter passed to load_dataset() of huggingface to skip files that can't be downloaded.
### Expected behavior
load_dataset() finishes without error
### Environment info
None
false
2,060,796,927
https://api.github.com/repos/huggingface/datasets/issues/6547
https://github.com/huggingface/datasets/pull/6547
6,547
set dev version
closed
2
2023-12-30T16:47:17
2023-12-30T16:53:38
2023-12-30T16:47:27
lhoestq
[]
null
true
2,060,796,369
https://api.github.com/repos/huggingface/datasets/issues/6546
https://github.com/huggingface/datasets/pull/6546
6,546
Release: 2.16.1
closed
2
2023-12-30T16:44:51
2023-12-30T16:52:07
2023-12-30T16:45:52
lhoestq
[]
null
true
2,060,789,507
https://api.github.com/repos/huggingface/datasets/issues/6545
https://github.com/huggingface/datasets/issues/6545
6,545
`image` column not automatically inferred if image dataset only contains 1 image
closed
0
2023-12-30T16:17:29
2024-01-09T13:06:31
2024-01-09T13:06:31
apolinario
[]
### Describe the bug By default, the standard Image Dataset maps out `file_name` to `image` when loading an Image Dataset. However, if the dataset contains only 1 image, this does not take place ### Steps to reproduce the bug Input (dataset with one image `multimodalart/repro_1_image`) ```py from datasets import load_dataset dataset = load_dataset("multimodalart/repro_1_image") dataset ``` Output: ```py DatasetDict({ train: Dataset({ features: ['file_name', 'prompt'], num_rows: 1 }) }) ``` Input (dataset with 2+ images `multimodalart/repro_2_image`) ```py from datasets import load_dataset dataset = load_dataset("multimodalart/repro_2_image") dataset ``` Output: ```py DatasetDict({ train: Dataset({ features: ['image', 'prompt'], num_rows: 2 }) }) ``` ### Expected behavior Expected to map `file_name` → `image` for all dataset sizes, including 1. ### Environment info Both latest main and 2.16.0
false
2,060,782,594
https://api.github.com/repos/huggingface/datasets/issues/6544
https://github.com/huggingface/datasets/pull/6544
6,544
Fix custom configs from script
closed
3
2023-12-30T15:51:25
2024-01-02T11:02:39
2023-12-30T16:09:49
lhoestq
[]
We should not use the Parquet export when the user is passing `config_kwargs`. I also fixed a regression that would disallow creating a custom config when a dataset has multiple predefined configs. Fix https://github.com/huggingface/datasets/issues/6533
true
2,060,776,174
https://api.github.com/repos/huggingface/datasets/issues/6543
https://github.com/huggingface/datasets/pull/6543
6,543
Fix dl_manager.extract returning FileNotFoundError
closed
2
2023-12-30T15:24:50
2023-12-30T16:00:06
2023-12-30T15:53:59
lhoestq
[]
The dl_manager base path is remote (e.g. an `hf://` path), so local cached paths should be passed as absolute paths. This could happen if users provide a relative path as `cache_dir`. Fix https://github.com/huggingface/datasets/issues/6536
true
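A small user-side sketch of the safe pattern, with a placeholder dataset name: since the download manager's base path is remote, a relative `cache_dir` could be resolved against `hf://` before this fix, so an absolute path avoids the `FileNotFoundError`.

```python
import os

from datasets import load_dataset

# An absolute cache_dir is unambiguous regardless of the remote base path.
ds = load_dataset("user/dataset", cache_dir=os.path.abspath("my_cache"))  # placeholders
```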
2,059,198,575
https://api.github.com/repos/huggingface/datasets/issues/6542
https://github.com/huggingface/datasets/issues/6542
6,542
Datasets: wikipedia 20220301.en error
closed
2
2023-12-29T08:34:51
2024-01-02T13:21:06
2024-01-02T13:20:30
ppx666
[]
### Describe the bug
When I used load_dataset to download this dataset, the following error occurred. The main problem is that the target data does not exist.
### Steps to reproduce the bug
1. I tried downloading directly:
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurred:
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/
If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage:
`load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')`
```
2. I modified the code as prompted:
```python
wiki_dataset = load_dataset('wikipedia', '20220301.en', beam_runner='DirectRunner')
```
An exception occurred:
```
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/enwiki/20220301/dumpstatus.json
```
### Expected behavior
I searched the parent directory of the corresponding URL, but there was no corresponding "20220301" directory. I really need this dataset and hope a download method can be provided.
### Environment info
python 3.8
datasets 2.16.0
apache-beam 2.52.0
dill 0.3.7
false
2,058,983,826
https://api.github.com/repos/huggingface/datasets/issues/6541
https://github.com/huggingface/datasets/issues/6541
6,541
Dataset not loading successfully.
closed
4
2023-12-29T01:35:47
2024-01-17T00:40:46
2024-01-17T00:40:45
hisushanta
[]
### Describe the bug When I run down the below code shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning' I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099) ### Steps to reproduce the bug ## Reproduction Hi, please check this line of code, when I run Show attribute error. ``` from datasets import load_dataset from transformers import WhisperProcessor, WhisperForConditionalGeneration # Select an audio file and read it: ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") audio_sample = ds[0]["audio"] waveform = audio_sample["array"] sampling_rate = audio_sample["sampling_rate"] # Load the Whisper model in Hugging Face format: processor = WhisperProcessor.from_pretrained("openai/whisper-tiny.en") model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny.en") # Use the model and processor to transcribe the audio: input_features = processor( waveform, sampling_rate=sampling_rate, return_tensors="pt" ).input_features # Generate token ids predicted_ids = model.generate(input_features) # Decode token ids to text transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True) transcription[0] ``` **Attribute Error** ``` AttributeError Traceback (most recent call last) Cell In[9], line 6 4 # Select an audio file and read it: 5 ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") ----> 6 audio_sample = ds[0]["audio"] 7 waveform = audio_sample["array"] 8 sampling_rate = audio_sample["sampling_rate"] File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2795, in Dataset.__getitem__(self, key) 2793 def __getitem__(self, key): # noqa: F811 2794 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2795 return self._getitem(key) File /opt/pytorch/lib/python3.8/site-packages/datasets/arrow_dataset.py:2780, in Dataset._getitem(self, key, **kwargs) 2778 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs) 2779 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2780 formatted_output = format_table( 2781 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2782 ) 2783 return formatted_output File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:629, in format_table(table, key, formatter, format_columns, output_all_columns) 627 python_formatter = PythonFormatter(features=formatter.features) 628 if format_columns is None: --> 629 return formatter(pa_table, query_type=query_type) 630 elif query_type == "column": 631 if key in format_columns: File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:396, in Formatter.__call__(self, pa_table, query_type) 394 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 395 if query_type == "row": --> 396 return self.format_row(pa_table) 397 elif query_type == "column": 398 return self.format_column(pa_table) File /opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:437, in PythonFormatter.format_row(self, pa_table) 435 return LazyRow(pa_table, self) 436 row = self.python_arrow_extractor().extract_row(pa_table) --> 437 row = self.python_features_decoder.decode_row(row) 438 return row File 
/opt/pytorch/lib/python3.8/site-packages/datasets/formatting/formatting.py:215, in PythonFeaturesDecoder.decode_row(self, row) 214 def decode_row(self, row: dict) -> dict: --> 215 return self.features.decode_example(row) if self.features else row File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1917, in Features.decode_example(self, example, token_per_repo_id) 1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1904 """Decode example with custom feature decoding. 1905 1906 Args: (...) 1914 `dict[str, Any]` 1915 """ -> 1917 return { 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1919 if self._column_requires_decoding[column_name] 1920 else value 1921 for column_name, (feature, value) in zip_dict( 1922 {key: value for key, value in self.items() if key in example}, example 1923 ) 1924 } File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1918, in <dictcomp>(.0) 1903 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1904 """Decode example with custom feature decoding. 1905 1906 Args: (...) 1914 `dict[str, Any]` 1915 """ 1917 return { -> 1918 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1919 if self._column_requires_decoding[column_name] 1920 else value 1921 for column_name, (feature, value) in zip_dict( 1922 {key: value for key, value in self.items() if key in example}, example 1923 ) 1924 } File /opt/pytorch/lib/python3.8/site-packages/datasets/features/features.py:1339, in decode_nested_example(schema, obj, token_per_repo_id) 1336 elif isinstance(schema, (Audio, Image)): 1337 # we pass the token to read and decode files from private repositories in streaming mode 1338 if obj is not None and schema.decode: -> 1339 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) 1340 return obj File /opt/pytorch/lib/python3.8/site-packages/datasets/features/audio.py:191, in Audio.decode_example(self, value, token_per_repo_id) 189 array = array.T 190 if self.mono: --> 191 array = librosa.to_mono(array) 192 if self.sampling_rate and self.sampling_rate != sampling_rate: 193 array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate) File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:78, in attach.<locals>.__getattr__(name) 76 submod_path = f"{package_name}.{attr_to_modules[name]}" 77 submod = importlib.import_module(submod_path) ---> 78 attr = getattr(submod, name) 80 # If the attribute lives in a file (module) with the same 81 # name as the attribute, ensure that the attribute and *not* 82 # the module is accessible on the package. 83 if name == attr_to_modules[name]: File /opt/pytorch/lib/python3.8/site-packages/lazy_loader/__init__.py:77, in attach.<locals>.__getattr__(name) 75 elif name in attr_to_modules: 76 submod_path = f"{package_name}.{attr_to_modules[name]}" ---> 77 submod = importlib.import_module(submod_path) 78 attr = getattr(submod, name) 80 # If the attribute lives in a file (module) with the same 81 # name as the attribute, ensure that the attribute and *not* 82 # the module is accessible on the package. 
File /usr/lib/python3.8/importlib/__init__.py:127, in import_module(name, package) 125 break 126 level += 1 --> 127 return _bootstrap._gcd_import(name[level:], package, level) File <frozen importlib._bootstrap>:1014, in _gcd_import(name, package, level) File <frozen importlib._bootstrap>:991, in _find_and_load(name, import_) File <frozen importlib._bootstrap>:975, in _find_and_load_unlocked(name, import_) File <frozen importlib._bootstrap>:671, in _load_unlocked(spec) File <frozen importlib._bootstrap_external>:848, in exec_module(self, module) File <frozen importlib._bootstrap>:219, in _call_with_frames_removed(f, *args, **kwds) File /opt/pytorch/lib/python3.8/site-packages/librosa/core/audio.py:13 11 import audioread 12 import numpy as np ---> 13 import scipy.signal 14 import soxr 15 import lazy_loader as lazy File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/__init__.py:323 314 from ._spline import ( # noqa: F401 315 cspline2d, 316 qspline2d, (...) 319 symiirorder2, 320 ) 322 from ._bsplines import * --> 323 from ._filter_design import * 324 from ._fir_filter_design import * 325 from ._ltisys import * File /opt/pytorch/lib/python3.8/site-packages/scipy/signal/_filter_design.py:16 13 from numpy.polynomial.polynomial import polyval as npp_polyval 14 from numpy.polynomial.polynomial import polyvalfromroots ---> 16 from scipy import special, optimize, fft as sp_fft 17 from scipy.special import comb 18 from scipy._lib._util import float_factorial File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/__init__.py:405 1 """ 2 ===================================================== 3 Optimization and root finding (:mod:`scipy.optimize`) (...) 401 402 """ 404 from ._optimize import * --> 405 from ._minimize import * 406 from ._root import * 407 from ._root_scalar import * File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_minimize.py:26 24 from ._trustregion_krylov import _minimize_trust_krylov 25 from ._trustregion_exact import _minimize_trustregion_exact ---> 26 from ._trustregion_constr import _minimize_trustregion_constr 28 # constrained minimization 29 from ._lbfgsb_py import _minimize_lbfgsb File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/__init__.py:4 1 """This module contains the equality constrained SQP solver.""" ----> 4 from .minimize_trustregion_constr import _minimize_trustregion_constr 6 __all__ = ['_minimize_trustregion_constr'] File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_trustregion_constr/minimize_trustregion_constr.py:5 3 from scipy.sparse.linalg import LinearOperator 4 from .._differentiable_functions import VectorFunction ----> 5 from .._constraints import ( 6 NonlinearConstraint, LinearConstraint, PreparedConstraint, strict_bounds) 7 from .._hessian_update_strategy import BFGS 8 from .._optimize import OptimizeResult File /opt/pytorch/lib/python3.8/site-packages/scipy/optimize/_constraints.py:8 6 from ._optimize import OptimizeWarning 7 from warnings import warn, catch_warnings, simplefilter ----> 8 from numpy.testing import suppress_warnings 9 from scipy.sparse import issparse 12 def _arr_to_scalar(x): 13 # If x is a numpy array, return x.item(). This will 14 # fail if the array has more than one element. File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/__init__.py:11 8 from unittest import TestCase 10 from . 
import _private ---> 11 from ._private.utils import * 12 from ._private.utils import (_assert_valid_refcount, _gen_alignment_data) 13 from ._private import extbuild, decorators as dec File /opt/pytorch/lib/python3.8/site-packages/numpy/testing/_private/utils.py:480 476 pprint.pprint(desired, msg) 477 raise AssertionError(msg.getvalue()) --> 480 @np._no_nep50_warning() 481 def assert_almost_equal(actual,desired,decimal=7,err_msg='',verbose=True): 482 """ 483 Raises an AssertionError if two items are not equal up to desired 484 precision. (...) 548 549 """ 550 __tracebackhide__ = True # Hide traceback for py.test File /opt/pytorch/lib/python3.8/site-packages/numpy/__init__.py:313, in __getattr__(attr) 305 raise AttributeError(__former_attrs__[attr]) 307 # Importing Tester requires importing all of UnitTest which is not a 308 # cheap import Since it is mainly used in test suits, we lazy import it 309 # here to save on the order of 10 ms of import time for most users 310 # 311 # The previous way Tester was imported also had a side effect of adding 312 # the full `numpy.testing` namespace --> 313 if attr == 'testing': 314 import numpy.testing as testing 315 return testing AttributeError: module 'numpy' has no attribute '_no_nep50_warning' ``` ### Expected behavior ``` ' Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.' ``` Also, make sure this script is provided for your official website so please update: [script](https://huggingface.co/docs/transformers/model_doc/whisper) ### Environment info **System Info** * transformers -> 4.36.1 * datasets -> 2.15.0 * huggingface_hub -> 0.19.4 * python -> 3.8.10 * accelerate -> 0.25.0 * pytorch -> 2.0.1+cpu * Using GPU in Script -> No
false
2,058,965,157
https://api.github.com/repos/huggingface/datasets/issues/6540
https://github.com/huggingface/datasets/issues/6540
6,540
Extreme inefficiency for `save_to_disk` when merging datasets
open
1
2023-12-29T00:44:35
2023-12-30T15:05:48
null
KatarinaYuan
[]
### Describe the bug Hi, I tried to merge 22M sequences of data in total, where each sequence has a maximum length of 2000. I found that merging these datasets and then calling `save_to_disk` is extremely slow because of flattening the indices. I'm wondering if you have any suggestions or guidance on this. Thank you very much! ### Steps to reproduce the bug The source data is too big to demonstrate ### Expected behavior The source data is too big to demonstrate ### Environment info python 3.9.0 datasets 2.7.0 pytorch 2.0.0 tokenizers 0.13.1 transformers 4.31.0
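Below is a minimal sketch of the merge-then-save pattern being described, with hypothetical shard files standing in for the real data; the cost sits in `save_to_disk` flattening the indices mapping created by operations like `shuffle`/`select`:

```python
from datasets import load_dataset, concatenate_datasets

parts = [
    load_dataset("json", data_files=path, split="train")
    for path in ["shard_a.jsonl", "shard_b.jsonl"]  # hypothetical shards
]
merged = concatenate_datasets(parts).shuffle(seed=42)

# shuffle() only creates an indices mapping; save_to_disk() then flattens it,
# rewriting every row, which is where the reported hours are spent
merged.save_to_disk("merged_dataset")
```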
false
2,058,493,960
https://api.github.com/repos/huggingface/datasets/issues/6539
https://github.com/huggingface/datasets/issues/6539
6,539
'Repo card metadata block was not found' when loading a pragmeval dataset
open
0
2023-12-28T14:18:25
2023-12-28T14:18:37
null
lambdaofgod
[]
### Describe the bug I can't load dataset subsets of 'pragmeval'. The funny thing is I ran the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) and it works just fine. I tried to install exactly the same packages that are installed on colab using poetry, so my environment info only differs from the one from colab in linux version - I still get the same bug outside colab. ### Steps to reproduce the bug Install dependencies with poetry pyproject.toml ``` [tool.poetry] name = "project" version = "0.1.0" description = "" authors = [] [tool.poetry.dependencies] python = "^3.10" datasets = "2.16.0" pandas = "1.5.3" pyarrow = "10.0.1" huggingface-hub = "0.19.4" fsspec = "2023.6.0" [build-system] requires = ["poetry-core"] build-backend = "poetry.core.masonry.api" ``` `poetry run python -c "import datasets; print(datasets.get_dataset_config_names('pragmeval'))` prints ['default'] ### Expected behavior The command should print ``` ['emergent', 'emobank-arousal', 'emobank-dominance', 'emobank-valence', 'gum', 'mrda', 'pdtb', 'persuasiveness-claimtype', 'persuasiveness-eloquence', 'persuasiveness-premisetype', 'persuasiveness-relevance', 'persuasiveness-specificity', 'persuasiveness-strength', 'sarcasm', 'squinky-formality', 'squinky-implicature', 'squinky-informativeness', 'stac', 'switchboard', 'verifiability'] ``` ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
false
2,057,377,630
https://api.github.com/repos/huggingface/datasets/issues/6538
https://github.com/huggingface/datasets/issues/6538
6,538
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
closed
15
2023-12-27T13:31:16
2024-01-03T10:06:47
2024-01-03T10:04:58
Sonali-Behera-TRT
[]
### Describe the bug While importing from packages getting the error Code: ``` import os import torch from datasets import load_dataset, Dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging ) from peft import LoraConfig, PeftModel from trl import SFTTrainer from huggingface_hub import login import pandas as pd ``` Error: ```` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) Cell In[5], line 14 4 from transformers import ( 5 AutoModelForCausalLM, 6 AutoTokenizer, (...) 11 logging 12 ) 13 from peft import LoraConfig, PeftModel ---> 14 from trl import SFTTrainer 15 from huggingface_hub import login 16 import pandas as pd File /opt/conda/lib/python3.10/site-packages/trl/__init__.py:21 8 from .import_utils import ( 9 is_diffusers_available, 10 is_npu_available, (...) 13 is_xpu_available, 14 ) 15 from .models import ( 16 AutoModelForCausalLMWithValueHead, 17 AutoModelForSeq2SeqLMWithValueHead, 18 PreTrainedModelWrapper, 19 create_reference_model, 20 ) ---> 21 from .trainer import ( 22 DataCollatorForCompletionOnlyLM, 23 DPOTrainer, 24 IterativeSFTTrainer, 25 PPOConfig, 26 PPOTrainer, 27 RewardConfig, 28 RewardTrainer, 29 SFTTrainer, 30 ) 33 if is_diffusers_available(): 34 from .models import ( 35 DDPOPipelineOutput, 36 DDPOSchedulerOutput, 37 DDPOStableDiffusionPipeline, 38 DefaultDDPOStableDiffusionPipeline, 39 ) File /opt/conda/lib/python3.10/site-packages/trl/trainer/__init__.py:44 42 from .ppo_trainer import PPOTrainer 43 from .reward_trainer import RewardTrainer, compute_accuracy ---> 44 from .sft_trainer import SFTTrainer 45 from .training_configs import RewardConfig File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:23 21 import torch.nn as nn 22 from datasets import Dataset ---> 23 from datasets.arrow_writer import SchemaInferenceError 24 from datasets.builder import DatasetGenerationError 25 from transformers import ( 26 AutoModelForCausalLM, 27 AutoTokenizer, (...) 33 TrainingArguments, 34 ) ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py ```` transformers version: 4.36.2 python version: 3.10.12 datasets version: 2.16.1 ### Steps to reproduce the bug 1. Install packages ``` !pip install -U datasets trl accelerate peft bitsandbytes transformers trl huggingface_hub ``` 2. import packages ``` import os import torch from datasets import load_dataset, Dataset from transformers import ( AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, HfArgumentParser, TrainingArguments, pipeline, logging ) from peft import LoraConfig, PeftModel from trl import SFTTrainer from huggingface_hub import login import pandas as pd ``` ### Expected behavior No error while importing ### Environment info - `datasets` version: 2.16.0 - Platform: Linux-5.15.133+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.20.1 - PyArrow version: 11.0.0 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
false
2,057,132,173
https://api.github.com/repos/huggingface/datasets/issues/6537
https://github.com/huggingface/datasets/issues/6537
6,537
Adding support for netCDF (*.nc) files
open
3
2023-12-27T09:27:29
2023-12-27T20:46:53
null
shermansiu
[ "enhancement" ]
### Feature request netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`. ### Motivation When uploading *.nc files onto Huggingface Hub through the `datasets` API, I would like to be able to preview the dataset without converting it to another format. ### Your contribution I can submit a PR, provided I have the time.
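Until native support lands, one hedged way to bridge netCDF files into `datasets` is to go through `xarray` and `pandas`; the file name and its contents below are hypothetical:

```python
import xarray as xr
from datasets import Dataset

nc = xr.open_dataset("data.nc")        # hypothetical netCDF file
df = nc.to_dataframe().reset_index()   # flatten the labelled dimensions into columns
hf_ds = Dataset.from_pandas(df)        # materialize as an Arrow-backed Dataset
```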
false
2,056,863,239
https://api.github.com/repos/huggingface/datasets/issues/6536
https://github.com/huggingface/datasets/issues/6536
6,536
datasets.load_dataset raises FileNotFoundError for datasets==2.16.0
closed
2
2023-12-27T03:15:48
2023-12-30T18:58:04
2023-12-30T15:54:00
ArvinZhuang
[]
### Describe the bug Seems `datasets.load_dataset` raises FileNotFoundError for some hub datasets with the latest `datasets==2.16.0` ### Steps to reproduce the bug For example `pip install datasets==2.16.0` then ```python import datasets datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache1')["train"] ``` This will raise: ```bash Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/load.py", line 2545, in load_dataset builder_instance.download_and_prepare( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1003, in download_and_prepare self._download_and_prepare( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1076, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 43, in _split_generators data_files = dl_manager.download_and_extract(self.config.data_files) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 566, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 539, in extract extracted_paths = map_nested( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 466, in map_nested mapped = [ File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 467, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar] File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 370, in _single_map_nested return function(data_struct) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 451, in _download out = cached_path(url_or_filename, download_config=download_config) File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 188, in cached_path output_path = get_from_cache( File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 570, in get_from_cache raise FileNotFoundError(f"Couldn't find file at {url}") FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wentingzhao/anthropic-hh-first-prompt/resolve/11b393a5545f706a357ebcd4a5285d93db176715/cache1/downloads/87d66c365626feca116cba323c4856c9aae056e4503f09f23e34aa085eb9de15 ``` However, seems it works fine for some datasets, for example, if works fine for `datasets.load_dataset("ag_news", cache_dir='cache2')["test"]` But the dataset works fine for datasets==2.15.0, for example `pip install datasets==2.15.0`, then ```python import datasets datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache3')["train"] Dataset({ features: ['user', 'system', 'source'], num_rows: 
8552 }) ``` ### Expected behavior 2.16.0 should work the same as 2.15.0 for all datasets ### Environment info Python 3.9 conda env, tested on macOS and Linux
false
2,056,264,339
https://api.github.com/repos/huggingface/datasets/issues/6535
https://github.com/huggingface/datasets/issues/6535
6,535
IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT
open
3
2023-12-26T10:14:33
2024-02-05T08:42:31
null
MahavirDabas18
[]
### Describe the bug I am trying to fine-tune the t5 model on the paraphrasing task. While running the same code without- model = get_peft_model(model, config) the model trains without any issues. However, using the model returned from get_peft_model raises the following error due to datasets- IndexError: Invalid key: 47682 is out of bounds for size 0. I had raised this in https://github.com/huggingface/peft/issues/1299#issue-2056173386 and they suggested that I raise it here. Here is the complete error- IndexError Traceback (most recent call last) in <cell line: 1>() ----> 1 trainer.train() 11 frames [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs) 1553 hf_hub_utils.enable_progress_bars() 1554 else: -> 1555 return inner_training_loop( 1556 args=args, 1557 resume_from_checkpoint=resume_from_checkpoint, [/usr/local/lib/python3.10/dist-packages/transformers/trainer.py](https://localhost:8080/#) in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1836 1837 step = -1 -> 1838 for step, inputs in enumerate(epoch_iterator): 1839 total_batched_samples += 1 1840 if rng_to_sync: [/usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py](https://localhost:8080/#) in iter(self) 446 # We iterate one batch ahead to check when we are at the end 447 try: --> 448 current_batch = next(dataloader_iter) 449 except StopIteration: 450 yield [/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in next(self) 628 # TODO(https://github.com/pytorch/pytorch/issues/76750) 629 self._reset() # type: ignore[call-arg] --> 630 data = self._next_data() 631 self._num_yielded += 1 632 if self._dataset_kind == _DatasetKind.Iterable and \ [/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py](https://localhost:8080/#) in _next_data(self) 672 def _next_data(self): 673 index = self._next_index() # may raise StopIteration --> 674 data = self._dataset_fetcher.fetch(index) # may raise StopIteration 675 if self._pin_memory: 676 data = _utils.pin_memory.pin_memory(data, self._pin_memory_device) [/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py](https://localhost:8080/#) in fetch(self, possibly_batched_index) 47 if self.auto_collation: 48 if hasattr(self.dataset, "getitems") and self.dataset.getitems: ---> 49 data = self.dataset.getitems(possibly_batched_index) 50 else: 51 data = [self.dataset[idx] for idx in possibly_batched_index] [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in getitems(self, keys) 2802 def getitems(self, keys: List) -> List: 2803 """Can be used to get a batch using a list of integers indices.""" -> 2804 batch = self.getitem(keys) 2805 n_examples = len(batch[next(iter(batch))]) 2806 return [{col: array[i] for col, array in batch.items()} for i in range(n_examples)] [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in getitem(self, key) 2798 def getitem(self, key): # noqa: F811 2799 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2800 return self._getitem(key) 2801 2802 def getitems(self, keys: List) -> List: [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _getitem(self, key, **kwargs) 2782 format_kwargs = format_kwargs if format_kwargs is not None 
else {} 2783 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs) -> 2784 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) 2785 formatted_output = format_table( 2786 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in query_table(table, key, indices) 581 else: 582 size = indices.num_rows if indices is not None else table.num_rows --> 583 _check_valid_index_key(key, size) 584 # Query the main table 585 if indices is None: [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size) 534 elif isinstance(key, Iterable): 535 if len(key) > 0: --> 536 _check_valid_index_key(int(max(key)), size=size) 537 _check_valid_index_key(int(min(key)), size=size) 538 else: [/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py](https://localhost:8080/#) in _check_valid_index_key(key, size) 524 if isinstance(key, int): 525 if (key < 0 and key + size < 0) or (key >= size): --> 526 raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") 527 return 528 elif isinstance(key, slice): IndexError: Invalid key: 47682 is out of bounds for size 0 ### Steps to reproduce the bug device = "cuda:0" if torch.cuda.is_available() else "cpu" #defining model name for tokenizer and model loading model_name= "t5-small" #loading the tokenizer tokenizer = AutoTokenizer.from_pretrained(model_name) def preprocess_function(data, tokenizer): inputs = [f"Paraphrase this sentence: {doc}" for doc in data["text"]] model_inputs = tokenizer(inputs, max_length=150, truncation=True) labels = [ast.literal_eval(i)[0] for i in data['paraphrases']] labels = tokenizer(labels, max_length=150, truncation=True) model_inputs["labels"] = labels["input_ids"] return model_inputs train_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000)) val_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000,55000)) tokenized_train = train_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True) tokenized_val = val_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True) def print_trainable_parameters(model): """ Prints the number of trainable parameters in the model. 
""" trainable_params = 0 all_param = 0 for _, param in model.named_parameters(): all_param += param.numel() if param.requires_grad: trainable_params += param.numel() print( f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}" ) config = LoraConfig( r=16, #attention heads lora_alpha=32, #alpha scaling lora_dropout=0.05, bias="none", task_type="Seq2Seq" ) #loading the model model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device) model = get_peft_model(model, config) print_trainable_parameters(model) #loading the data collator data_collator = DataCollatorForSeq2Seq( tokenizer=tokenizer, model=model, label_pad_token_id=-100, padding="longest" ) #defining the training arguments training_args = Seq2SeqTrainingArguments( output_dir=os.getcwd(), evaluation_strategy="epoch", save_strategy="epoch", learning_rate=2e-5, per_device_train_batch_size=16, per_device_eval_batch_size=16, weight_decay=1e-3, save_total_limit=3, load_best_model_at_end=True, num_train_epochs=1, predict_with_generate=True ) def compute_metric_with_extra(tokenizer): def compute_metrics(eval_preds): metric = evaluate.load('rouge') preds, labels = eval_preds # decode preds and labels labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # rougeLSum expects newline after each sentence decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds] decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels] result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True) return result return compute_metrics trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=tokenized_train, eval_dataset=tokenized_val, tokenizer=tokenizer, data_collator=data_collator, compute_metrics= compute_metric_with_extra(tokenizer) ) trainer.train() ### Expected behavior I would want the trainer to train normally as it was before I used- model = get_peft_model(model, config) ### Environment info datasets version- 2.16.0 peft version- 0.7.1 transformers version- 4.35.2 accelerate version- 0.25.0 python- 3.10.12 enviroment- google colab
false
2,056,002,548
https://api.github.com/repos/huggingface/datasets/issues/6534
https://github.com/huggingface/datasets/issues/6534
6,534
How to configure multiple folders in the same zip package
open
1
2023-12-26T03:56:20
2023-12-26T06:31:16
null
d710055071
[]
How should I write the "configs" section in the README when all the data, such as the train and test splits, is in a single zip file: a train folder and a test folder inside data.zip?
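For reference, a hedged sketch of the README YAML following the manual-configuration docs, assuming the archive is unpacked so that `train/` and `test/` sit at the repo root; whether paths inside the zip can be referenced directly is exactly the open question here:

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: "train/*"
  - split: test
    path: "test/*"
```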
false
2,055,929,101
https://api.github.com/repos/huggingface/datasets/issues/6533
https://github.com/huggingface/datasets/issues/6533
6,533
ted_talks_iwslt | Error: Config name is missing
closed
2
2023-12-26T00:38:18
2023-12-30T18:58:21
2023-12-30T16:09:50
rayliuca
[]
### Describe the bug Running `load_dataset` with the newest `datasets` library, as shown below, on ted_talks_iwslt with year-pair data throws the error "Config name is missing". See also: https://huggingface.co/datasets/ted_talks_iwslt/discussions/3 This is likely caused by #6493, where the `and not config_kwargs` part of the if condition was removed: https://github.com/huggingface/datasets/blob/ef3b5dd3633995c95d77f35fb17f89ff44990bc4/src/datasets/builder.py#L512 ### Steps to reproduce the bug Run: ```python load_dataset("ted_talks_iwslt", language_pair=("ja", "en"), year="2015") ``` ### Expected behavior The data loads without error. ### Environment info datasets 2.16.0
false
2,055,631,201
https://api.github.com/repos/huggingface/datasets/issues/6532
https://github.com/huggingface/datasets/issues/6532
6,532
[Feature request] Indexing datasets by a custom id field to enable random access to dataset items via the id
open
10
2023-12-25T11:37:10
2025-05-05T13:25:24
null
Yu-Shi
[ "enhancement" ]
### Feature request Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access by row, but not via these kinds of id fields. I wonder if it is possible to add support for indexing by a custom "id-like" field to enable random access via such ids. The ids may be numbers or strings. ### Motivation In some cases, especially during inference/evaluation, I may want to look up the item that has a specified id, defined by the dataset itself. For example, in a typical re-ranking setting in information retrieval, the user may want to re-rank the set of candidate documents for each query. The input is usually presented in a TREC-style run file, with the following format: ``` <qid> Q0 <docno> <rank> <score> <tag> ``` The re-ranking program should be able to fetch the queries and documents according to the `<qid>` and `<docno>`, which are the original ids defined in the query/document datasets. To accomplish this, I have to iterate over the whole HF dataset to get the mapping from real ids to row ids every time I start the program, which is time-consuming. Thus I would like HF datasets to provide an option for users to index by a custom id column, not only by row. ### Your contribution I'm not an expert in this project and I'm afraid I'm not able to contribute code.
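As a stopgap for the requested feature, a minimal sketch of building the id-to-row mapping once and caching it in memory; the dataset, config, split, and column names are illustrative:

```python
from datasets import load_dataset

ds = load_dataset("BeIR/dbpedia-entity", "corpus", split="corpus")  # illustrative config/split
id_to_row = {doc_id: i for i, doc_id in enumerate(ds["_id"])}  # one full pass at startup

def get_by_id(doc_id: str) -> dict:
    return ds[id_to_row[doc_id]]  # O(1) row lookup afterwards
```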
false
2,055,201,605
https://api.github.com/repos/huggingface/datasets/issues/6531
https://github.com/huggingface/datasets/pull/6531
6,531
Add polars compatibility
closed
7
2023-12-24T20:03:23
2024-03-08T19:29:25
2024-03-08T15:22:58
psmyth94
[]
Hey there, I've just finished adding support for converting and formatting to `polars.DataFrame`. This was in response to the open issue about integrating Polars [#3334](https://github.com/huggingface/datasets/issues/3334). Datasets can be switched to Polars format via `Dataset.set_format("polars")`. I've also included `to_polars` and `from_polars`. All polars functions are guarded by checks on `config.POLARS_AVAILABLE`. A few notes: this only supports `DataFrame`s, not `LazyFrame`s. Lazy support could probably be integrated fairly easily via an `is_lazy` arg in `set_format` and `to_polars`. Let me know your feedback.
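A usage sketch of the API as described in this PR; the names follow the PR text, so treat them as tentative until merged:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": ["x", "y", "z"]})
ds.set_format("polars")        # rows/batches now come back as polars DataFrames
df = ds.to_polars()            # whole-dataset export to a polars.DataFrame
ds2 = Dataset.from_polars(df)  # round-trip back into a Dataset
```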
true
2,054,817,609
https://api.github.com/repos/huggingface/datasets/issues/6530
https://github.com/huggingface/datasets/issues/6530
6,530
Impossible to save a mapped dataset to disk
open
1
2023-12-23T15:18:27
2023-12-24T09:40:30
null
kopyl
[]
### Describe the bug I want to play around with different hyperparameters when training, but I don't want to re-map my dataset with 3 million samples each time for tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py). After I do the mapping like this: ``` train_dataset = train_dataset.map(compute_embeddings_fn, batched=True) train_dataset = train_dataset.map( compute_vae_encodings_fn, batched=True, batch_size=16, ) ``` and try to save it like this: `train_dataset.save_to_disk("test")` I get this error ([full traceback](https://pastebin.com/kq3vt739)): ``` TypeError: Object of type function is not JSON serializable The format kwargs must be JSON serializable, but key 'transform' isn't. ``` What is interesting is that pushing to the Hub works: `train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)` Here is the link to the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset ### Steps to reproduce the bug Here is the self-contained notebook: https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing ### Expected behavior It should be easily saved to disk ### Environment info NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2. [pip freeze](https://pastebin.com/QTNb6iru)
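A hedged workaround sketch for the error above, assuming the non-serializable `transform` was attached via `set_transform`/`with_transform`: clear the format before saving, then re-apply the transform after loading.

```python
# Reset the format so the `transform` callable no longer has to be serialized,
# then save; the transform can be re-applied after load_from_disk().
train_dataset = train_dataset.with_format(None)
train_dataset.save_to_disk("test")
```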
false
2,054,209,449
https://api.github.com/repos/huggingface/datasets/issues/6529
https://github.com/huggingface/datasets/issues/6529
6,529
Impossible to only download a test split
open
2
2023-12-22T16:56:32
2024-02-02T00:05:04
null
ysig
[]
I've spent a significant amount of time trying to locate the split object inside my custom _split_generators() function. Then, after diving [in the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558), I realized that `download_and_prepare` is executed first; the split is only passed to the dataset builder in `as_dataset`. If I'm not missing something, this seems like bad design for the following use case: > Imagine there is a huge dataset that has an evaluation test set and you want to just download it and run it, simply to compare your method. Is there a current workaround that can help me achieve the same result? Thank you,
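One current workaround for this use case is streaming, which avoids `download_and_prepare` materializing every split; the repo id below is hypothetical:

```python
from datasets import load_dataset

test_ds = load_dataset("org/huge-dataset", split="test", streaming=True)  # hypothetical repo
for example in test_ds:
    ...  # evaluate without materializing the train split
```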
false
2,053,996,494
https://api.github.com/repos/huggingface/datasets/issues/6528
https://github.com/huggingface/datasets/pull/6528
6,528
set dev version
closed
2
2023-12-22T14:23:18
2023-12-22T14:31:42
2023-12-22T14:25:34
lhoestq
[]
null
true
2,053,966,748
https://api.github.com/repos/huggingface/datasets/issues/6527
https://github.com/huggingface/datasets/pull/6527
6,527
Release: 2.16.0
closed
2
2023-12-22T13:59:56
2023-12-22T14:24:12
2023-12-22T14:17:55
lhoestq
[]
null
true
2,053,726,451
https://api.github.com/repos/huggingface/datasets/issues/6526
https://github.com/huggingface/datasets/pull/6526
6,526
Preserve order of configs and splits when using Parquet exports
closed
2
2023-12-22T10:35:56
2023-12-22T11:42:22
2023-12-22T11:36:14
albertvillanova
[]
Preserve order of configs and splits, as defined in dataset infos. Fix #6521.
true
2,053,119,357
https://api.github.com/repos/huggingface/datasets/issues/6525
https://github.com/huggingface/datasets/pull/6525
6,525
BBox type
closed
2
2023-12-21T22:13:27
2024-01-11T06:34:51
2023-12-21T22:39:27
lhoestq
[]
see [internal discussion](https://huggingface.slack.com/archives/C02EK7C3SHW/p1703097195609209) Draft to get some feedback on a possible `BBox` feature type that can be used to get object detection bounding boxes data in one format or another. ```python >>> from datasets import load_dataset, BBox >>> ds = load_dataset("svhn", "full_numbers", split="train") >>> ds[0] { 'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=107x46 at 0x126409BE0>, 'digits': {'bbox': [[38, 1, 21, 40], [57, 3, 16, 40]], 'label': [4, 6]} } >>> ds = ds.rename_column("digits", "annotations").cast_column("annotations", BBox(format="coco")) >>> ds[0] { 'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=107x46 at 0x147730070>, 'annotations': [{'bbox': [38, 1, 21, 40], 'category_id': 4}, {'bbox': [57, 3, 16, 40], 'category_id': 6}] } ``` note that it's a type for a list of bounding boxes, not just one - which would be needed to switch from a format to another using type casting.
true
2,053,076,311
https://api.github.com/repos/huggingface/datasets/issues/6524
https://github.com/huggingface/datasets/issues/6524
6,524
Streaming the Pile: Missing Files
closed
1
2023-12-21T21:25:09
2023-12-22T09:17:05
2023-12-22T09:17:05
FelixLabelle
[]
### Describe the bug The Pile does not stream; a "File not found" error is returned. It looks like the Pile's files have been moved. ### Steps to reproduce the bug To reproduce, run the following code: ``` from datasets import load_dataset dataset = load_dataset('EleutherAI/pile', 'en', split='train', streaming=True) next(iter(dataset)) ``` I get the following error: `FileNotFoundError: https://the-eye.eu/public/AI/pile/train/00.jsonl.zst` ### Expected behavior Return the data in a stream. ### Environment info - `datasets` version: 2.12.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 2.0.3
false
2,052,643,484
https://api.github.com/repos/huggingface/datasets/issues/6523
https://github.com/huggingface/datasets/pull/6523
6,523
fix tests
closed
2
2023-12-21T15:36:21
2023-12-21T15:56:54
2023-12-21T15:50:38
lhoestq
[]
null
true
2,052,332,528
https://api.github.com/repos/huggingface/datasets/issues/6522
https://github.com/huggingface/datasets/issues/6522
6,522
Loading HF Hub Dataset (private org repo) fails to load all features
open
0
2023-12-21T12:26:35
2023-12-21T13:24:31
null
versipellis
[]
### Describe the bug When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to Huggingface Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load - I believe the expected behavior is for all `Features` to be loaded by default? ### Steps to reproduce the bug Pushing the data. `data_concat` is a `list` of `dict`s. ```python for datum in data_concat: datum_tags = {d["key"]: d["value"] for d in datum["tags"]} split_fraction = # some logic that generates a train/test split number if split_faction < test_fraction: data_test.append(datum) else: data_train.append(datum) dataset = DatasetDict( { "train": Dataset.from_list(data_train), "test": Dataset.from_list(data_test), "full": Dataset.from_list(data_concat), }, ) dataset_shuffled = dataset.shuffle(seed=shuffle_seed) dataset_shuffled.push_to_hub( repo_id=hf_repo_id, private=True, config_name=m, revision=revision, token=hf_token, ) ``` Loading it later: ```python dataset = datasets.load_dataset( path=hf_repo_id, name=name, token=hf_token, ) ``` Produces: ``` DatasetDict({ train: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) test: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) full: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) }) ``` ### Expected behavior The expected result is below: ``` DatasetDict({ train: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) test: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) full: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) }) ``` My workaround is as follows: ```python dsinfo = datasets.get_dataset_config_info( path=data_files, config_name=data_config, token=hf_token, ) allfeatures = dsinfo.features.copy() if "tags" not in allfeatures: allfeatures["tags"] = [{"key": Value(dtype="string", id=None), "value": Value(dtype="string", id=None)}] dataset = datasets.load_dataset( path=data_files, name=data_config, features=allfeatures, token=hf_token, ) ``` Interestingly enough (and perhaps a related bug?), if I don't add the `tags` to `allfeatures` above (i.e. 
only loading `input` and `output`), it throws an error when executing `load_dataset`: ``` ValueError: Couldn't cast tags: list<element: struct<key: string, value: string>> child 0, element: struct<key: string, value: string> child 0, key: string child 1, value: string input: <obfuscated> output: <obfuscated> -- schema metadata -- huggingface: '{"info": {"features": {"tags": [{"key": {"dtype": "string",' + 532 to {'input': <obfuscated>, 'output': <obfuscated> because column names don't match ``` Traceback for this: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/load.py", line 2152, in load_dataset builder_instance.download_and_prepare( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 948, in download_and_prepare self._download_and_prepare( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1043, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1805, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1950, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info - `datasets` version: 2.15.0 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
false
2,052,229,538
https://api.github.com/repos/huggingface/datasets/issues/6521
https://github.com/huggingface/datasets/issues/6521
6,521
The order of the splits is not preserved
closed
1
2023-12-21T11:17:27
2023-12-22T11:36:15
2023-12-22T11:36:15
albertvillanova
[ "bug" ]
We had a regression and the order of the splits is not preserved. They are alphabetically sorted, instead of preserving the original "train", "validation", "test" order. Check: In branch "main" ```python In [9]: dataset = load_dataset("adversarial_qa", "adversarialQA") In [10]: dataset Out[10]: DatasetDict({ test: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 30000 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) }) ``` Before (2.15.0) it was: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 30000 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) test: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) }) ``` See issues: - https://huggingface.co/datasets/adversarial_qa/discussions/3 - https://huggingface.co/datasets/beans/discussions/4 This is a regression because it was previously fixed. See: - #6196 - #5728
false
2,052,059,078
https://api.github.com/repos/huggingface/datasets/issues/6520
https://github.com/huggingface/datasets/pull/6520
6,520
Support commit_description parameter in push_to_hub
closed
2
2023-12-21T09:36:11
2023-12-21T14:49:47
2023-12-21T14:43:35
albertvillanova
[]
Support `commit_description` parameter in `push_to_hub`. CC: @Wauplin
true
2,050,759,824
https://api.github.com/repos/huggingface/datasets/issues/6519
https://github.com/huggingface/datasets/pull/6519
6,519
Support push_to_hub canonical datasets
closed
4
2023-12-20T15:16:45
2023-12-21T14:48:20
2023-12-21T14:40:57
albertvillanova
[]
Support `push_to_hub` canonical datasets. This is necessary in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet Note that before this PR, the `repo_id` "dataset_name" was transformed to "user/dataset_name". This behavior was introduced by: - #6269
true
2,050,137,038
https://api.github.com/repos/huggingface/datasets/issues/6518
https://github.com/huggingface/datasets/pull/6518
6,518
fix get_metadata_patterns function args error
closed
3
2023-12-20T09:06:22
2023-12-21T15:14:17
2023-12-21T15:07:57
d710055071
[]
Bug get_metadata_patterns arg error https://github.com/huggingface/datasets/issues/6517
true
2,050,121,588
https://api.github.com/repos/huggingface/datasets/issues/6517
https://github.com/huggingface/datasets/issues/6517
6,517
Bug get_metadata_patterns arg error
closed
0
2023-12-20T08:56:44
2023-12-22T00:24:23
2023-12-22T00:24:23
d710055071
[]
https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69 metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config)
false
2,050,033,322
https://api.github.com/repos/huggingface/datasets/issues/6516
https://github.com/huggingface/datasets/pull/6516
6,516
Support huggingface-hub pre-releases
closed
2
2023-12-20T07:52:29
2023-12-20T08:51:34
2023-12-20T08:44:44
albertvillanova
[]
Support `huggingface-hub` pre-releases. This way we will have our CI green when testing `huggingface-hub` release candidates. See: https://github.com/huggingface/datasets/tree/ci-test-huggingface-hub-v0.20.0.rc1 Close #6513.
true
2,049,724,251
https://api.github.com/repos/huggingface/datasets/issues/6515
https://github.com/huggingface/datasets/issues/6515
6,515
Why call http_head() when fsspec_head() succeeds?
closed
0
2023-12-20T02:25:51
2023-12-26T05:35:46
2023-12-26T05:35:46
d710055071
[]
https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14
false
2,049,600,663
https://api.github.com/repos/huggingface/datasets/issues/6514
https://github.com/huggingface/datasets/pull/6514
6,514
Cache backward compatibility with 2.15.0
closed
4
2023-12-19T23:52:25
2023-12-21T21:14:11
2023-12-21T21:07:55
lhoestq
[]
...for datasets without scripts It takes into account the changes in cache from - https://github.com/huggingface/datasets/pull/6493: switch to `config/version/commit_sha` schema - https://github.com/huggingface/datasets/pull/6454: fix `DataFilesDict` keys ordering when hashing requires https://github.com/huggingface/datasets/pull/6493 to be merged
true
2,048,869,151
https://api.github.com/repos/huggingface/datasets/issues/6513
https://github.com/huggingface/datasets/issues/6513
6,513
Support huggingface-hub 0.20.0
closed
0
2023-12-19T15:15:46
2023-12-20T08:44:45
2023-12-20T08:44:45
albertvillanova
[]
CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1 We need to merge: - #6510 - #6512 - #6516
false
2,048,795,819
https://api.github.com/repos/huggingface/datasets/issues/6512
https://github.com/huggingface/datasets/pull/6512
6,512
Remove deprecated HfFolder
closed
2
2023-12-19T14:40:49
2023-12-19T20:21:13
2023-12-19T20:14:30
lhoestq
[]
...and use `huggingface_hub.get_token()` instead
true
2,048,465,958
https://api.github.com/repos/huggingface/datasets/issues/6511
https://github.com/huggingface/datasets/pull/6511
6,511
Implement get dataset default config name
closed
3
2023-12-19T11:26:19
2023-12-21T14:48:57
2023-12-21T14:42:41
albertvillanova
[]
Implement `get_dataset_default_config_name`. Now that we support setting a configuration as default in `push_to_hub` (see #6500), we need a programmatically way to know in advance which is the default configuration. This will be used in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet Follow-up of: - #6500 CC: @severo
true
2,046,928,742
https://api.github.com/repos/huggingface/datasets/issues/6510
https://github.com/huggingface/datasets/pull/6510
6,510
Replace `list_files_info` with `list_repo_tree` in `push_to_hub`
closed
3
2023-12-18T15:34:19
2023-12-19T18:05:47
2023-12-19T17:58:34
mariosasko
[]
Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910)
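For illustration, a rough sketch of the migration this PR performs; the repo id is hypothetical, and filtering to `RepoFile` entries roughly reproduces what `list_files_info` returned:

```python
from huggingface_hub import HfApi
from huggingface_hub.hf_api import RepoFile

api = HfApi()
# list_repo_tree yields RepoFile/RepoFolder entries; keeping only files
# approximates the old list_files_info behavior
files = [
    entry
    for entry in api.list_repo_tree("user/dataset-repo", repo_type="dataset", recursive=True)
    if isinstance(entry, RepoFile)
]
```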
true
2,046,720,869
https://api.github.com/repos/huggingface/datasets/issues/6509
https://github.com/huggingface/datasets/pull/6509
6,509
Better cast error when generating dataset
closed
3
2023-12-18T13:57:24
2023-12-19T09:37:12
2023-12-19T09:31:03
lhoestq
[]
I want to improve the error message for datasets like https://huggingface.co/datasets/m-a-p/COIG-CQIA Cc @albertvillanova @severo is this new error ok ? Or should I use a dedicated error class ? New: ```python Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1920, in _prepare_split_single writer.write_table(table) File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2322, in table_cast return cast_table_to_schema(table, schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2276, in cast_table_to_schema raise CastError( datasets.table.CastError: Couldn't cast instruction: string other: string index: string domain: list<item: string> child 0, item: string output: string task_type: struct<major: list<item: string>, minor: list<item: string>> child 0, major: list<item: string> child 0, item: string child 1, minor: list<item: string> child 0, item: string task_name_in_eng: string input: string to {'answer_from': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'human_verified': Value(dtype='bool', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'copyright': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)} because column names don't match During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module> load_dataset("m-a-p/COIG-CQIA") File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset builder_instance.download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 936, in download_and_prepare self._download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1031, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1791, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1922, in _prepare_split_single raise DatasetGenerationCastError.from_cast_error( datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset All the data files must have the same columns, but at some point there are 3 new columns (other, index, task_name_in_eng) and 3 missing columns (answer_from, copyright, human_verified). 
This happened while the json dataset builder was generating data using hf://datasets/m-a-p/COIG-CQIA/coig_pc/coig_pc_core_sample.json (at revision b7b7ecf290f6515036c7c04bd8537228ac2eb474) Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations) ``` Previously: ```python Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1931, in _prepare_split_single writer.write_table(table) File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2295, in table_cast return cast_table_to_schema(table, schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2253, in cast_table_to_schema raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") ValueError: Couldn't cast task_type: struct<major: list<item: string>, minor: list<item: string>> child 0, major: list<item: string> child 0, item: string child 1, minor: list<item: string> child 0, item: string other: string instruction: string task_name_in_eng: string domain: list<item: string> child 0, item: string index: string output: string input: string to {'human_verified': Value(dtype='bool', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'answer_from': Value(dtype='string', id=None), 'copyright': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)} because column names don't match The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module> load_dataset("m-a-p/COIG-CQIA") File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset builder_instance.download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 949, in download_and_prepare self._download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1044, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1804, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1949, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ```
true
2,045,733,273
https://api.github.com/repos/huggingface/datasets/issues/6508
https://github.com/huggingface/datasets/pull/6508
6,508
Read GeoParquet files using parquet reader
closed
13
2023-12-18T04:50:37
2024-01-26T18:22:35
2024-01-26T16:18:41
weiji14
[]
Let GeoParquet files with the file extension `*.geoparquet` or `*.gpq` be readable by the default parquet reader. Those two file extensions are the ones most commonly used for GeoParquet files, and is included in the `gpq` validator tool at https://github.com/planetlabs/gpq/blob/e5576b4ee7306b4d2259d56c879465a9364dab90/cmd/gpq/command/convert.go#L73-L75 Addresses https://github.com/huggingface/datasets/issues/6438
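An illustrative call showing what this change enables; the repo and file names are hypothetical:

```python
from datasets import load_dataset

# with this change, *.geoparquet / *.gpq data files resolve to the parquet builder
ds = load_dataset("user/geo-repo", data_files={"train": "*.geoparquet"})  # hypothetical repo
```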
true
2,045,152,928
https://api.github.com/repos/huggingface/datasets/issues/6507
https://github.com/huggingface/datasets/issues/6507
6,507
where is glue_metric.py? @Frankie123421 what was the resolution to this?
closed
0
2023-12-17T09:58:25
2023-12-18T11:42:49
2023-12-18T11:42:49
Mcccccc1024
[]
> @Frankie123421 what was the resolution to this? use glue_metric.py instead of glue.py in load_metric _Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
false
2,044,975,038
https://api.github.com/repos/huggingface/datasets/issues/6506
https://github.com/huggingface/datasets/issues/6506
6,506
Incorrect test set labels for RTE and CoLA datasets via load_dataset
closed
1
2023-12-16T22:06:08
2023-12-21T09:57:57
2023-12-21T09:57:57
emreonal11
[]
### Describe the bug The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1. Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the test set for evaluation purposes? ### Steps to reproduce the bug !pip install datasets from datasets import load_dataset rte_data = load_dataset('glue', 'rte') cola_data = load_dataset('glue', 'cola') print(rte_data['test'][0:30]['label']) print(cola_data['test'][0:30]['label']) Output: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] The non-label test data seems to be fine: e.g. rte_data['test'][1] is: {'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.", 'sentence2': 'Authorities in Brazil hold 200 people as hostage.', 'label': -1, 'idx': 1} Training and validation data are also fine: e.g. rte_data['train][0] is: {'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.', 'sentence2': 'Weapons of Mass Destruction Found in Iraq.', 'label': 1, 'idx': 0} ### Expected behavior Expected the labels to be binary 0/1 values; Got all -1s instead ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
false
2,044,721,288
https://api.github.com/repos/huggingface/datasets/issues/6505
https://github.com/huggingface/datasets/issues/6505
6,505
Got stuck when trying to load a dataset
open
7
2023-12-16T11:51:07
2024-12-24T16:45:52
null
yirenpingsheng
[]
### Describe the bug

Hello, everyone. I met a problem when I was trying to load a data file using the load_dataset method on a Debian 10 system. The data file is not very large, only 1.63 MB with 600 records.

Here is my code:

```python
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
```

I waited for 20 minutes and there was still no response. I could not use Ctrl+C to cancel the command and had to use Ctrl+Z to kill it. I also tried it with a txt file; still no response after a long time.

I can load the same file successfully using my laptop (Windows 10, python 3.8.5, datasets==2.14.5). I can also make it work on another computer (Ubuntu 20.04.5 LTS, python 3.10.13, datasets 2.14.7). There it only takes 1-2 minutes.

Could you give me some suggestions? Thank you.

### Steps to reproduce the bug

```python
from datasets import load_dataset
dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json')
```

### Expected behavior

I hope it can load the file successfully.

### Environment info

OS: Debian GNU/Linux 10
Python: Python 3.10.13

Pip list:

```
Package                   Version
------------------------- ------------
accelerate                0.25.0
addict                    2.4.0
aiofiles                  23.2.1
aiohttp                   3.9.1
aiosignal                 1.3.1
aliyun-python-sdk-core    2.14.0
aliyun-python-sdk-kms     2.16.2
altair                    5.2.0
annotated-types           0.6.0
anyio                     3.7.1
async-timeout             4.0.3
attrs                     23.1.0
certifi                   2023.11.17
cffi                      1.16.0
charset-normalizer        3.3.2
click                     8.1.7
contourpy                 1.2.0
crcmod                    1.7
cryptography              41.0.7
cycler                    0.12.1
datasets                  2.14.7
dill                      0.3.7
docstring-parser          0.15
einops                    0.7.0
exceptiongroup            1.2.0
fastapi                   0.105.0
ffmpy                     0.3.1
filelock                  3.13.1
fonttools                 4.46.0
frozenlist                1.4.1
fsspec                    2023.10.0
gast                      0.5.4
gradio                    3.50.2
gradio_client             0.6.1
h11                       0.14.0
httpcore                  1.0.2
httpx                     0.25.2
huggingface-hub           0.19.4
idna                      3.6
importlib-metadata        7.0.0
importlib-resources       6.1.1
jieba                     0.42.1
Jinja2                    3.1.2
jmespath                  0.10.0
joblib                    1.3.2
jsonschema                4.20.0
jsonschema-specifications 2023.11.2
kiwisolver                1.4.5
markdown-it-py            3.0.0
MarkupSafe                2.1.3
matplotlib                3.8.2
mdurl                     0.1.2
modelscope                1.10.0
mpmath                    1.3.0
multidict                 6.0.4
multiprocess              0.70.15
networkx                  3.2.1
nltk                      3.8.1
numpy                     1.26.2
nvidia-cublas-cu12        12.1.3.1
nvidia-cuda-cupti-cu12    12.1.105
nvidia-cuda-nvrtc-cu12    12.1.105
nvidia-cuda-runtime-cu12  12.1.105
nvidia-cudnn-cu12         8.9.2.26
nvidia-cufft-cu12         11.0.2.54
nvidia-curand-cu12        10.3.2.106
nvidia-cusolver-cu12      11.4.5.107
nvidia-cusparse-cu12      12.1.0.106
nvidia-nccl-cu12          2.18.1
nvidia-nvjitlink-cu12     12.3.101
nvidia-nvtx-cu12          12.1.105
orjson                    3.9.10
oss2                      2.18.3
packaging                 23.2
pandas                    2.1.4
peft                      0.7.1
Pillow                    10.1.0
pip                       23.3.1
platformdirs              4.1.0
protobuf                  4.25.1
psutil                    5.9.6
pyarrow                   14.0.1
pyarrow-hotfix            0.6
pycparser                 2.21
pycryptodome              3.19.0
pydantic                  2.5.2
pydantic_core             2.14.5
pydub                     0.25.1
Pygments                  2.17.2
pyparsing                 3.1.1
python-dateutil           2.8.2
python-multipart          0.0.6
pytz                      2023.3.post1
PyYAML                    6.0.1
referencing               0.32.0
regex                     2023.10.3
requests                  2.31.0
rich                      13.7.0
rouge-chinese             1.0.3
rpds-py                   0.13.2
safetensors               0.4.1
scipy                     1.11.4
semantic-version          2.10.0
sentencepiece             0.1.99
setuptools                68.2.2
shtab                     1.6.5
simplejson                3.19.2
six                       1.16.0
sniffio                   1.3.0
sortedcontainers          2.4.0
sse-starlette             1.8.2
starlette                 0.27.0
sympy                     1.12
tiktoken                  0.5.2
tokenizers                0.15.0
tomli                     2.0.1
toolz                     0.12.0
torch                     2.1.2
tqdm                      4.66.1
transformers              4.36.1
triton                    2.1.0
trl                       0.7.4
typing_extensions         4.9.0
tyro                      0.6.0
tzdata                    2023.3
urllib3                   2.1.0
uvicorn                   0.24.0.post1
websockets                11.0.3
wheel                     0.41.2
xxhash                    3.4.1
yapf                      0.40.2
yarl                      1.9.4
zipp                      3.17.0
```
false
2,044,541,154
https://api.github.com/repos/huggingface/datasets/issues/6504
https://github.com/huggingface/datasets/issues/6504
6,504
Error Pushing to Hub
closed
0
2023-12-16T01:05:22
2023-12-16T06:20:53
2023-12-16T06:20:53
Jiayi-Pan
[]
### Describe the bug

Error when trying to push a dataset in a special format to hub

### Steps to reproduce the bug

```python
import datasets
from datasets import Dataset

dataset_dict = {
    "filename": ["apple", "banana"],
    "token": [[[1,2],[3,4]],[[1,2],[3,4]]],
    "label": [0, 1],
}
dataset = Dataset.from_dict(dataset_dict)
dataset = dataset.cast_column("token", datasets.features.features.Array2D(shape=(2, 2),dtype="int16"))
dataset.push_to_hub("SequenceModel/imagenet_val_256")
```

Error:

```
...
ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
  in "<unicode string>", line 8, column 16:
        shape: !!python/tuple
                   ^
```

### Expected behavior

Dataset being pushed to hub

### Environment info

- `datasets` version: 2.15.0
- Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35
- Python version: 3.11.5
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
false
2,043,847,591
https://api.github.com/repos/huggingface/datasets/issues/6503
https://github.com/huggingface/datasets/pull/6503
6,503
Fix streaming xnli
closed
2
2023-12-15T14:40:57
2023-12-15T14:51:06
2023-12-15T14:44:47
lhoestq
[]
This code was failing:

```python
In [1]: from datasets import load_dataset

In [2]: ds = load_dataset("xnli", "all_languages", split="test", streaming=True)
   ...:
   ...: sample_data = next(iter(ds))["premise"]  # pick up one data
   ...: input_text = list(sample_data.values())
```

```
File ~/hf/datasets/src/datasets/features/translation.py:104, in TranslationVariableLanguages.encode_example(self, translation_dict)
    102     return translation_dict
    103 elif self.languages and set(translation_dict) - lang_set:
--> 104     raise ValueError(
    105         f'Some languages in example ({", ".join(sorted(set(translation_dict) - lang_set))}) are not in valid set ({", ".join(lang_set)}).'
    106     )
    108 # Convert dictionary into tuples, splitting out cases where there are
    109 # multiple translations for a single language.
    110 translation_tuples = []

ValueError: Some languages in example (language, translation) are not in valid set (ur, fr, hi, sw, vi, el, de, th, en, tr, zh, ar, bg, ru, es).
```

because in streaming mode we expect features encode methods to be no-ops if the example is already encoded. I fixed `TranslationVariableLanguages` to account for that.
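For context, a sketch of the kind of guard this describes (illustrative only, not the exact patch; the real check lives in `TranslationVariableLanguages.encode_example`):

```python
def encode_example(translation_dict, languages):
    # Streaming re-applies feature encoding, so an already-encoded example
    # ({"language": [...], "translation": [...]}) must pass through untouched.
    if set(translation_dict) == {"language", "translation"}:
        return translation_dict
    if languages and set(translation_dict) - set(languages):
        raise ValueError("Some languages in example are not in the valid set.")
    # Otherwise encode the raw {lang: text} mapping into parallel lists
    langs = sorted(translation_dict)
    return {"language": langs, "translation": [translation_dict[lang] for lang in langs]}
```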
true
2,043,771,731
https://api.github.com/repos/huggingface/datasets/issues/6502
https://github.com/huggingface/datasets/pull/6502
6,502
Pickle support for `torch.Generator` objects
closed
2
2023-12-15T13:55:12
2023-12-15T15:04:33
2023-12-15T14:58:22
mariosasko
[]
Fix for https://discuss.huggingface.co/t/caching-a-dataset-processed-with-randomness/65616
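For context, a minimal sketch of the use case this unblocks (dataset and seed are hypothetical): a `map` transform that closes over a `torch.Generator` can now be fingerprinted, so its result is cached across runs instead of being recomputed.

```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(8))})
gen = torch.Generator().manual_seed(42)

# The generator is now picklable by the datasets hasher, so this map
# gets a stable fingerprint and hits the cache on the next run.
ds = ds.map(lambda ex: {"noise": torch.randn(1, generator=gen).item()})
print(ds[0])
```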
true
2,043,377,240
https://api.github.com/repos/huggingface/datasets/issues/6501
https://github.com/huggingface/datasets/issues/6501
6,501
OverflowError: value too large to convert to int32_t
open
1
2023-12-15T10:10:21
2025-06-27T04:27:14
null
zhangfan-algo
[]
### Describe the bug

![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630)

### Steps to reproduce the bug

just loading datasets

### Expected behavior

how can I fix it

### Environment info

pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl
pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl
done
false
2,043,258,633
https://api.github.com/repos/huggingface/datasets/issues/6500
https://github.com/huggingface/datasets/pull/6500
6,500
Enable setting config as default when push_to_hub
closed
8
2023-12-15T09:17:41
2023-12-18T11:56:11
2023-12-18T11:50:03
albertvillanova
[]
Fix #6497.
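For illustration, a sketch of the intended usage (repo and config names hypothetical; the parameter name follows the linked issue):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})
# Mark the "en" configuration as the one loaded when no config name is passed
ds.push_to_hub("user/my-dataset", config_name="en", set_default=True)
```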
true
2,043,166,976
https://api.github.com/repos/huggingface/datasets/issues/6499
https://github.com/huggingface/datasets/pull/6499
6,499
docs: add reference Git over SSH
closed
2
2023-12-15T08:38:31
2023-12-15T11:48:47
2023-12-15T11:42:38
severo
[]
see https://discuss.huggingface.co/t/update-datasets-getting-started-to-new-git-security/65893
true
2,042,075,969
https://api.github.com/repos/huggingface/datasets/issues/6498
https://github.com/huggingface/datasets/pull/6498
6,498
Fallback on dataset script if user wants to load default config
closed
8
2023-12-14T16:46:01
2023-12-15T13:16:56
2023-12-15T13:10:48
lhoestq
[]
Right now this code is failing on `main`:

```python
load_dataset("openbookqa")
```

This is because it tries to load the dataset from the Parquet export but the dataset has multiple configurations and the Parquet export doesn't know which one is the default one.

I fixed this by simply falling back on using the dataset script (which tells the user to pass `trust_remote_code=True`):

```python
load_dataset("openbookqa", trust_remote_code=True)
```

Note that if the user happened to specify a config name I don't fall back on the script since we can use the Parquet export in this case (no need to know which config is the default):

```python
load_dataset("openbookqa", "main")
```
true
2,041,994,274
https://api.github.com/repos/huggingface/datasets/issues/6497
https://github.com/huggingface/datasets/issues/6497
6,497
Support setting a default config name in push_to_hub
closed
0
2023-12-14T15:59:03
2023-12-18T11:50:04
2023-12-18T11:50:04
albertvillanova
[ "enhancement" ]
In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one.
false
2,041,589,386
https://api.github.com/repos/huggingface/datasets/issues/6496
https://github.com/huggingface/datasets/issues/6496
6,496
Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again.
open
1
2023-12-14T11:24:54
2023-12-14T12:22:21
null
GeorgesLorre
[]
**Describe the bug**

Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub.

```
huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc)

A commit has happened since. Please refresh and try again.
```

**Steps to reproduce the bug**

This is a minimal reproducer:

```python
import dask.dataframe as dd
import pandas as pd
import random
import os
import huggingface_hub
import datasets

huggingface_hub.login(token=os.getenv("HF_TOKEN"))

data = {"number": [random.randint(0,10) for _ in range(1000)]}
df = pd.DataFrame.from_dict(data)
dataframe = dd.from_pandas(df, npartitions=1)
dataframe = dataframe.repartition(npartitions=3)

schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema
repo_id = "GLorr/test-dask"
repo_path = f"hf://datasets/{repo_id}"
huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True)

dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema)
```

**Expected behavior**

Would expect to write to the hub without any problem.

**Environment info**

```
datasets==2.15.0
huggingface-hub==0.19.4
```
false
2,039,684,839
https://api.github.com/repos/huggingface/datasets/issues/6494
https://github.com/huggingface/datasets/issues/6494
6,494
Image Data loaded Twice
open
0
2023-12-13T13:11:42
2023-12-13T13:11:42
null
ArcaneLex
[]
### Describe the bug

![1702472610561](https://github.com/huggingface/datasets/assets/28867010/4b7ef5e7-32c3-4b73-84cb-5de059caa0b6)

When I follow https://huggingface.co/docs/datasets/image_load and try to load image data from a folder, I notice that each image is read twice in the returned data. As you can see in the attached image, there are only four images in the train folder, but reading brings up eight images.

### Steps to reproduce the bug

```python
from datasets import Dataset, load_dataset

dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False)
# print(dataset["train"][0]["image"] == dataset["train"][1]["image"])
print(dataset)
print(dataset["train"]["image"])
print(len(dataset["train"]["image"]))
```

### Expected behavior

```
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 8
    })
})
[<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D1CA8B0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D2452E0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245310>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2453A0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245460>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245430>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2454F0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245550>]
8
```

### Environment info

- `datasets` version: 2.14.5
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.9.17
- Huggingface_hub version: 0.19.4
- PyArrow version: 13.0.0
- Pandas version: 2.0.3
false
2,039,708,529
https://api.github.com/repos/huggingface/datasets/issues/6495
https://github.com/huggingface/datasets/issues/6495
6,495
Newline characters don't behave as expected when calling dataset.info
open
0
2023-12-12T23:07:51
2023-12-13T13:24:22
null
gerald-wrona
[]
### System Info

- `transformers` version: 4.32.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.11.5
- Huggingface_hub version: 0.15.1
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.1.1+cpu (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help?

@marios

### Information

- [X] The official example scripts
- [ ] My own modified scripts

### Tasks

- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction

[Source](https://huggingface.co/docs/datasets/v2.2.1/en/access)

```python
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```

```
DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673)
```

### Expected behavior

```python
from datasets import load_dataset
dataset = load_dataset('glue', 'mrpc', split='train')
dataset.info
```

```
DatasetInfo(
description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and
Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}}, download_size=1494541, post_processing_size=None, dataset_size=1492156, size_in_bytes=2986697
)
```
false
2,038,221,490
https://api.github.com/repos/huggingface/datasets/issues/6493
https://github.com/huggingface/datasets/pull/6493
6,493
Lazy data files resolution and offline cache reload
closed
8
2023-12-12T17:15:17
2023-12-21T15:19:20
2023-12-21T15:13:11
lhoestq
[]
Includes both https://github.com/huggingface/datasets/pull/6458 and https://github.com/huggingface/datasets/pull/6459

This PR should be merged instead of the two individually, since they are conflicting.

## Offline cache reload

It can reload datasets that were pushed to hub if they exist in the cache. Example:

```python
>>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp")
>>> load_dataset("lhoestq/tmp")
DatasetDict({
    train: Dataset({
        features: ['a'],
        num_rows: 2
    })
})
```

and later, without connection:

```python
>>> load_dataset("lhoestq/tmp")
Using the latest cached version of the dataset since lhoestq/tmp couldn't be found on the Hugging Face Hub
Found the latest cached dataset configuration 'default' at /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/default/0.0.0/da0e902a945afeb9 (last modified on Wed Dec 13 14:55:52 2023).
DatasetDict({
    train: Dataset({
        features: ['a'],
        num_rows: 2
    })
})
```

- Updated `CachedDatasetModuleFactory` to look for datasets in the cache at `<namespace>___<dataset_name>/<config_id>`
- Since the metadata configs parameters are not available in offline mode, we don't know which folder to load (config_id and hash change), so I simply load the latest one
- I instantiate a BuilderConfig even if there is no metadata config with the right config_name
  - Its config_id is equal to the config_name, to be able to retrieve it in the cache (no more suffix for configs from metadata configs)
  - We can reload this config in offline mode by specifying the right config_name (same as online!)
- Consequences of this change:
  - Only when there are user parameters does it create a custom builder config with config_id = config_name + user parameters hash
  - The hash used to name the cache folder takes into account the metadata config and the dataset info, so that the right cache can be reloaded when there is internet connection without redownloading the data or resolving the data files. For local directories I hash the builder configs and dataset info, and for datasets on the hub I use the commit sha as hash.
  - Cache directories now look like `config/version/commit_sha` for hub datasets, which is clean :)

Fix https://github.com/huggingface/datasets/issues/3547

## Lazy data files resolution

This makes this code run in 2 sec instead of >10 sec:

```python
from datasets import load_dataset

ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False)
```

For some datasets with many configs and files it can be up to 100x faster. This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts.

The data files are only resolved in the builder `__init__`. To do so I added DataFilesPatternsList and DataFilesPatternsDict that have `.resolve()` to return resolved DataFilesList and DataFilesDict.
true
2,037,987,267
https://api.github.com/repos/huggingface/datasets/issues/6492
https://github.com/huggingface/datasets/pull/6492
6,492
Make push_to_hub return CommitInfo
closed
3
2023-12-12T15:18:16
2023-12-13T14:29:01
2023-12-13T14:22:41
albertvillanova
[]
Make `push_to_hub` return `CommitInfo`. This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID. CC: @severo for the use case in https://huggingface.co/datasets/jmhessel/newyorker_caption_contest/discussions/4
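For illustration, a sketch of the use case (repo name hypothetical): with `create_pr=True`, the returned `CommitInfo` exposes the created PR's URL.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})
commit_info = ds.push_to_hub("user/my-dataset", create_pr=True)
print(commit_info.pr_url)  # URL of the pull request opened on the Hub
```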
true
2,037,690,643
https://api.github.com/repos/huggingface/datasets/issues/6491
https://github.com/huggingface/datasets/pull/6491
6,491
Fix metrics dead link
closed
2
2023-12-12T12:51:49
2023-12-21T15:15:08
2023-12-21T15:08:53
qgallouedec
[]
null
true
2,037,204,892
https://api.github.com/repos/huggingface/datasets/issues/6490
https://github.com/huggingface/datasets/issues/6490
6,490
`load_dataset(...,save_infos=True)` not working without loading script
open
1
2023-12-12T08:09:18
2023-12-12T08:36:22
null
morganveyret
[]
### Describe the bug

It seems that saving dataset infos back into the card file is not working for datasets without a loading script. After tracking the problem a bit, it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory. Internally this is a call to `inspect.getfile()`, but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`) this method actually returns the path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`).

### Steps to reproduce the bug

1. Have a local dataset without any loading script
2. Make sure there are no dataset infos in the README.md
3. Load with `save_infos=True`
4. No change in the dataset README.md
5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`)

### Expected behavior

The dataset README.md should be updated and no file should be created in the python environment.

### Environment info

- `datasets` version: 2.15.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.3
- `fsspec` version: 2023.6.0
false
2,036,743,777
https://api.github.com/repos/huggingface/datasets/issues/6489
https://github.com/huggingface/datasets/issues/6489
6,489
load_dataset imagefolder for AWS S3 path
open
0
2023-12-12T00:08:43
2023-12-12T00:09:27
null
segalinc
[ "enhancement" ]
### Feature request

I would like to load a dataset from S3 using the imagefolder option, something like:

```python
dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True)
```

### Motivation

No need for data_files.

### Your contribution

No experience with this.
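For comparison, a hedged sketch of how this might look with the existing `storage_options` parameter instead of an `fs` argument (untested for S3 + imagefolder; bucket path and credentials hypothetical):

```python
import datasets

# storage_options is forwarded to fsspec's S3 filesystem
dataset = datasets.load_dataset(
    "imagefolder",
    data_dir="s3://my-bucket/lsun/train/bedroom",  # hypothetical bucket
    storage_options={"key": "...", "secret": "..."},
)
```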
false
2,035,899,898
https://api.github.com/repos/huggingface/datasets/issues/6488
https://github.com/huggingface/datasets/issues/6488
6,488
429 Client Error
open
2
2023-12-11T15:06:01
2024-06-20T05:55:45
null
sasaadi
[]
Hello, I was downloading the following dataset and after 20% of the data was downloaded, I started getting error 429. It has not been resolved for a few days. How should I resolve it? Thanks

Dataset: https://huggingface.co/datasets/cerebras/SlimPajama-627B

Error: `requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/datasets/cerebras/SlimPajama-627B/resolve/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk1/example_train_3300.jsonl.zst`
false
2,035,424,254
https://api.github.com/repos/huggingface/datasets/issues/6487
https://github.com/huggingface/datasets/pull/6487
6,487
Update builder hash with info
closed
2
2023-12-11T11:09:16
2024-01-11T06:35:07
2023-12-11T11:41:34
lhoestq
[]
Currently, if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change. This is problematic because you want to regenerate a dataset if you change the features or the split sizes, for example (e.g. after push_to_hub).

Ideally we should take the resolved files into account as well, but this will be for another PR.
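For illustration, a sketch of the scenario this fixes (repo name hypothetical):

```python
from datasets import load_dataset

ds = load_dataset("user/my-dataset")  # prepared and cached
# Suppose the `dataset_info` section of the repo's README.md YAML is then
# edited (new features or split sizes). The builder hash now includes that
# info, so the next load regenerates instead of reusing a stale cache:
ds = load_dataset("user/my-dataset")
```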
true
2,035,206,206
https://api.github.com/repos/huggingface/datasets/issues/6486
https://github.com/huggingface/datasets/pull/6486
6,486
Fix docs phrasing about supported formats when sharing a dataset
closed
2
2023-12-11T09:21:22
2023-12-13T14:21:29
2023-12-13T14:15:21
albertvillanova
[]
Fix docs phrasing.
true
2,035,141,884
https://api.github.com/repos/huggingface/datasets/issues/6485
https://github.com/huggingface/datasets/issues/6485
6,485
FileNotFoundError: [Errno 2] No such file or directory: 'nul'
closed
1
2023-12-11T08:52:13
2023-12-14T08:09:08
2023-12-14T08:09:08
amanyara
[]
### Describe the bug

Something seems to be wrong with my terribly buggy setup. When I run the single line `import datasets`, I meet this error:

FileNotFoundError: [Errno 2] No such file or directory: 'nul'

![image](https://github.com/huggingface/datasets/assets/73683903/3973c120-ebb1-42b7-bede-b9de053e861d)
![image](https://github.com/huggingface/datasets/assets/73683903/0496adff-a7a7-4dcb-929e-ec11ede71f04)

### Steps to reproduce the bug

1. import datasets

### Expected behavior

I just run a single line of code and get stuck on this bug.

### Environment info

OS: Windows10
Datasets==2.15.0
python=3.10
false
2,032,946,981
https://api.github.com/repos/huggingface/datasets/issues/6483
https://github.com/huggingface/datasets/issues/6483
6,483
Iterable Dataset: rename column clashes with remove column
closed
4
2023-12-08T16:11:30
2023-12-08T16:27:16
2023-12-08T16:27:04
sanchit-gandhi
[ "streaming" ]
### Describe the bug

Suppose I have two iterable datasets, one with the features:

* `{"audio", "text", "column_a"}`

And the other with the features:

* `{"audio", "sentence", "column_b"}`

I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:

1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)

However, the process of renaming and removing columns in an iterable dataset doesn't work, since we need to preserve the original text column, meaning we can't combine the datasets.

### Steps to reproduce the bug

```python
from datasets import load_dataset

# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)

# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")

# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))

# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```

Traceback:

```python
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[5], line 17
     14 COLUMNS_TO_KEEP = {"audio", "sentence"}
     15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))

File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
   1350         yield formatter.format_row(pa_table)
   1351     return
-> 1353 for key, example in ex_iterable:
   1354     if self.features:
   1355         # `IterableDataset` automatically fills missing columns with None.
   1356         # This is done with `_apply_feature_types_on_example`.
   1357         example = _apply_feature_types_on_example(
   1358             example, self.features, token_per_repo_id=self._token_per_repo_id
   1359         )

File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
    650     yield from ArrowExamplesIterable(self._iter_arrow, {})
    651 else:
--> 652     yield from self._iter()

File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
    727 if self.remove_columns:
    728     for c in self.remove_columns:
--> 729         del transformed_example[c]
    730 yield key, transformed_example
    731 current_idx += 1

KeyError: 'text'
```

=> we see that `datasets` is looking for the column "text", even though we've renamed this to "sentence" and then removed the unwanted "text" column from our dataset.

### Expected behavior

Should be able to rename and remove columns from iterable dataset.

### Environment info

- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2
false
2,033,333,294
https://api.github.com/repos/huggingface/datasets/issues/6484
https://github.com/huggingface/datasets/issues/6484
6,484
[Feature Request] Dataset versioning
open
2
2023-12-08T16:01:35
2023-12-11T19:13:46
null
kenfus
[]
**Is your feature request related to a problem? Please describe.**

I am working on a project where I would like to test different preprocessing methods for my ML data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword because it was not redownloading the data; it was reading in some cached data until I put `download_mode="force_redownload"`, even though the revision was different. Of course, I may have done something wrong or missed a setting somewhere!

**Describe the solution you'd like**

The solution would allow me to easily work with revisions:

- create a new dataset (by combining things, different preprocessing, ...) and give it a new revision (v.1.2.3), maybe like this: `dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:

```python
dataset = load_dataset(
    'kenfus/xy',
    revision='v1.0.2',
)
```

This downloads the new version and does not load in a different revision, and all future map, filter, ... operations are done on this dataset and not loaded from cache produced from a different revision.

- if I rerun the run, the caching should be smart enough in every step to not reuse a mapping operation on a different revision.

**Describe alternatives you've considered**

I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False` everywhere.

**Additional context**

Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.

This is the data loading in my script:

```python
## CREATE PATHS
prepared_dataset_path = os.path.join(
    DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)

## LOAD DATASET
if os.path.exists(prepared_dataset_path):
    print("Loading prepared dataset from disk...")
    dataset_prepared = load_from_disk(prepared_dataset_path)
else:
    print("Loading dataset from HuggingFace Datasets...")
    dataset = load_dataset(
        PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
    )
    print("Preparing dataset...")
    dataset_prepared = dataset.map(
        prepare_dataset,
        remove_columns=["audio", "transcription"],
        num_proc=os.cpu_count(),
        load_from_cache_file=False,
    )
    dataset_prepared.save_to_disk(prepared_dataset_path)
    del dataset

if CHECK_DATASET:
    ## CHECK DATASET
    dataset_prepared = dataset_prepared.map(
        check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
    )
    dataset_filtered = dataset_prepared.filter(
        lambda example: not example["incorrect_dimension"],
        load_from_cache_file=False,
    )
    for example in dataset_prepared.filter(
        lambda example: example["incorrect_dimension"], load_from_cache_file=False
    ):
        print(example["path"])
    print(
        f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
    )

print("Number of examples train: ", len(dataset_filtered["train"]))
print("Number of examples test: ", len(dataset_filtered["test"]))
```
false
2,032,675,918
https://api.github.com/repos/huggingface/datasets/issues/6482
https://github.com/huggingface/datasets/pull/6482
6,482
Fix max lock length on unix
closed
3
2023-12-08T13:39:30
2023-12-12T11:53:32
2023-12-12T11:47:27
lhoestq
[]
reported in https://github.com/huggingface/datasets/pull/6482
true
2,032,650,003
https://api.github.com/repos/huggingface/datasets/issues/6481
https://github.com/huggingface/datasets/issues/6481
6,481
using torchrun, save_to_disk suddenly shows SIGTERM
open
0
2023-12-08T13:22:03
2023-12-08T13:22:03
null
Ariya12138
[]
### Describe the bug

When I run my code using the "torchrun" command and the code reaches the "save_to_disk" part, I suddenly get the following warning and error messages. Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard.

```
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
```

### Steps to reproduce the bug

```python
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
```

```
Saving the dataset (14/70 shards):  20%|██        | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-08_20:09:04
  rank      : 0 (local_rank: 0)
  exitcode  : -7 (pid: 2224967)
  error_file: <N/A>
  traceback : Signal 7 (SIGBUS) received by PID 2224967
```

### Expected behavior

I hope it can save successfully without any issues, but it seems there is a problem.

### Environment info

- `datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2
false