id
int64
599M
3.29B
url
stringlengths
58
61
html_url
stringlengths
46
51
number
int64
1
7.72k
title
stringlengths
1
290
state
stringclasses
2 values
comments
int64
0
70
created_at
timestamp[s]date
2020-04-14 10:18:02
2025-08-05 09:28:51
updated_at
timestamp[s]date
2020-04-27 16:04:17
2025-08-05 11:39:56
closed_at
timestamp[s]date
2020-04-14 12:01:40
2025-08-01 05:15:45
βŒ€
user_login
stringlengths
3
26
labels
listlengths
0
4
body
stringlengths
0
228k
βŒ€
is_pull_request
bool
2 classes
2,031,116,653
https://api.github.com/repos/huggingface/datasets/issues/6480
https://github.com/huggingface/datasets/pull/6480
6,480
Add IterableDataset `__repr__`
closed
2
2023-12-07T16:31:50
2023-12-08T13:33:06
2023-12-08T13:26:54
lhoestq
[]
Example for glue sst2: Dataset ``` DatasetDict({ test: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 1821 }) train: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 67349 }) validation: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 872 }) }) ``` IterableDataset (new) ``` IterableDatasetDict({ test: IterableDataset({ features: ['sentence', 'label', 'idx'], n_shards: 1 }) train: IterableDataset({ features: ['sentence', 'label', 'idx'], n_shards: 1 }) validation: IterableDataset({ features: ['sentence', 'label', 'idx'], n_shards: 1 }) }) ``` IterableDataset (before) ``` {'test': <datasets.iterable_dataset.IterableDataset object at 0x130d421f0>, 'train': <datasets.iterable_dataset.IterableDataset object at 0x136f3aaf0>, 'validation': <datasets.iterable_dataset.IterableDataset object at 0x136f4b100>} {'sentence': 'hide new secretions from the parental units ', 'label': 0, 'idx': 0} ```
true
2,029,040,121
https://api.github.com/repos/huggingface/datasets/issues/6479
https://github.com/huggingface/datasets/pull/6479
6,479
More robust preupload retry mechanism
closed
2
2023-12-06T17:19:38
2023-12-06T19:47:29
2023-12-06T19:41:06
mariosasko
[]
null
true
2,028,071,596
https://api.github.com/repos/huggingface/datasets/issues/6478
https://github.com/huggingface/datasets/issues/6478
6,478
How to load data from lakefs
closed
3
2023-12-06T09:04:11
2024-07-03T19:13:57
2024-07-03T19:13:56
d710055071
[]
My dataset is stored on the company's lakefs server. How can I write code to load the dataset? It would be great if I could provide code examples or provide some references
false
2,028,022,374
https://api.github.com/repos/huggingface/datasets/issues/6477
https://github.com/huggingface/datasets/pull/6477
6,477
Fix PermissionError on Windows CI
closed
2
2023-12-06T08:34:53
2023-12-06T09:24:11
2023-12-06T09:17:52
albertvillanova
[]
Fix #6476.
true
2,028,018,596
https://api.github.com/repos/huggingface/datasets/issues/6476
https://github.com/huggingface/datasets/issues/6476
6,476
CI on windows is broken: PermissionError
closed
0
2023-12-06T08:32:53
2023-12-06T09:17:53
2023-12-06T09:17:53
albertvillanova
[ "bug" ]
See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394 ``` FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\\default\\0.0.0\\c240e2be3370bdbd\\dataset_with_script-train.arrow' ```
false
2,027,373,734
https://api.github.com/repos/huggingface/datasets/issues/6475
https://github.com/huggingface/datasets/issues/6475
6,475
laion2B-en failed to load on Windows with PrefetchVirtualMemory failed
open
6
2023-12-06T00:07:34
2023-12-06T23:26:23
null
doctorpangloss
[]
### Describe the bug I have downloaded laion2B-en, and I'm receiving the following error trying to load it: ``` Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 128/128 [00:00<00:00, 1173.79it/s] Traceback (most recent call last): File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <module> count = compute_frequencies() ^^^^^^^^^^^^^^^^^^^^^ File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 17, in compute_frequencies laion2b_dataset = load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\load.py", line 2165, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1187, in as_dataset datasets = map_nested( ^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\utils\py_utils.py", line 456, in map_nested return function(data_struct) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1217, in _build_single_dataset ds = self._as_dataset( ^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1291, in _as_dataset dataset_kwargs = ArrowReader(cache_dir, self.info).read( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 244, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 265, in read_files pa_table = self._read_files(files, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 200, in _read_files pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 336, in _get_table_from_filename table = ArrowReader.read_table(filename, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 357, in read_table return table_cls.from_file(filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 1059, in from_file table = _memory_mapped_arrow_table_from_file(filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 66, in _memory_mapped_arrow_table_from_file pa_table = opened_stream.read_all() ^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow\ipc.pxi", line 757, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status OSError: [WinError 8] PrefetchVirtualMemory failed. Detail: [Windows error 8] Not enough memory resources are available to process this command. ``` This error is probably a red herring: https://stackoverflow.com/questions/50263929/numpy-memmap-returns-not-enough-memory-while-there-are-plenty-available In other words, the issue is related to asking for a memory mapping of length N > M the length of the file on Windows. This gracefully succeeds on Linux. I have 1024 arrow files in my cache instead of 128 like in the repository for it. Probably related. I don't know why `datasets` reorganized/rewrote the dataset in my cache to be 1024 slices instead of the original 128. ### Steps to reproduce the bug ``` # as a huggingface developer, you may already have laion2B-en somewhere _CACHE_DIR = "." from datasets import load_dataset load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False) ``` ### Expected behavior This should correctly load as a memory mapped Arrow dataset. ### Environment info - `datasets` version: 2.15.0 - Platform: Windows-10-10.0.20348-SP0 (this is windows 2022) - Python version: 3.11.4 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.2 - `fsspec` version: 2023.10.0
false
2,027,006,715
https://api.github.com/repos/huggingface/datasets/issues/6474
https://github.com/huggingface/datasets/pull/6474
6,474
Deprecate Beam API and download from HF GCS bucket
closed
2
2023-12-05T19:51:33
2024-03-12T14:56:25
2024-03-12T14:50:12
mariosasko
[]
Deprecate the Beam API and download from the HF GCS bucked. TODO: - [x] Convert the Beam-based [`wikipedia`](https://huggingface.co/datasets/wikipedia) to an Arrow-based dataset ([Hub PR](https://huggingface.co/datasets/wikipedia/discussions/19)) - [x] Make [`natural_questions`](https://huggingface.co/datasets/natural_questions) a no-code dataset ([Hub PR](https://huggingface.co/datasets/natural_questions/discussions/7)) - [x] Make [`wiki40b`](https://huggingface.co/datasets/wiki40b) a no-code dataset ([Hub PR](https://huggingface.co/datasets/wiki40b/discussions/5)) - [x] Make [`wiki_dpr`](https://huggingface.co/datasets/wiki_dpr) an Arrow-based dataset ([Hub PR](https://huggingface.co/datasets/wiki_dpr/discussions/14))
true
2,026,495,084
https://api.github.com/repos/huggingface/datasets/issues/6473
https://github.com/huggingface/datasets/pull/6473
6,473
Fix CI quality
closed
2
2023-12-05T15:36:23
2023-12-05T18:14:50
2023-12-05T18:08:41
albertvillanova
[]
Fix #6472.
true
2,026,493,439
https://api.github.com/repos/huggingface/datasets/issues/6472
https://github.com/huggingface/datasets/issues/6472
6,472
CI quality is broken
closed
0
2023-12-05T15:35:34
2023-12-06T08:17:34
2023-12-05T18:08:43
albertvillanova
[ "bug", "maintenance" ]
See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359 ``` Would reformat: src/datasets/features/image.py 1 file would be reformatted, 253 files left unchanged ```
false
2,026,100,761
https://api.github.com/repos/huggingface/datasets/issues/6471
https://github.com/huggingface/datasets/pull/6471
6,471
Remove delete doc CI
closed
2
2023-12-05T12:37:50
2023-12-05T12:44:59
2023-12-05T12:38:50
lhoestq
[]
null
true
2,024,724,319
https://api.github.com/repos/huggingface/datasets/issues/6470
https://github.com/huggingface/datasets/issues/6470
6,470
If an image in a dataset is corrupted, we get unescapable error
open
0
2023-12-04T20:58:49
2023-12-04T20:58:49
null
chigozienri
[]
### Describe the bug Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1 ### Steps to reproduce the bug ``` from datasets import load_dataset, VerificationMode dataset = load_dataset( 'sasha/birdsnap', split="train", verification_mode=VerificationMode.ALL_CHECKS, streaming=True # I recommend using streaming=True when reproducing, as this dataset is large ) for idx, row in enumerate(dataset): # Iterating to 9287 took 7 minutes for me # If you already have the data locally cached and set streaming=False, you see the same error just by with dataset[9287] pass # error at 9287 OSError: image file is truncated (45 bytes not processed) # note that we can't avoid the error using a try/except + continue inside the loop ``` ### Expected behavior Able to escape errors in casting to Image() without killing the whole loop ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.10.0
false
2,023,695,839
https://api.github.com/repos/huggingface/datasets/issues/6469
https://github.com/huggingface/datasets/pull/6469
6,469
Don't expand_info in HF glob
closed
3
2023-12-04T12:00:37
2023-12-15T13:18:37
2023-12-15T13:12:30
lhoestq
[]
Finally fix https://github.com/huggingface/datasets/issues/5537
true
2,023,617,877
https://api.github.com/repos/huggingface/datasets/issues/6468
https://github.com/huggingface/datasets/pull/6468
6,468
Use auth to get parquet export
closed
2
2023-12-04T11:18:27
2023-12-04T17:21:22
2023-12-04T17:15:11
lhoestq
[]
added `token` to the `_datasets_server` functions
true
2,023,174,233
https://api.github.com/repos/huggingface/datasets/issues/6467
https://github.com/huggingface/datasets/issues/6467
6,467
New version release request
closed
2
2023-12-04T07:08:26
2023-12-04T15:42:22
2023-12-04T15:42:22
LZHgrla
[ "enhancement" ]
### Feature request Hi! I am using `datasets` in library `xtuner` and am highly interested in the features introduced since v2.15.0. To avoid installation from source in our pypi wheels, we are eagerly waiting for the new release. So, Does your team have a new release plan for v2.15.1 and could you please share it with us? Thanks very much! ### Motivation . ### Your contribution .
false
2,022,601,176
https://api.github.com/repos/huggingface/datasets/issues/6466
https://github.com/huggingface/datasets/issues/6466
6,466
Can't align optional features of struct
closed
3
2023-12-03T15:57:07
2024-02-15T15:19:33
2024-02-08T14:38:34
Dref360
[]
### Describe the bug Hello! I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional. I have a column named `speaker`, and this holds some information about a speaker. ```python @dataclass class Speaker: name: str email: Optional[str] ``` If I have two datasets, one happens to have `email` always None, then I get `The features can't be aligned because the key email of features` ### Steps to reproduce the bug You can run the following script: ```python ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}) ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': 'abc@aol.com'}]}) concatenate_datasets([ds, ds2]) >>>The features can't be aligned because the key speaker of features {'speaker': {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)}} has unexpected type - {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)} (expected either {'email': Value(dtype='null', id=None), 'name': Value(dtype='string', id=None)} or Value("null"). ``` ### Expected behavior I think this should work; if two top-level columns were in the same situation it would properly cast to `string`. ```python ds = Dataset.from_dict({'email': [None, None]}) ds2 = Dataset.from_dict({'email': ['abc@aol.com', 'one@yahoo.com']}) concatenate_datasets([ds, ds2]) >>> # Works! ``` ### Environment info - `datasets` version: 2.15.1.dev0 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.9.13 - `huggingface_hub` version: 0.19.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.4 - `fsspec` version: 2023.6.0 I would be happy to fix this issue.
false
2,022,212,468
https://api.github.com/repos/huggingface/datasets/issues/6465
https://github.com/huggingface/datasets/issues/6465
6,465
`load_dataset` uses out-of-date cache instead of re-downloading a changed dataset
open
2
2023-12-02T21:35:17
2024-08-20T08:32:11
null
mnoukhov
[]
### Describe the bug When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset ### Steps to reproduce the bug Here is a minimal example script to 1. create an initial dataset and upload 2. download it so it is stored in cache 3. change the dataset and re-upload 4. redownload ```python import time from datasets import Dataset, DatasetDict, DownloadMode, load_dataset username = "YOUR_USERNAME_HERE" initial = Dataset.from_dict({"foo": [1, 2, 3]}) print(f"Intial {initial['foo']}") initial_ds = DatasetDict({"train": initial}) initial_ds.push_to_hub("test") time.sleep(1) download = load_dataset(f"{username}/test", split="train") changed = download.map(lambda x: {"foo": x["foo"] + 1}) print(f"Changed {changed['foo']}") changed.push_to_hub("test") time.sleep(1) download_again = load_dataset(f"{username}/test", split="train") print(f"Download Changed {download_again['foo']}") # >>> gives the out-dated [1,2,3] when it should be changed [2,3,4] ``` The redownloaded dataset should be the changed dataset but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset ```python download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD) print(f"Force Download Changed {download_again_force['foo']}") # >>> [2,3,4] ``` ### Expected behavior I assumed there should be some sort of hashing that should check for changes in the dataset and re-download if the hashes don't match ### Environment info - `datasets` version: 2.15.0 β”‚ - Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17 β”‚ - Python version: 3.8.17 β”‚ - `huggingface_hub` version: 0.19.4 β”‚ - PyArrow version: 13.0.0 β”‚ - Pandas version: 2.0.3 β”‚ - `fsspec` version: 2023.6.0
false
2,020,860,462
https://api.github.com/repos/huggingface/datasets/issues/6464
https://github.com/huggingface/datasets/pull/6464
6,464
Add concurrent loading of shards to datasets.load_from_disk
closed
8
2023-12-01T13:13:53
2024-01-26T15:17:43
2024-01-26T15:10:26
kkoutini
[]
In some file systems (like luster), memory mapping arrow files takes time. This can be accelerated by performing the mmap in parallel on processes or threads. - Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252). - I'm not sure if using threads would respect theΒ `IN_MEMORY_MAX_SIZE` config. - I'm not sure if we need to exposeΒ num_procΒ fromΒ `BaseReader.read`Β toΒ `DatasetBuilder.as_dataset`. Since `Β DatasetBuilder.as_dataset` is used in many places beside `load_dataset`. ### Tests on luster file system (on a shared partial node): Loading 1231 shards of ~2GBs. The files were pre-loaded in another process before the script runs (couldn't get a fresh node). ```python import logging from time import perf_counter import datasets logger = datasets.logging.get_logger(__name__) datasets.logging.set_verbosity_info() logging.basicConfig(level=logging.DEBUG, format="%(message)s") class catchtime: # context to measure loading time: https://stackoverflow.com/questions/33987060/python-context-manager-that-measures-time def __init__(self, debug_print="Time", logger=logger): self.debug_print = debug_print self.logger = logger def __enter__(self): self.start = perf_counter() return self def __exit__(self, type, value, traceback): self.time = perf_counter() - self.start readout = f"{self.debug_print}: {self.time:.3f} seconds" self.logger.info(readout) dataset_path="" # warmup with catchtime("Loading in parallel", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=16) # num_proc=16 with catchtime("Loading in parallel", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=16) # num_proc=32 with catchtime("Loading in parallel", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=32) # num_proc=1 with catchtime("Loading in conseq", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=1) ``` #### Run 1 ``` open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:28<00:00, 13.96shards/s] Loading in parallel: 88.690 seconds open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:48<00:00, 11.31shards/s] Loading in parallel: 109.339 seconds open file: .../dataset_dict.json Loading the dataset from disk using 32 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:06<00:00, 18.56shards/s] Loading in parallel: 66.931 seconds open file: .../dataset_dict.json Loading the dataset from disk: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [05:09<00:00, 3.98shards/s] Loading in conseq: 309.792 seconds ``` #### Run 2 ``` open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:38<00:00, 12.53shards/s] Loading in parallel: 98.831 seconds open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [02:01<00:00, 10.16shards/s] Loading in parallel: 121.669 seconds open file: .../dataset_dict.json Loading the dataset from disk using 32 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:07<00:00, 18.18shards/s] Loading in parallel: 68.192 seconds open file: .../dataset_dict.json Loading the dataset from disk: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [05:19<00:00, 3.86shards/s] Loading in conseq: 319.759 seconds ``` #### Run 3 ``` open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:36<00:00, 12.74shards/s] Loading in parallel: 96.936 seconds open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [02:00<00:00, 10.24shards/s] Loading in parallel: 120.761 seconds open file: .../dataset_dict.json Loading the dataset from disk using 32 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:08<00:00, 18.04shards/s] Loading in parallel: 68.666 seconds open file: .../dataset_dict.json Loading the dataset from disk: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [05:35<00:00, 3.67shards/s] Loading in conseq: 335.777 seconds ``` fix #2252
true
2,020,702,967
https://api.github.com/repos/huggingface/datasets/issues/6463
https://github.com/huggingface/datasets/pull/6463
6,463
Disable benchmarks in PRs
closed
2
2023-12-01T11:35:30
2023-12-01T12:09:09
2023-12-01T12:03:04
lhoestq
[]
In order to keep PR pages less spammy / more readable. Having the benchmarks on commits on `main` is enough imo
true
2,019,238,388
https://api.github.com/repos/huggingface/datasets/issues/6462
https://github.com/huggingface/datasets/pull/6462
6,462
Missing DatasetNotFoundError
closed
2
2023-11-30T18:09:43
2023-11-30T18:36:40
2023-11-30T18:30:30
lhoestq
[]
continuation of https://github.com/huggingface/datasets/pull/6431 this should fix the CI in https://github.com/huggingface/datasets/pull/6458 too
true
2,018,850,731
https://api.github.com/repos/huggingface/datasets/issues/6461
https://github.com/huggingface/datasets/pull/6461
6,461
Fix shard retry mechanism in `push_to_hub`
closed
5
2023-11-30T14:57:14
2023-12-01T17:57:39
2023-12-01T17:51:33
mariosasko
[]
When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) error and chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that. Fix https://github.com/huggingface/datasets/issues/6392
true
2,017,433,899
https://api.github.com/repos/huggingface/datasets/issues/6460
https://github.com/huggingface/datasets/issues/6460
6,460
jsonlines files don't load with `load_dataset`
closed
4
2023-11-29T21:20:11
2023-12-29T02:58:29
2023-12-05T13:30:53
serenalotreck
[]
### Describe the bug While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`. ### Steps to reproduce the bug Code: ``` from datasets import load_dataset dset = load_dataset('slotreck/pickle') ``` Traceback: ``` Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 925/925 [00:00<00:00, 3.11MB/s] Downloading and preparing dataset json/slotreck--pickle to /mnt/home/lotrecks/.cache/huggingface/datasets/slotreck___json/slotreck--pickle-0c311f36ed032b04/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96... Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 589k/589k [00:00<00:00, 18.9MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 104k/104k [00:00<00:00, 4.61MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 170k/170k [00:00<00:00, 7.71MB/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 3.77it/s] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 523.92it/s] Generating train split: 0 examples [00:00, ? examples/s]Failed to read file '/mnt/home/lotrecks/.cache/huggingface/datasets/downloads/6ec07bb2f279c9377036af6948532513fa8f48244c672d2644a2d7018ee5c9cb' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0 Traceback (most recent call last): File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables dataset = json.load(f) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 296, in load parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 348, in loads return _default_decoder.decode(s) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/decoder.py", line 340, in decode raise JSONDecodeError("Extra data", s, end) json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 3086) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single for _, table in generator: File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 147, in _generate_tables raise e File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) File "pyarrow/_json.pyx", line 259, in pyarrow._json.read_json File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset storage_options=storage_options, File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare **download_and_prepare_kwargs, File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior For the dataset to be loaded without error. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
false
2,017,029,380
https://api.github.com/repos/huggingface/datasets/issues/6459
https://github.com/huggingface/datasets/pull/6459
6,459
Retrieve cached datasets that were pushed to hub when offline
closed
3
2023-11-29T16:56:15
2024-03-25T13:55:42
2024-03-25T13:55:42
lhoestq
[]
I drafted the logic to retrieve a no-script dataset in the cache. For example it can reload datasets that were pushed to hub if they exist in the cache. example: ```python >>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp") >>> load_dataset("lhoestq/tmp") DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` and later, without connection: ```python >>> load_dataset("lhoestq/tmp") Using the latest cached version of the dataset from /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/*/*/0b3caccda1725efb(last modified on Wed Nov 29 16:50:27 2023) since it couldn't be found locally at lhoestq/tmp. DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` fix https://github.com/huggingface/datasets/issues/3547 ## Implementation details (EDITED) I continued in https://github.com/huggingface/datasets/pull/6493, see the changes there TODO: - [x] tests - [ ] compatible with https://github.com/huggingface/datasets/pull/6458
true
2,016,577,761
https://api.github.com/repos/huggingface/datasets/issues/6458
https://github.com/huggingface/datasets/pull/6458
6,458
Lazy data files resolution
closed
20
2023-11-29T13:18:44
2024-02-08T14:41:35
2024-02-08T14:41:35
lhoestq
[]
Related to discussion at https://github.com/huggingface/datasets/pull/6255 this makes this code run in 2sec instead of >10sec ```python from datasets import load_dataset ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False) ``` For some datasets with many configs and files it can be up to 100x faster. This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts. The data files are only resolved in the builder `__init__`. To do so I added DataFilesPatternsList and DataFilesPatternsDict that have `.resolve()` to return resolved DataFilesList and DataFilesDict
true
2,015,650,563
https://api.github.com/repos/huggingface/datasets/issues/6457
https://github.com/huggingface/datasets/issues/6457
6,457
`TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth'
closed
5
2023-11-29T01:57:36
2023-11-29T15:39:03
2023-11-29T02:02:38
wasertech
[]
### Describe the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Steps to reproduce the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Expected behavior Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Environment info Please see https://github.com/huggingface/huggingface_hub/issues/1872
false
2,015,186,090
https://api.github.com/repos/huggingface/datasets/issues/6456
https://github.com/huggingface/datasets/pull/6456
6,456
Don't require trust_remote_code in inspect_dataset
closed
3
2023-11-28T19:47:07
2023-11-30T10:40:23
2023-11-30T10:34:12
lhoestq
[]
don't require `trust_remote_code` in (deprecated) `inspect_dataset` (it defeats its purpose) (not super important but we might as well keep it until the next major release) this is needed to fix the tests in https://github.com/huggingface/datasets/pull/6448
true
2,013,001,584
https://api.github.com/repos/huggingface/datasets/issues/6454
https://github.com/huggingface/datasets/pull/6454
6,454
Refactor `dill` logic
closed
5
2023-11-27T20:01:25
2023-11-28T16:29:58
2023-11-28T16:29:31
mariosasko
[]
Refactor the `dill` logic to make it easier to maintain (and fix some issues along the way) It makes the following improvements to the serialization API: * consistent order of a `dict`'s keys * support for hashing `torch.compile`-ed modules and functions * deprecates `datasets.fingerprint.hashregister` as the `hashregister`-ed reducers are never invoked anyways (does not support nested data as `pickle`/`dill` do) ~~TODO: optimize hashing of `pa.Table` and `datasets.table.Table`~~ The `pa_array.to_string` approach is faster for large arrays because it outputs the first 10 and last 10 elements (by default). The problem is that this can produce identical hashes for non-identical arrays if their differing elements get ellipsed... Fix https://github.com/huggingface/datasets/issues/6440, fix https://github.com/huggingface/datasets/issues/5839
true
2,011,907,787
https://api.github.com/repos/huggingface/datasets/issues/6453
https://github.com/huggingface/datasets/pull/6453
6,453
Update hub-docs reference
closed
3
2023-11-27T09:57:20
2023-11-27T10:23:44
2023-11-27T10:17:34
mishig25
[]
Follow up to huggingface/huggingface.js#296
true
2,011,632,708
https://api.github.com/repos/huggingface/datasets/issues/6452
https://github.com/huggingface/datasets/pull/6452
6,452
Praveen_repo_pull_req
closed
0
2023-11-27T07:07:50
2023-11-27T09:28:00
2023-11-27T09:28:00
Praveenhh
[]
null
true
2,010,693,912
https://api.github.com/repos/huggingface/datasets/issues/6451
https://github.com/huggingface/datasets/issues/6451
6,451
Unable to read "marsyas/gtzan" data
closed
3
2023-11-25T15:13:17
2023-12-01T12:53:46
2023-11-27T09:36:25
gerald-wrona
[]
Hi, this is my code and the error: ``` from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") ``` [error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt) [audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt) Python 3.11.5 Jupyter Notebook 6.5.4 Windows 10 I'm able to download and work with other datasets, but not this one. For example, both these below work fine: ``` from datasets import load_dataset dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True) minds = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` Thanks for your help https://huggingface.co/datasets/marsyas/gtzan/tree/main
false
2,009,491,386
https://api.github.com/repos/huggingface/datasets/issues/6450
https://github.com/huggingface/datasets/issues/6450
6,450
Support multiple image/audio columns in ImageFolder/AudioFolder
closed
1
2023-11-24T10:34:09
2023-11-28T11:07:17
2023-11-24T17:24:38
severo
[ "duplicate", "enhancement" ]
### Feature request Have a metadata.csv file with multiple columns that point to relative image or audio files. ### Motivation Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. On the same model, AudioFolder allows one column, called `file_name`, pointing to relative audio files. But it's not possible to have two image columns, or to have two audio column, or to have one audio column and one image column. ### Your contribution no specific contribution
false
2,008,617,992
https://api.github.com/repos/huggingface/datasets/issues/6449
https://github.com/huggingface/datasets/pull/6449
6,449
Fix metadata file resolution when inferred pattern is `**`
closed
6
2023-11-23T17:35:02
2023-11-27T10:02:56
2023-11-24T17:13:02
mariosasko
[]
Refetch metadata files in case they were dropped by `filter_extensions` in the previous step. Fix #6442
true
2,008,614,985
https://api.github.com/repos/huggingface/datasets/issues/6448
https://github.com/huggingface/datasets/pull/6448
6,448
Use parquet export if possible
closed
24
2023-11-23T17:31:57
2023-12-01T17:57:17
2023-12-01T17:50:59
lhoestq
[]
The idea is to make this code work for datasets with scripts if they have a Parquet export ```python ds = load_dataset("squad", trust_remote_code=False) ``` And more generally, it means we use the Parquet export whenever it's possible (it's safer and faster than dataset scripts). I also added a `config.USE_PARQUET_EXPORT` variable to use in the datasets-server parquet conversion job - [x] Needs https://github.com/huggingface/datasets/pull/6429 to be merged first cc @severo I use the /parquet and /info endpoints from datasets-server
true
2,008,195,298
https://api.github.com/repos/huggingface/datasets/issues/6447
https://github.com/huggingface/datasets/issues/6447
6,447
Support one dataset loader per config when using YAML
open
0
2023-11-23T13:03:07
2023-11-23T13:03:07
null
severo
[ "enhancement" ]
### Feature request See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1 I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc. ### Motivation It would be more flexible for the users ### Your contribution No specific contribution
false
2,007,092,708
https://api.github.com/repos/huggingface/datasets/issues/6446
https://github.com/huggingface/datasets/issues/6446
6,446
Speech Commands v2 dataset doesn't match AST-v2 config
closed
3
2023-11-22T20:46:36
2023-11-28T14:46:08
2023-11-28T14:46:08
vymao
[]
### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine tune `MIT/ast-finetuned-speech-commands-v2`. ### Steps to reproduce the bug ``` >>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2") >>> model.config.id2label {0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'} >>> dataset = load_dataset("speech_commands", "v0.02", split="test") >>> torch.unique(torch.Tensor(dataset['label'])) tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35.]) ``` If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id to label does not match what is provided by `model.config.id2label`. ### Expected behavior The labels should match completely and there should be the same number of label classes between the model config and the dataset itself. ### Environment info datasets = 2.14.6, transformers = 4.33.3
false
2,006,958,595
https://api.github.com/repos/huggingface/datasets/issues/6445
https://github.com/huggingface/datasets/pull/6445
6,445
Use `filelock` package for file locking
closed
4
2023-11-22T19:04:45
2023-11-23T18:47:30
2023-11-23T18:41:23
mariosasko
[]
Use the `filelock` package instead of `datasets.utils.filelock` for file locking to be consistent with `huggingface_hub` and not to be responsible for improving the `filelock` capabilities πŸ™‚. (Reverts https://github.com/huggingface/datasets/pull/859, but these `INFO` logs are not printed by default (anymore?), so this should be okay)
true
2,006,842,179
https://api.github.com/repos/huggingface/datasets/issues/6444
https://github.com/huggingface/datasets/pull/6444
6,444
Remove `Table.__getstate__` and `Table.__setstate__`
closed
4
2023-11-22T17:55:10
2023-11-23T15:19:43
2023-11-23T15:13:28
LZHgrla
[]
When using distributed training, the code of `os.remove(filename)` may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'` ```python from torch import distributed as dist if dist.get_rank() == 0: dataset = process_dataset(*args, **kwargs) objects = [dataset] else: objects = [None] dist.broadcast_object_list(objects, src=0) dataset = objects[0] ```
true
2,006,568,368
https://api.github.com/repos/huggingface/datasets/issues/6443
https://github.com/huggingface/datasets/issues/6443
6,443
Trouble loading files defined in YAML explicitly
open
6
2023-11-22T15:18:10
2025-06-23T13:46:46
null
severo
[ "bug" ]
Look at https://huggingface.co/datasets/severo/doc-yaml-2 It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration ``` You can select multiple files per split using a list of paths: my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ data/ β”‚ β”œβ”€β”€ abc.csv β”‚ └── def.csv └── holdout/ └── ghi.csv --- configs: - config_name: default data_files: - split: train path: - "data/abc.csv" - "data/def.csv" - split: test path: "holdout/ghi.csv" --- ``` It raises the following error: ``` Error code: ConfigNamesError Exception: FileNotFoundError Message: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, token=hf_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1507, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ```
false
2,006,086,907
https://api.github.com/repos/huggingface/datasets/issues/6442
https://github.com/huggingface/datasets/issues/6442
6,442
Trouble loading image folder with additional features - metadata file ignored
closed
1
2023-11-22T11:01:35
2023-11-24T17:13:03
2023-11-24T17:13:03
linoytsaban
[]
### Describe the bug Loading image folder with a caption column using `load_dataset(<image_folder_path>)` doesn't load the captions. When loading a local image folder with captions using `datasets==2.13.0` ``` from datasets import load_dataset data = load_dataset(<image_folder_path>) data.column_names ``` yields `{'train': ['image', 'prompt']}` but when using `datasets==2.15.0` yeilds `{'train': ['image']}` Putting the images and `metadata.jsonl` file into a nested `train` folder **or** loading with `load_dataset("imagefolder", data_dir=<image_folder_path>)` solves the issue and yields `{'train': ['image', 'prompt']}` ### Steps to reproduce the bug 1. create a folder `<image_folder_path>` that contains images and a metadata file with additional features- e.g. "prompt" 2. run: ``` from datasets import load_dataset data = load_dataset("<image_folder_path>") data.column_names ``` ### Expected behavior `{'train': ['image', 'prompt']}` ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
false
2,004,985,857
https://api.github.com/repos/huggingface/datasets/issues/6441
https://github.com/huggingface/datasets/issues/6441
6,441
Trouble Loading a Gated Dataset For User with Granted Permission
closed
3
2023-11-21T19:24:36
2023-12-13T08:27:16
2023-12-13T08:27:16
e-trop
[]
### Describe the bug I have granted permissions to several users to access a gated huggingface dataset. The users accepted the invite and when trying to load the dataset using their access token they get `FileNotFoundError: Couldn't find a dataset script at .....` . Also when they try to click the url link for the dataset they get a 404 error. ### Steps to reproduce the bug 1. Grant access to gated dataset for specific users 2. Users accept invitation 3. Users login to hugging face hub using cli login 4. Users run load_dataset ### Expected behavior Dataset is loaded normally for users who were granted access to the gated dataset. ### Environment info datasets==2.15.0
false
2,004,509,301
https://api.github.com/repos/huggingface/datasets/issues/6440
https://github.com/huggingface/datasets/issues/6440
6,440
`.map` not hashing under python 3.9
closed
2
2023-11-21T15:14:54
2023-11-28T16:29:33
2023-11-28T16:29:33
changyeli
[]
### Describe the bug The `.map` function cannot hash under python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message: `Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` ### Steps to reproduce the bug ```python def map_to_pred(batch): """ Perform inference on an audio batch Parameters: batch (dict): A dictionary containing audio data and other related information. Returns: dict: The input batch dictionary with added prediction and transcription fields. """ audio = batch['audio'] input_features = processor( audio['array'], sampling_rate=audio['sampling_rate'], return_tensors="pt").input_features input_features = input_features.to('cuda') with torch.no_grad(): predicted_ids = model.generate(input_features) preds = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] batch['prediction'] = processor.tokenizer._normalize(preds) batch["transcription"] = processor.tokenizer._normalize(batch['transcription']) return batch MODEL_CARD = "openai/whisper-small" MODEL_NAME = MODEL_CARD.rsplit('/', maxsplit=1)[-1] model = WhisperForConditionalGeneration.from_pretrained(MODEL_CARD) processor = AutoProcessor.from_pretrained( MODEL_CARD, language="english", task="transcribe") model = torch.compile(model) dt = load_dataset("audiofolder", data_dir=config['DATA']['dataset'], split="test") dt = dt.cast_column("audio", Audio(sampling_rate=16000)) result = coraal_dt.map(map_to_pred, num_proc=16) ``` ### Expected behavior Hashed and cached dataset starts inferencing ### Environment info - `transformers` version: 4.35.0 - Platform: Linux-5.14.0-284.30.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
false
2,002,916,514
https://api.github.com/repos/huggingface/datasets/issues/6439
https://github.com/huggingface/datasets/issues/6439
6,439
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding
open
0
2023-11-20T20:07:23
2023-11-20T20:07:37
null
AntreasAntoniou
[]
### Describe the bug I am working with a dataset I am trying to publish. The path is Antreas/TALI. It's a fairly large dataset, and contains images, video, audio and text. I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers taking more than 7 days to process. With snapshot download it takes 12 hours, and that includes the dataset preparation done using load_dataset and passing the dataset parquet file paths. Find the script I am using below: ```python import multiprocessing as mp import pathlib from typing import Optional import datasets from rich import print from tqdm import tqdm def download_dataset_via_hub( dataset_name: str, dataset_download_path: pathlib.Path, num_download_workers: int = mp.cpu_count(), ): import huggingface_hub as hf_hub download_folder = hf_hub.snapshot_download( repo_id=dataset_name, repo_type="dataset", cache_dir=dataset_download_path, resume_download=True, max_workers=num_download_workers, ignore_patterns=[], ) return pathlib.Path(download_folder) / "data" def load_dataset_via_hub( dataset_download_path: pathlib.Path, num_download_workers: int = mp.cpu_count(), dataset_name: Optional[str] = None, ): from dataclasses import dataclass, field from datasets import ClassLabel, Features, Image, Sequence, Value dataset_path = download_dataset_via_hub( dataset_download_path=dataset_download_path, num_download_workers=num_download_workers, dataset_name=dataset_name, ) # Building a list of file paths for validation set train_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "train" in file.as_posix() ] val_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "val" in file.as_posix() ] test_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "test" in file.as_posix() ] print( f"Found {len(test_files)} files for testing set, {len(train_files)} for training set and {len(val_files)} for validation set" ) data_files = { "test": test_files, "val": val_files, "train": train_files, } features = Features( { "image": Image( decode=True ), # Set `decode=True` if you want to decode the images, otherwise `decode=False` "image_url": Value("string"), "item_idx": Value("int64"), "wit_features": Sequence( { "attribution_passes_lang_id": Value("bool"), "caption_alt_text_description": Value("string"), "caption_reference_description": Value("string"), "caption_title_and_reference_description": Value("string"), "context_page_description": Value("string"), "context_section_description": Value("string"), "hierarchical_section_title": Value("string"), "is_main_image": Value("bool"), "language": Value("string"), "page_changed_recently": Value("bool"), "page_title": Value("string"), "page_url": Value("string"), "section_title": Value("string"), } ), "wit_idx": Value("int64"), "youtube_title_text": Value("string"), "youtube_description_text": Value("string"), "youtube_video_content": Value("binary"), "youtube_video_starting_time": Value("string"), "youtube_subtitle_text": Value("string"), "youtube_video_size": Value("int64"), "youtube_video_file_path": Value("string"), } ) dataset = datasets.load_dataset( "parquet" if dataset_name is None else dataset_name, data_files=data_files, features=features, num_proc=1, cache_dir=dataset_download_path / "cache", ) return dataset if __name__ == "__main__": dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/") dataset = load_dataset_via_hub(dataset_cache, dataset_name="Antreas/TALI")[ "test" ] for sample in tqdm(dataset): print(list(sample.keys())) ``` Also, streaming this dataset has been a very painfully slow process. Streaming the train set takes 15m to start, and streaming the test and val sets takes 3 hours to start! ### Steps to reproduce the bug 1. Run the code I provided to get a sense of how fast snapshot + manual is 2. Run datasets.load_dataset("Antreas/TALI") to get a sense of the speed of that OP. 3. You should now have an appreciation of how long these things take. ### Expected behavior The load dataset function should be at least as fast as the huggingface snapshot download function in terms of downloading dataset files. Not 20 times slower. ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.1
false
2,002,032,804
https://api.github.com/repos/huggingface/datasets/issues/6438
https://github.com/huggingface/datasets/issues/6438
6,438
Support GeoParquet
open
6
2023-11-20T11:54:58
2024-02-07T08:36:51
null
severo
[ "enhancement" ]
### Feature request Support the GeoParquet format ### Motivation GeoParquet (https://geoparquet.org/) is a common format for sharing vectorial geospatial data on the cloud, along with "traditional" data columns. It would be nice to be able to load this format with datasets, and more generally, in the Datasets Hub (see https://huggingface.co/datasets/joshuasundance/govgis_nov2023-slim-spatial/discussions/1). ### Your contribution I would be happy to help work on a PR (but I don't think I can do one on my own). Also, we have to define what we want to support: - load all the columns, but get the "geospatial" column in text-only mode for now - or, fully support the spatial features, maybe taking inspiration from (or depending upon) https://geopandas.org/en/stable/index.html (which itself depends on https://fiona.readthedocs.io/en/stable/, which requires a local install of https://gdal.org/)
false
2,001,272,606
https://api.github.com/repos/huggingface/datasets/issues/6437
https://github.com/huggingface/datasets/issues/6437
6,437
Problem in training iterable dataset
open
5
2023-11-20T03:04:02
2024-05-22T03:14:13
null
21Timothy
[]
### Describe the bug I am using PyTorch DDP (Distributed Data Parallel) to train my model. Since the data is too large to load into memory at once, I am using load_dataset to read the data as an iterable dataset. I have used datasets.distributed.split_dataset_by_node to distribute the dataset. However, I have noticed that this distribution results in different processes having different amounts of data to train on. As a result, when the earliest process finishes training and starts predicting on the test set, other processes are still training, causing the overall training speed to be very slow. ### Steps to reproduce the bug ``` def train(args, model, device, train_loader, optimizer, criterion, epoch, length): model.train() idx_length = 0 for batch_idx, data in enumerate(train_loader): s_time = time.time() X = data['X'] target = data['y'].reshape(-1, 28) X, target = X.to(device), target.to(device) optimizer.zero_grad() output = model(X) loss = criterion(output, target) loss.backward() optimizer.step() idx_length += 1 if batch_idx % args.log_interval == 0: # print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format( # epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(), # 100. * batch_idx * len( # X) * torch.distributed.get_world_size() / length, loss.item())) print('Train Epoch: {} Batch_idx: {} Process: {} [{}/{} ({:.0f}%)]\t'.format( epoch, batch_idx, torch.distributed.get_rank(), batch_idx * len(X), length / torch.distributed.get_world_size(), 100. * batch_idx * len( X) * torch.distributed.get_world_size() / length)) if args.dry_run: break print('Process %s length: %s time: %s' % (torch.distributed.get_rank(), idx_length, datetime.datetime.now())) train_iterable_dataset = load_dataset("parquet", data_files=data_files, split="train", streaming=True) test_iterable_dataset = load_dataset("parquet", data_files=data_files, split="test", streaming=True) train_iterable_dataset = train_iterable_dataset.map(process_fn) test_iterable_dataset = test_iterable_dataset.map(process_fn) train_iterable_dataset = train_iterable_dataset.map(scale) test_iterable_dataset = test_iterable_dataset.map(scale) train_iterable_dataset = datasets.distributed.split_dataset_by_node(train_iterable_dataset, world_size=world_size, rank=local_rank).shuffle(seed=1234) test_iterable_dataset = datasets.distributed.split_dataset_by_node(test_iterable_dataset, world_size=world_size, rank=local_rank).shuffle(seed=1234) print(torch.distributed.get_rank(), train_iterable_dataset.n_shards, test_iterable_dataset.n_shards) train_kwargs = {'batch_size': args.batch_size} test_kwargs = {'batch_size': args.test_batch_size} if use_cuda: cuda_kwargs = {'num_workers': 3,#ngpus_per_node, 'pin_memory': True, 'shuffle': False} train_kwargs.update(cuda_kwargs) test_kwargs.update(cuda_kwargs) train_loader = torch.utils.data.DataLoader(train_iterable_dataset, **train_kwargs, # sampler=torch.utils.data.distributed.DistributedSampler( # train_iterable_dataset, # num_replicas=ngpus_per_node, # rank=0) ) test_loader = torch.utils.data.DataLoader(test_iterable_dataset, **test_kwargs, # sampler=torch.utils.data.distributed.DistributedSampler( # test_iterable_dataset, # num_replicas=ngpus_per_node, # rank=0) ) for epoch in range(1, args.epochs + 1): start_time = time.time() train_iterable_dataset.set_epoch(epoch) test_iterable_dataset.set_epoch(epoch) train(args, model, device, train_loader, optimizer, criterion, epoch, train_len) test(args, model, device, criterion2, test_loader) ``` And here’s the part of output: ``` Train Epoch: 1 Batch_idx: 5000 Process: 0 [320000/4710975.0 (7%)] Train Epoch: 1 Batch_idx: 5000 Process: 1 [320000/4710975.0 (7%)] Train Epoch: 1 Batch_idx: 5000 Process: 2 [320000/4710975.0 (7%)] Train Epoch: 1 Batch_idx: 5862 Process: 3 Data_length: 12 coststime: 0.04095172882080078 Train Epoch: 1 Batch_idx: 5862 Process: 0 Data_length: 3 coststime: 0.0751960277557373 Train Epoch: 1 Batch_idx: 5867 Process: 3 Data_length: 49 coststime: 0.0032558441162109375 Train Epoch: 1 Batch_idx: 5872 Process: 1 Data_length: 2 coststime: 0.022842884063720703 Train Epoch: 1 Batch_idx: 5876 Process: 3 Data_length: 63 coststime: 0.002694845199584961 Process 3 length: 5877 time: 2023-11-17 17:03:26.582317 Train epoch 1 costTime: 241.72063446044922s . Process 3 Start to test. 3 0 tensor(45508.8516, device='cuda:3') 3 100 tensor(45309.0469, device='cuda:3') 3 200 tensor(45675.3047, device='cuda:3') 3 300 tensor(45263.0273, device='cuda:3') Process 3 Reduce metrics. Train Epoch: 2 Batch_idx: 0 Process: 3 [0/4710975.0 (0%)] Train Epoch: 1 Batch_idx: 5882 Process: 1 Data_length: 63 coststime: 0.05185818672180176 Train Epoch: 1 Batch_idx: 5887 Process: 1 Data_length: 12 coststime: 0.006895303726196289 Process 1 length: 5888 time: 2023-11-17 17:20:48.578204 Train epoch 1 costTime: 1285.7279663085938s . Process 1 Start to test. 1 0 tensor(45265.9141, device='cuda:1') ``` ### Expected behavior I'd like to know how to fix this problem. ### Environment info ``` torch==2.0 datasets==2.14.0 ```
false
2,000,844,474
https://api.github.com/repos/huggingface/datasets/issues/6436
https://github.com/huggingface/datasets/issues/6436
6,436
TypeError: <lambda>() takes 0 positional arguments but 1 was given
closed
3
2023-11-19T13:10:20
2025-05-05T18:21:21
2023-11-29T16:28:34
ahmadmustafaanis
[]
### Describe the bug ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-35-7b6becee3685>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import Dataset 9 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.15.0" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 61 import pyarrow.compute as pc 62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi ---> 63 from multiprocess import Pool 64 from requests import HTTPError 65 [/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module> 31 32 import sys ---> 33 from . import context 34 35 # [/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module> 4 5 from . import process ----> 6 from . import reduction 7 8 __all__ = () [/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module> 14 import os 15 try: ---> 16 import dill as pickle 17 except ImportError: 18 import pickle [/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module> 24 25 ---> 26 from ._dill import ( 27 dump, dumps, load, loads, copy, 28 Pickler, Unpickler, register, pickle, pickles, check, [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module> 166 try: 167 from _pyio import open as _open --> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open) 169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open) 170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open) [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs) 154 def get_file_type(*args, **kwargs): 155 open = kwargs.pop("open", __builtin__.open) --> 156 f = open(os.devnull, *args, **kwargs) 157 t = type(f) 158 f.close() [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener) 280 return result 281 encoding = text_encoding(encoding) --> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering) 283 result = text 284 text.mode = mode [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through) 2043 encoding = "utf-8" 2044 else: -> 2045 encoding = locale.getpreferredencoding(False) 2046 2047 if not isinstance(encoding, str): TypeError: <lambda>() takes 0 positional arguments but 1 was given ``` or ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) [<ipython-input-36-652e886d387f>](https://localhost:8080/#) in <cell line: 1>() ----> 1 import datasets 9 frames [/usr/local/lib/python3.10/dist-packages/datasets/__init__.py](https://localhost:8080/#) in <module> 20 __version__ = "2.15.0" 21 ---> 22 from .arrow_dataset import Dataset 23 from .arrow_reader import ReadInstruction 24 from .builder import ArrowBasedBuilder, BeamBasedBuilder, BuilderConfig, DatasetBuilder, GeneratorBasedBuilder [/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in <module> 61 import pyarrow.compute as pc 62 from huggingface_hub import CommitOperationAdd, CommitOperationDelete, DatasetCard, DatasetCardData, HfApi ---> 63 from multiprocess import Pool 64 from requests import HTTPError 65 [/usr/local/lib/python3.10/dist-packages/multiprocess/__init__.py](https://localhost:8080/#) in <module> 31 32 import sys ---> 33 from . import context 34 35 # [/usr/local/lib/python3.10/dist-packages/multiprocess/context.py](https://localhost:8080/#) in <module> 4 5 from . import process ----> 6 from . import reduction 7 8 __all__ = () [/usr/local/lib/python3.10/dist-packages/multiprocess/reduction.py](https://localhost:8080/#) in <module> 14 import os 15 try: ---> 16 import dill as pickle 17 except ImportError: 18 import pickle [/usr/local/lib/python3.10/dist-packages/dill/__init__.py](https://localhost:8080/#) in <module> 24 25 ---> 26 from ._dill import ( 27 dump, dumps, load, loads, copy, 28 Pickler, Unpickler, register, pickle, pickles, check, [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in <module> 166 try: 167 from _pyio import open as _open --> 168 PyTextWrapperType = get_file_type('r', buffering=-1, open=_open) 169 PyBufferedRandomType = get_file_type('r+b', buffering=-1, open=_open) 170 PyBufferedReaderType = get_file_type('rb', buffering=-1, open=_open) [/usr/local/lib/python3.10/dist-packages/dill/_dill.py](https://localhost:8080/#) in get_file_type(*args, **kwargs) 154 def get_file_type(*args, **kwargs): 155 open = kwargs.pop("open", __builtin__.open) --> 156 f = open(os.devnull, *args, **kwargs) 157 t = type(f) 158 f.close() [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in open(file, mode, buffering, encoding, errors, newline, closefd, opener) 280 return result 281 encoding = text_encoding(encoding) --> 282 text = TextIOWrapper(buffer, encoding, errors, newline, line_buffering) 283 result = text 284 text.mode = mode [/usr/lib/python3.10/_pyio.py](https://localhost:8080/#) in __init__(self, buffer, encoding, errors, newline, line_buffering, write_through) 2043 encoding = "utf-8" 2044 else: -> 2045 encoding = locale.getpreferredencoding(False) 2046 2047 if not isinstance(encoding, str): TypeError: <lambda>() takes 0 positional arguments but 1 was given ``` ### Steps to reproduce the bug `import datasets` on colab ### Expected behavior work fine ### Environment info colab `!pip install datasets`
false
2,000,690,513
https://api.github.com/repos/huggingface/datasets/issues/6435
https://github.com/huggingface/datasets/issues/6435
6,435
Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method
closed
3
2023-11-19T04:21:16
2024-01-27T17:14:20
2023-12-04T16:57:43
kopyl
[]
### Describe the bug 1. I ran dataset mapping with `num_proc=6` in it and got this error: `RuntimeError: Cannot re-initialize CUDA in forked subprocess. To use CUDA with multiprocessing, you must use the 'spawn' start method` I can't actually find a way to run multi-GPU dataset mapping. Can you help? ### Steps to reproduce the bug 1. Rund SDXL training with `num_proc=6`: https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py ### Expected behavior Should work well ### Environment info 6x A100 SXM, Linux
false
1,999,554,915
https://api.github.com/repos/huggingface/datasets/issues/6434
https://github.com/huggingface/datasets/pull/6434
6,434
Use `ruff` for formatting
closed
3
2023-11-17T16:53:22
2023-11-21T14:19:21
2023-11-21T14:13:13
mariosasko
[]
Use `ruff` instead of `black` for formatting to be consistent with `transformers` ([PR](https://github.com/huggingface/transformers/pull/27144)) and `huggingface_hub` ([PR 1](https://github.com/huggingface/huggingface_hub/pull/1783) and [PR 2](https://github.com/huggingface/huggingface_hub/pull/1789)).
true
1,999,419,105
https://api.github.com/repos/huggingface/datasets/issues/6433
https://github.com/huggingface/datasets/pull/6433
6,433
Better `tqdm` wrapper
closed
9
2023-11-17T15:45:15
2023-11-22T16:48:18
2023-11-22T16:42:08
mariosasko
[]
This PR aligns the `tqdm` logic with `huggingface_hub` (without introducing breaking changes), as the current one is error-prone. Additionally, it improves the doc page about the `datasets`' utilities, and the handling of local `fsspec` paths in `cached_path`. Fix #6409
true
1,999,258,140
https://api.github.com/repos/huggingface/datasets/issues/6432
https://github.com/huggingface/datasets/issues/6432
6,432
load_dataset does not load all of the data in my input file
open
1
2023-11-17T14:28:50
2023-11-22T17:34:58
null
demongolem-biz2
[]
### Describe the bug I have 127 elements in my input dataset. When I do a len on the dataset after loaded, it is only 124 elements. ### Steps to reproduce the bug train_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.TRAIN) valid_dataset = nlp.load_dataset(data_args.dataset_path, name=data_args.qg_format, split=nlp.Split.VALIDATION) logger.info(len(train_dataset)) logger.info(len(valid_dataset)) Both train and valid input are 127 items. However, they both only load 124 items. The input format is in json. At the end of the day, I am trying to create .pt files. ### Expected behavior I see all 127 elements in my dataset when performing len ### Environment info Python 3.10. CentOS operating system. nlp==0.40, datasets==2.14.5, transformers==4.26.1
false
1,997,202,770
https://api.github.com/repos/huggingface/datasets/issues/6431
https://github.com/huggingface/datasets/pull/6431
6,431
Create DatasetNotFoundError and DataFilesNotFoundError
closed
10
2023-11-16T16:02:55
2023-11-22T15:18:51
2023-11-22T15:12:33
albertvillanova
[]
Create `DatasetNotFoundError` and `DataFilesNotFoundError`. Fix #6397. CC: @severo
true
1,996,723,698
https://api.github.com/repos/huggingface/datasets/issues/6429
https://github.com/huggingface/datasets/pull/6429
6,429
Add trust_remote_code argument
closed
14
2023-11-16T12:12:54
2023-11-28T16:10:39
2023-11-28T16:03:43
lhoestq
[]
Draft about adding `trust_remote_code` to `load_dataset`. ```python ds = load_dataset(..., trust_remote_code=True) # run remote code (current default) ``` It would default to `True` (current behavior) and in the next major release it will prompt the user to check the code before running it (we'll communicate on this before doing it of course). ```python # in the future ds = load_dataset(...) # prompt the user to check the code before running it (future default) ds = load_dataset(..., trust_remote_code=True) # run remote code ds = load_dataset(..., trust_remote_code=False) # disallow remote code ``` Related to https://github.com/huggingface/datasets/issues/6400 Will do a separate PR to use the parquet export when possible
true
1,996,306,394
https://api.github.com/repos/huggingface/datasets/issues/6428
https://github.com/huggingface/datasets/pull/6428
6,428
Set dev version
closed
3
2023-11-16T08:12:55
2023-11-16T08:19:39
2023-11-16T08:13:28
albertvillanova
[]
null
true
1,996,248,605
https://api.github.com/repos/huggingface/datasets/issues/6427
https://github.com/huggingface/datasets/pull/6427
6,427
Release: 2.15.0
closed
4
2023-11-16T07:37:20
2023-11-16T08:12:12
2023-11-16T07:43:05
albertvillanova
[]
null
true
1,995,363,264
https://api.github.com/repos/huggingface/datasets/issues/6426
https://github.com/huggingface/datasets/pull/6426
6,426
More robust temporary directory deletion
closed
7
2023-11-15T19:06:42
2023-12-01T15:37:32
2023-12-01T15:31:19
mariosasko
[]
While fixing the Windows errors in #6362, I noticed that `PermissionError` can still easily be thrown on the session exit by the temporary cache directory's finalizer (we would also have to keep track of intermediate datasets, copies, etc.). ~~Due to the low usage of `datasets` on Windows, this PR takes a simpler approach to the issue than https://github.com/huggingface/datasets/pull/2403 - it tries to delete the temporary cache directory, and if this fails, logs a warning message about using a `delete-temp-cache` CLI command to delete it manually. The problematic references are freed after the session exits, so the CLI command should then succeed.~~ This PR implements `Dataset.__setstate__` to register datasets with temporary cache files for deletion.
true
1,995,269,382
https://api.github.com/repos/huggingface/datasets/issues/6425
https://github.com/huggingface/datasets/pull/6425
6,425
Fix deprecation warning when building conda package
closed
3
2023-11-15T18:00:11
2023-12-13T14:22:30
2023-12-13T14:16:00
albertvillanova
[]
When building/releasing conda package, we get this deprecation warning: ``` /usr/share/miniconda/envs/build-datasets/bin/conda-build:11: DeprecationWarning: conda_build.cli.main_build.main is deprecated and will be removed in 4.0.0. Use `conda build` instead. ``` This PR fixes the deprecation warning by using `conda build` instead.
true
1,995,224,516
https://api.github.com/repos/huggingface/datasets/issues/6424
https://github.com/huggingface/datasets/pull/6424
6,424
[docs] troubleshooting guide
closed
2
2023-11-15T17:28:14
2023-11-30T17:29:55
2023-11-30T17:23:46
MKhalusova
[]
Hi all! This is a PR adding a troubleshooting guide for Datasets docs. I went through the library's GitHub Issues and Forum questions and identified a few issues that are common enough that I think it would be valuable to include them in the troubleshooting guide. These are: - creating a dataset from a folder and not following the required format - authentication issues when using `push_to_hub` - `Too Many Requests` with `push_to_hub` - Pickling issues when using Dataset.from_generator() There's also a section on asking for help. Please let me know if there are other common issues or advice that we can include here.
true
1,994,946,847
https://api.github.com/repos/huggingface/datasets/issues/6423
https://github.com/huggingface/datasets/pull/6423
6,423
Fix conda release by adding pyarrow-hotfix dependency
closed
6
2023-11-15T14:57:12
2023-11-15T17:15:33
2023-11-15T17:09:24
albertvillanova
[]
Fix conda release by adding pyarrow-hotfix dependency. Note that conda release failed in latest 2.14.7 release: https://github.com/huggingface/datasets/actions/runs/6874667214/job/18696761723 ``` Traceback (most recent call last): File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/test_tmp/run_test.py", line 2, in <module> import datasets File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/__init__.py", line 22, in <module> from .arrow_dataset import Dataset File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/arrow_dataset.py", line 67, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/arrow_writer.py", line 27, in <module> from .features import Features, Image, Value File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/features/__init__.py", line 18, in <module> from .features import Array2D, Array3D, Array4D, Array5D, ClassLabel, Features, Sequence, Value File "/usr/share/miniconda/envs/build-datasets/conda-bld/datasets_1700036460222/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.12/site-packages/datasets/features/features.py", line 34, in <module> import pyarrow_hotfix # noqa: F401 # to fix vulnerability on pyarrow<14.0.1 ^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'pyarrow_hotfix' ```
true
1,994,579,267
https://api.github.com/repos/huggingface/datasets/issues/6422
https://github.com/huggingface/datasets/issues/6422
6,422
Allow to choose the `writer_batch_size` when using `save_to_disk`
open
2
2023-11-15T11:18:34
2023-11-16T10:00:21
null
NathanGodey
[ "enhancement" ]
### Feature request Add an argument in `save_to_disk` regarding batch size, which would be passed to `shard` and other methods. ### Motivation The `Dataset.save_to_disk` method currently calls `shard` without passing a `writer_batch_size` argument, thus implicitly using the default value (1000). This can result in RAM saturation when using a lot of processes on long text sequences or other modalities, or for specific IO configs. ### Your contribution I would be glad to submit a PR, as long as it does not imply extensive tests refactoring.
false
1,994,451,553
https://api.github.com/repos/huggingface/datasets/issues/6421
https://github.com/huggingface/datasets/pull/6421
6,421
Add pyarrow-hotfix to release docs
closed
3
2023-11-15T10:06:44
2023-11-15T13:49:55
2023-11-15T13:38:22
albertvillanova
[ "maintenance" ]
Add `pyarrow-hotfix` to release docs.
true
1,994,278,903
https://api.github.com/repos/huggingface/datasets/issues/6420
https://github.com/huggingface/datasets/pull/6420
6,420
Set dev version
closed
3
2023-11-15T08:22:19
2023-11-15T08:33:36
2023-11-15T08:22:33
albertvillanova
[]
null
true
1,994,257,873
https://api.github.com/repos/huggingface/datasets/issues/6419
https://github.com/huggingface/datasets/pull/6419
6,419
Release: 2.14.7
closed
6
2023-11-15T08:07:37
2023-11-15T17:35:30
2023-11-15T08:12:59
albertvillanova
[]
Release 2.14.7.
true
1,993,224,629
https://api.github.com/repos/huggingface/datasets/issues/6418
https://github.com/huggingface/datasets/pull/6418
6,418
Remove token value from warnings
closed
3
2023-11-14T17:34:06
2023-11-14T22:26:04
2023-11-14T22:19:45
mariosasko
[]
Fix #6412
true
1,993,149,416
https://api.github.com/repos/huggingface/datasets/issues/6417
https://github.com/huggingface/datasets/issues/6417
6,417
Bug: LayoutLMv3 finetuning on FUNSD Notebook; Arrow Error
closed
3
2023-11-14T16:53:20
2023-11-16T20:23:41
2023-11-16T20:23:41
Davo00
[]
### Describe the bug Arrow issues when running the example Notebook laptop locally on Mac with M1. Works on Google Collab. **Notebook**: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/LayoutLMv3/Fine_tune_LayoutLMv3_on_FUNSD_(HuggingFace_Trainer).ipynb **Error**: `ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent.` **Caused by**: ``` # we need to define custom features for `set_format` (used later on) to work properly features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')), }) ``` ### Steps to reproduce the bug Run the notebook provided, locally. If possible also on M1. ### Expected behavior The cell where features are mapped to Array2D and Array3D should work without any issues. ### Environment info Tried with Python 3.9 and 3.10 conda envs. Running Mac M1. `pip show datasets` > Name: datasets Version: 2.14.6 Summary: HuggingFace community-driven open-source library of datasets `pip list` > Package Version > ------------------------- ------------ > accelerate 0.24.1 > aiohttp 3.8.6 > aiosignal 1.3.1 > anyio 3.5.0 > appnope 0.1.2 > argon2-cffi 21.3.0 > argon2-cffi-bindings 21.2.0 > asttokens 2.0.5 > async-timeout 4.0.3 > attrs 23.1.0 > backcall 0.2.0 > beautifulsoup4 4.12.2 > bleach 4.1.0 > certifi 2023.7.22 > cffi 1.15.1 > charset-normalizer 3.3.2 > comm 0.1.2 > datasets 2.14.6 > debugpy 1.6.7 > decorator 5.1.1 > defusedxml 0.7.1 > dill 0.3.7 > entrypoints 0.4 > exceptiongroup 1.0.4 > executing 0.8.3 > fastjsonschema 2.16.2 > filelock 3.13.1 > frozenlist 1.4.0 > fsspec 2023.10.0 > huggingface-hub 0.17.3 > idna 3.4 > importlib-metadata 6.0.0 > IProgress 0.4 > ipykernel 6.25.0 > ipython 8.15.0 > ipython-genutils 0.2.0 > jedi 0.18.1 > Jinja2 3.1.2 > joblib 1.3.2 > jsonschema 4.19.2 > jsonschema-specifications 2023.7.1 > jupyter_client 7.4.9 > jupyter_core 5.5.0 > jupyter-server 1.23.4 > jupyterlab-pygments 0.1.2 > MarkupSafe 2.1.1 > matplotlib-inline 0.1.6 > mistune 2.0.4 > mpmath 1.3.0 > multidict 6.0.4 > multiprocess 0.70.15 > nbclassic 1.0.0 > nbclient 0.8.0 > nbconvert 7.10.0 > nbformat 5.9.2 > nest-asyncio 1.5.6 > networkx 3.2.1 > notebook 6.5.4 > notebook_shim 0.2.3 > numpy 1.26.1 > packaging 23.1 > pandas 2.1.3 > pandocfilters 1.5.0 > parso 0.8.3 > pexpect 4.8.0 > pickleshare 0.7.5 > Pillow 10.1.0 > pip 23.3 > platformdirs 3.10.0 > prometheus-client 0.14.1 > prompt-toolkit 3.0.36 > psutil 5.9.0 > ptyprocess 0.7.0 > pure-eval 0.2.2 > pyarrow 14.0.1 > pycparser 2.21 > Pygments 2.15.1 > python-dateutil 2.8.2 > pytz 2023.3.post1 > PyYAML 6.0.1 > pyzmq 23.2.0 > referencing 0.30.2 > regex 2023.10.3 > requests 2.31.0 > rpds-py 0.10.6 > safetensors 0.4.0 > scikit-learn 1.3.2 > scipy 1.11.3 > Send2Trash 1.8.2 > seqeval 1.2.2 > setuptools 68.0.0 > six 1.16.0 > sniffio 1.2.0 > soupsieve 2.5 > stack-data 0.2.0 > sympy 1.12 > terminado 0.17.1 > threadpoolctl 3.2.0 > tinycss2 1.2.1 > tokenizers 0.14.1 > torch 2.1.0 > tornado 6.3.3 > tqdm 4.66.1 > traitlets 5.7.1 > transformers 4.36.0.dev0 > typing_extensions 4.7.1 > tzdata 2023.3 > urllib3 2.0.7 > wcwidth 0.2.5 > webencodings 0.5.1 > websocket-client 0.58.0 > wheel 0.41.2 > xxhash 3.4.1 > yarl 1.9.2 > zipp 3.11.0
false
1,992,954,723
https://api.github.com/repos/huggingface/datasets/issues/6416
https://github.com/huggingface/datasets/pull/6416
6,416
Rename audio_classificiation.py to audio_classification.py
closed
4
2023-11-14T15:15:29
2023-11-15T11:59:32
2023-11-15T11:53:20
carlthome
[]
null
true
1,992,917,248
https://api.github.com/repos/huggingface/datasets/issues/6415
https://github.com/huggingface/datasets/pull/6415
6,415
Fix multi gpu map example
closed
23
2023-11-14T14:57:18
2024-01-31T00:49:15
2023-11-22T15:42:19
lhoestq
[]
- use `orch.cuda.set_device` instead of `CUDA_VISIBLE_DEVICES ` - add `if __name__ == "__main__"` fix https://github.com/huggingface/datasets/issues/6186
true
1,992,482,491
https://api.github.com/repos/huggingface/datasets/issues/6414
https://github.com/huggingface/datasets/pull/6414
6,414
Set `usedforsecurity=False` in hashlib methods (FIPS compliance)
closed
10
2023-11-14T10:47:09
2023-11-17T14:23:20
2023-11-17T14:17:00
Wauplin
[]
Related to https://github.com/huggingface/transformers/issues/27034 and https://github.com/huggingface/huggingface_hub/pull/1782. **TL;DR:** `hashlib` is not a secure library for cryptography-related stuff. We are only using `hashlib` for non-security-related purposes in `datasets` so it's fine. From Python 3.9 we set can `usedforsecurity=False` in any `hashlib` method which is mandatory for companies that forbid the use of `hashlib` for security purposes. This PR fixes that. **Note:** before merging this we need to release a new tokenizers version that would allow the newest `huggingface_hub` version (see https://github.com/huggingface/tokenizers/pull/1385). Otherwise it might create friction to users that want to install `datasets` + `tokenizers` at the same time.
true
1,992,401,594
https://api.github.com/repos/huggingface/datasets/issues/6412
https://github.com/huggingface/datasets/issues/6412
6,412
User token is printed out!
closed
1
2023-11-14T10:01:34
2023-11-14T22:19:46
2023-11-14T22:19:46
mohsen-goodarzi
[]
This line prints user token on command line! Is it safe? https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091
false
1,992,386,630
https://api.github.com/repos/huggingface/datasets/issues/6411
https://github.com/huggingface/datasets/pull/6411
6,411
Fix dependency conflict within CI build documentation
closed
1
2023-11-14T09:52:51
2023-11-14T10:05:59
2023-11-14T10:05:35
albertvillanova
[]
Manually fix dependency conflict on `typing-extensions` version originated by `apache-beam` + `pydantic` (now a dependency of `huggingface-hub`). This is a temporary hot fix of our CI build documentation until we stop using `apache-beam`. Fix #6406.
true
1,992,100,209
https://api.github.com/repos/huggingface/datasets/issues/6410
https://github.com/huggingface/datasets/issues/6410
6,410
Datasets does not load HuggingFace Repository properly
open
2
2023-11-14T06:50:49
2023-11-16T06:54:36
null
MikeDoes
[]
### Describe the bug Dear Datasets team, We just have published a dataset on Huggingface: https://huggingface.co/ai4privacy However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me know and we would be more than happy to adapt the structure of the repository or meta data so it works easier: ```python from datasets import load_dataset dataset = load_dataset("ai4privacy/pii-masking-200k") ``` ``` Downloading readme: 100% 11.8k/11.8k [00:00<00:00, 512kB/s] Downloading data files: 100% 1/1 [00:11<00:00, 11.16s/it] Downloading data: 100% 64.3M/64.3M [00:02<00:00, 32.9MB/s] Downloading data: 100% 113M/113M [00:03<00:00, 35.0MB/s] Downloading data: 100% 97.7M/97.7M [00:02<00:00, 46.1MB/s] Downloading data: 100% 90.8M/90.8M [00:02<00:00, 44.9MB/s] Downloading data: 100% 7.63k/7.63k [00:00<00:00, 41.0kB/s] Downloading data: 100% 1.03k/1.03k [00:00<00:00, 9.44kB/s] Extracting data files: 100% 1/1 [00:00<00:00, 29.26it/s] Generating train split: 209261/0 [00:05<00:00, 41201.25 examples/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1939 ) -> 1940 writer.write_table(table) 1941 num_examples_progress_update += len(table) 8 frames [/usr/local/lib/python3.10/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_table(self, pa_table, writer_batch_size) 571 pa_table = pa_table.combine_chunks() --> 572 pa_table = table_cast(pa_table, self._schema) 573 if self.embed_local_files: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in table_cast(table, schema) 2327 if table.schema != schema: -> 2328 return cast_table_to_schema(table, schema) 2329 elif table.schema.metadata != schema.metadata: [/usr/local/lib/python3.10/dist-packages/datasets/table.py](https://localhost:8080/#) in cast_table_to_schema(table, schema) 2285 if sorted(table.column_names) != sorted(features): -> 2286 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2287 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] ValueError: Couldn't cast JOBTYPE: int64 PHONEIMEI: int64 ACCOUNTNAME: int64 VEHICLEVIN: int64 GENDER: int64 CURRENCYCODE: int64 CREDITCARDISSUER: int64 JOBTITLE: int64 SEX: int64 CURRENCYSYMBOL: int64 IP: int64 EYECOLOR: int64 MASKEDNUMBER: int64 SECONDARYADDRESS: int64 JOBAREA: int64 ACCOUNTNUMBER: int64 language: string BITCOINADDRESS: int64 MAC: int64 SSN: int64 EMAIL: int64 ETHEREUMADDRESS: int64 DOB: int64 VEHICLEVRM: int64 IPV6: int64 AMOUNT: int64 URL: int64 PHONENUMBER: int64 PIN: int64 TIME: int64 CREDITCARDNUMBER: int64 FIRSTNAME: int64 IBAN: int64 BIC: int64 COUNTY: int64 STATE: int64 LASTNAME: int64 ZIPCODE: int64 HEIGHT: int64 ORDINALDIRECTION: int64 MIDDLENAME: int64 STREET: int64 USERNAME: int64 CURRENCY: int64 PREFIX: int64 USERAGENT: int64 CURRENCYNAME: int64 LITECOINADDRESS: int64 CREDITCARDCVV: int64 AGE: int64 CITY: int64 PASSWORD: int64 BUILDINGNUMBER: int64 IPV4: int64 NEARBYGPSCOORDINATE: int64 DATE: int64 COMPANYNAME: int64 to {'masked_text': Value(dtype='string', id=None), 'unmasked_text': Value(dtype='string', id=None), 'privacy_mask': Value(dtype='string', id=None), 'span_labels': Value(dtype='string', id=None), 'bio_labels': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'tokenised_text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)} because column names don't match The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) [<ipython-input-2-f1c6811e9c83>](https://localhost:8080/#) in <cell line: 3>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("ai4privacy/pii-masking-200k") [/usr/local/lib/python3.10/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2151 2152 # Download and prepare data -> 2153 builder_instance.download_and_prepare( 2154 download_config=download_config, 2155 download_mode=download_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 952 if num_proc is not None: 953 prepare_split_kwargs["num_proc"] = num_proc --> 954 self._download_and_prepare( 955 dl_manager=dl_manager, 956 verification_mode=verification_mode, [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1047 try: 1048 # Prepare split will record examples associated to the split -> 1049 self._prepare_split(split_generator, **prepare_split_kwargs) 1050 except OSError as e: 1051 raise OSError( [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1811 job_id = 0 1812 with pbar: -> 1813 for job_id, done, content in self._prepare_split_single( 1814 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1815 ): [/usr/local/lib/python3.10/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1956 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1957 e = e.__context__ -> 1958 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1959 1960 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` Thank you and have a great day ahead ### Steps to reproduce the bug Open Google Colab Notebook: Run command: !pip3 install datasets Run code: from datasets import load_dataset dataset = load_dataset("ai4privacy/pii-masking-200k") ### Expected behavior Download the dataset successfully from HuggingFace to the notebook so that we can start working with it ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - Huggingface_hub version: 0.19.1 - PyArrow version: 9.0.0 - Pandas version: 1.5.3
false
1,991,960,865
https://api.github.com/repos/huggingface/datasets/issues/6409
https://github.com/huggingface/datasets/issues/6409
6,409
using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception
closed
0
2023-11-14T04:21:01
2023-11-22T16:42:09
2023-11-22T16:42:09
neiblegy
[]
### Describe the bug i'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and i disable_progress_bar() to disable bar. there will be an exception as follows: `AttributeError: 'function' object has no attribute 'close' Exception ignored in: <function TqdmCallback.__del__ at 0x7fa8683d84c0> Traceback (most recent call last): File "/home/protoss.gao/.local/lib/python3.9/site-packages/fsspec/callbacks.py", line 233, in __del__ self.tqdm.close()` i check your source code in datasets/utils/file_utils.py:348 you define TqdmCallback derive from fsspec.callbacks.TqdmCallback but in the newest fsspec code [https://github.com/fsspec/filesystem_spec/blob/master/fsspec/callbacks.py](url) , line 146, in this case, _DEFAULT_CALLBACK will take effect, but in line 234, it calls "close()" function which _DEFAULT_CALLBACK don't have such thing. so i think the class "TqdmCallback" in datasets/utils/file_utils.py may override "__del__" function or report this bug to fsspec. ### Steps to reproduce the bug as i said ### Expected behavior no exception ### Environment info datasets: 2.14.4 python: 3.9 platform: x86_64
false
1,991,902,972
https://api.github.com/repos/huggingface/datasets/issues/6408
https://github.com/huggingface/datasets/issues/6408
6,408
`IterableDataset` lost but not keep columns when map function adding columns with names in `remove_columns`
open
0
2023-11-14T03:12:08
2023-11-16T06:24:10
null
shmily326
[]
### Describe the bug IterableDataset lost but not keep columns when map function adding columns with names in remove_columns, Dataset not. May be related to the code below: https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756 ### Steps to reproduce the bug ```python dataset: IterableDataset = load_dataset("Anthropic/hh-rlhf", streaming=True, split="train") column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected'] # map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx} dataset = dataset.map(map_fn, batched=True, remove_columns=column_names) next(iter(dataset)) # output # {'prompt': 'xxx, 'history': xxx} ``` ```python # when load_dataset with streaming=False, the column_names are kept: dataset: Dataset = load_dataset("Anthropic/hh-rlhf", streaming=False, split="train") column_names = list(next(iter(dataset)).keys()) # ['chosen', 'rejected'] # map_fn will return dict {"chosen": xxx, "rejected": xxx, "prompt": xxx, "history": xxxx} dataset = dataset.map(map_fn, batched=True, remove_columns=column_names) next(iter(dataset)) # output # {'prompt': 'xxx, 'history': xxx, "chosen": xxx, "rejected": xxx} ``` ### Expected behavior IterableDataset keep columns when map function adding columns with names in remove_columns ### Environment info datasets==2.14.6
false
1,991,514,079
https://api.github.com/repos/huggingface/datasets/issues/6407
https://github.com/huggingface/datasets/issues/6407
6,407
Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object"
open
1
2023-11-13T21:27:43
2024-07-30T12:35:09
null
eawer
[]
### Describe the bug I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bucket for obvious reasons, but I'll try to give all possible outputs. ### Steps to reproduce the bug ```python import s3fs from datasets import load_dataset from aiobotocore.session import get_session DATA_PATH = "s3://bucket_name/path/validation.parquet" fs = s3fs.S3FileSystem(session=get_session()) ``` `fs.stat` returns the data, so we can say that fs is working and we have all permissions ```python fs.stat(DATA_PATH) # Returns: # {'ETag': '"123123a-19"', # 'LastModified': datetime.datetime(2023, 11, 1, 10, 16, 57, tzinfo=tzutc()), # 'size': 312237170, # 'name': 'bucket_name/path/validation.parquet', # 'type': 'file', # 'StorageClass': 'STANDARD', # 'VersionId': 'Abc.HtmsC9h.as', # 'ContentType': 'binary/octet-stream'} ``` ```python fs.storage_options # Returns: # {'session': <aiobotocore.session.AioSession at 0x7f9193fa53c0>} ``` ```python ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options) ``` <details> <summary>Returns such error (expandable)</summary> ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[88], line 1 ----> 1 ds = load_dataset("parquet", data_files={"train": DATA_PATH}, storage_options=fs.storage_options) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/load.py:2153, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, token, use_auth_token, task, streaming, num_proc, storage_options, **config_kwargs) 2150 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 2152 # Download and prepare data -> 2153 builder_instance.download_and_prepare( 2154 download_config=download_config, 2155 download_mode=download_mode, 2156 verification_mode=verification_mode, 2157 try_from_hf_gcs=try_from_hf_gcs, 2158 num_proc=num_proc, 2159 storage_options=storage_options, 2160 ) 2162 # Build dataset for splits 2163 keep_in_memory = ( 2164 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 2165 ) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:954, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 952 if num_proc is not None: 953 prepare_split_kwargs["num_proc"] = num_proc --> 954 self._download_and_prepare( 955 dl_manager=dl_manager, 956 verification_mode=verification_mode, 957 **prepare_split_kwargs, 958 **download_and_prepare_kwargs, 959 ) 960 # Sync info 961 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/builder.py:1027, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 1025 split_dict = SplitDict(dataset_name=self.dataset_name) 1026 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) -> 1027 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 1029 # Checksums verification 1030 if verification_mode == VerificationMode.ALL_CHECKS and dl_manager.record_checksums: File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py:34, in Parquet._split_generators(self, dl_manager) 32 if not self.config.data_files: 33 raise ValueError(f"At least one data file must be specified, but got data_files={self.config.data_files}") ---> 34 data_files = dl_manager.download_and_extract(self.config.data_files) 35 if isinstance(data_files, (str, list, tuple)): 36 files = data_files File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:565, in DownloadManager.download_and_extract(self, url_or_urls) 549 def download_and_extract(self, url_or_urls): 550 """Download and extract given `url_or_urls`. 551 552 Is roughly equivalent to: (...) 563 extracted_path(s): `str`, extracted paths of given URL(s). 564 """ --> 565 return self.extract(self.download(url_or_urls)) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_manager.py:420, in DownloadManager.download(self, url_or_urls) 401 def download(self, url_or_urls): 402 """Download given URL(s). 403 404 By default, only one process is used for download. Pass customized `download_config.num_proc` to change this behavior. (...) 418 ``` 419 """ --> 420 download_config = self.download_config.copy() 421 download_config.extract_compressed_file = False 422 if download_config.download_desc is None: File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in DownloadConfig.copy(self) 93 def copy(self) -> "DownloadConfig": ---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) File ~/miniconda3/envs/test-env/lib/python3.10/site-packages/datasets/download/download_config.py:94, in <dictcomp>(.0) 93 def copy(self) -> "DownloadConfig": ---> 94 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()}) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... skipping similar frames: _deepcopy_dict at line 231 (2 times), deepcopy at line 146 (2 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... skipping similar frames: deepcopy at line 146 (1 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:206, in _deepcopy_list(x, memo, deepcopy) 204 append = y.append 205 for a in x: --> 206 append(deepcopy(a, memo)) 207 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:238, in _deepcopy_method(x, memo) 237 def _deepcopy_method(x, memo): # Copy instance methods --> 238 return type(x)(x.__func__, deepcopy(x.__self__, memo)) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... skipping similar frames: _deepcopy_dict at line 231 (3 times), deepcopy at line 146 (3 times), deepcopy at line 172 (3 times), _reconstruct at line 271 (2 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) [... skipping similar frames: _deepcopy_dict at line 231 (1 times), deepcopy at line 146 (1 times)] File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:265, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 263 if deep and args: 264 args = (deepcopy(arg, memo) for arg in args) --> 265 y = func(*args) 266 if deep: 267 memo[id(x)] = y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:264, in <genexpr>(.0) 262 deep = memo is not None 263 if deep and args: --> 264 args = (deepcopy(arg, memo) for arg in args) 265 y = func(*args) 266 if deep: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:172, in deepcopy(x, memo, _nil) 170 y = x 171 else: --> 172 y = _reconstruct(x, memo, *rv) 174 # If is its own copy, don't memoize. 175 if y is not x: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:271, in _reconstruct(x, memo, func, args, state, listiter, dictiter, deepcopy) 269 if state is not None: 270 if deep: --> 271 state = deepcopy(state, memo) 272 if hasattr(y, '__setstate__'): 273 y.__setstate__(state) File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in _deepcopy_tuple(x, memo, deepcopy) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:211, in <listcomp>(.0) 210 def _deepcopy_tuple(x, memo, deepcopy=deepcopy): --> 211 y = [deepcopy(a, memo) for a in x] 212 # We're not going to put the tuple in the memo, but it's still important we 213 # check for it, in case the tuple contains recursive mutable structures. 214 try: File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:146, in deepcopy(x, memo, _nil) 144 copier = _deepcopy_dispatch.get(cls) 145 if copier is not None: --> 146 y = copier(x, memo) 147 else: 148 if issubclass(cls, type): File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:231, in _deepcopy_dict(x, memo, deepcopy) 229 memo[id(x)] = y 230 for key, value in x.items(): --> 231 y[deepcopy(key, memo)] = deepcopy(value, memo) 232 return y File ~/miniconda3/envs/test-env/lib/python3.10/copy.py:161, in deepcopy(x, memo, _nil) 159 reductor = getattr(x, "__reduce_ex__", None) 160 if reductor is not None: --> 161 rv = reductor(4) 162 else: 163 reductor = getattr(x, "__reduce__", None) TypeError: cannot pickle '_contextvars.Context' object ``` </details> ### Expected behavior If I choose to load the file from the public bucket with `anon=True` passed - everything works, so I expected loading from the private bucket to work as well ### Environment info - `datasets` version: 2.14.6 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.13 - Huggingface_hub version: 0.19.1 - PyArrow version: 14.0.1 - Pandas version: 1.5.3 - s3fs version: 2023.10.0 - fsspec version: 2023.10.0 - aiobotocore version: 2.7.0
false
1,990,469,045
https://api.github.com/repos/huggingface/datasets/issues/6406
https://github.com/huggingface/datasets/issues/6406
6,406
CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
closed
0
2023-11-13T11:36:10
2023-11-14T10:05:36
2023-11-14T10:05:36
albertvillanova
[]
Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390 ``` ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' ```
false
1,990,358,743
https://api.github.com/repos/huggingface/datasets/issues/6405
https://github.com/huggingface/datasets/issues/6405
6,405
ConfigNamesError on a simple CSV file
closed
3
2023-11-13T10:28:29
2023-11-13T20:01:24
2023-11-13T20:01:24
severo
[ "bug" ]
See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1 ``` Error code: ConfigNamesError Exception: TypeError Message: __init__() missing 1 required positional argument: 'dtype' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, token=hf_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1512, in dataset_module_factory raise e1 from None File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1489, in dataset_module_factory return HubDatasetModuleFactoryWithoutScript( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1039, in get_module dataset_infos = DatasetInfosDict.from_dataset_card_data(dataset_card_data) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 468, in from_dataset_card_data dataset_info = DatasetInfo._from_yaml_dict(dataset_card_data["dataset_info"]) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/info.py", line 399, in _from_yaml_dict yaml_data["features"] = Features._from_yaml_list(yaml_data["features"]) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1838, in _from_yaml_list return cls.from_dict(from_yaml_inner(yaml_data)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1690, in from_dict obj = generate_from_dict(dic) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in generate_from_dict return {key: generate_from_dict(value) for key, value in obj.items()} File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1345, in <dictcomp> return {key: generate_from_dict(value) for key, value in obj.items()} File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1353, in generate_from_dict return class_type(**{k: v for k, v in obj.items() if k in field_names}) TypeError: __init__() missing 1 required positional argument: 'dtype' ``` This is the CSV file: https://huggingface.co/datasets/Nguyendo1999/mmath/blob/dbcdd7c2c6fc447f852ec136a7532292802bb46f/math_train.csv
false
1,990,211,901
https://api.github.com/repos/huggingface/datasets/issues/6404
https://github.com/huggingface/datasets/pull/6404
6,404
Support pyarrow 14.0.1 and fix vulnerability CVE-2023-47248
closed
15
2023-11-13T09:15:39
2023-11-14T10:29:48
2023-11-14T10:23:29
albertvillanova
[]
Support `pyarrow` 14.0.1 and fix vulnerability [CVE-2023-47248](https://github.com/advisories/GHSA-5wvp-7f3h-6wmm). Fix #6396.
true
1,990,098,817
https://api.github.com/repos/huggingface/datasets/issues/6403
https://github.com/huggingface/datasets/issues/6403
6,403
Cannot import datasets on google colab (python 3.10.12)
closed
2
2023-11-13T08:14:43
2023-11-16T05:04:22
2023-11-16T05:04:21
nabilaannisa
[]
### Describe the bug I'm trying A full colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but i got this type of error when importing datasets on my google colab (python version is 3.10.12) ![image](https://github.com/huggingface/datasets/assets/15389235/6f7758a2-681d-4436-87d0-5e557838e368) I found the same problem that have been solved in [#3326 ] but it seem still error on the google colab. I can't try on my local using jupyter notebook because of my laptop resource doesn't fulfill the requirements. Please can anyone help me solve this problem. Thank you πŸ˜… ### Steps to reproduce the bug Error: ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) [<ipython-input-8-b6e092f83978>](https://localhost:8080/#) in <cell line: 1>() ----> 1 from datasets import load_dataset 2 3 # Print all the available datasets 4 from huggingface_hub import list_datasets 5 print([dataset.id for dataset in list_datasets()]) 6 frames [/usr/lib/python3.10/functools.py](https://localhost:8080/#) in update_wrapper(wrapper, wrapped, assigned, updated) 59 # Issue #17482: set __wrapped__ last so we don't inadvertently copy it 60 # from the wrapped function when updating __dict__ ---> 61 wrapper.__wrapped__ = wrapped 62 # Return the wrapper so this can be used as a decorator via partial() 63 return wrapper AttributeError: readonly attribute ``` ### Expected behavior Run success on Google Colab (free) ### Environment info Windows 11 x64, Google Colab free
false
1,989,094,542
https://api.github.com/repos/huggingface/datasets/issues/6402
https://github.com/huggingface/datasets/pull/6402
6,402
Update torch_formatter.py
closed
2
2023-11-11T19:40:41
2024-03-15T11:31:53
2024-03-15T11:25:37
varunneal
[]
Ensure PyTorch images are converted to (C, H, W) instead of (H, W, C). See #6394 for motivation.
true
1,988,710,061
https://api.github.com/repos/huggingface/datasets/issues/6401
https://github.com/huggingface/datasets/issues/6401
6,401
dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") not working
closed
2
2023-11-11T04:09:07
2023-11-20T17:45:20
2023-11-20T17:45:20
userbox020
[]
### Describe the bug ``` (datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 360/360 [00:00<00:00, 2.16MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 65.1M/65.1M [00:19<00:00, 3.38MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6.35k/6.35k [00:00<00:00, 20.7kB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7.29M/7.29M [00:01<00:00, 3.99MB/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:21<00:00, 7.14s/it] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1624.23it/s] Generating train split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 314294/314294 [00:00<00:00, 668186.58 examples/s] Generating validation split: 120 examples [00:00, 100422.28 examples/s] Generating test split: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 34922/34922 [00:00<00:00, 754683.41 examples/s] Traceback (most recent call last): File "/media/10TB_HHD/_LLM_DATASETS/dataset.py", line 3, in <module> dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/load.py", line 2153, in load_dataset builder_instance.download_and_prepare( File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 954, in download_and_prepare self._download_and_prepare( File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/builder.py", line 1067, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/mruserbox/miniconda3/envs/datasets/lib/python3.10/site-packages/datasets/utils/info_utils.py", line 93, in verify_splits raise UnexpectedSplits(str(set(recorded_splits) - set(expected_splits))) datasets.utils.info_utils.UnexpectedSplits: {'validation'} ``` ### Steps to reproduce the bug Name: `dataset.py` Code: ``` from datasets import load_dataset dataset = load_dataset("Hyperspace-Technologies/scp-wiki-text") ``` ### Expected behavior Run without errors ### Environment info ``` name: datasets channels: - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=5.1=1_gnu - bzip2=1.0.8=h7b6447c_0 - ca-certificates=2023.08.22=h06a4308_0 - ld_impl_linux-64=2.38=h1181459_1 - libffi=3.4.4=h6a678d5_0 - libgcc-ng=11.2.0=h1234567_1 - libgomp=11.2.0=h1234567_1 - libstdcxx-ng=11.2.0=h1234567_1 - libuuid=1.41.5=h5eee18b_0 - ncurses=6.4=h6a678d5_0 - openssl=3.0.12=h7f8727e_0 - python=3.10.13=h955ad1f_0 - readline=8.2=h5eee18b_0 - setuptools=68.0.0=py310h06a4308_0 - sqlite=3.41.2=h5eee18b_0 - tk=8.6.12=h1ccaba5_0 - wheel=0.41.2=py310h06a4308_0 - xz=5.4.2=h5eee18b_0 - zlib=1.2.13=h5eee18b_0 - pip: - aiohttp==3.8.6 - aiosignal==1.3.1 - async-timeout==4.0.3 - attrs==23.1.0 - certifi==2023.7.22 - charset-normalizer==3.3.2 - click==8.1.7 - datasets==2.14.6 - dill==0.3.7 - filelock==3.13.1 - frozenlist==1.4.0 - fsspec==2023.10.0 - huggingface-hub==0.19.0 - idna==3.4 - multidict==6.0.4 - multiprocess==0.70.15 - numpy==1.26.1 - openai==0.27.8 - packaging==23.2 - pandas==2.1.3 - pip==23.3.1 - platformdirs==4.0.0 - pyarrow==14.0.1 - python-dateutil==2.8.2 - pytz==2023.3.post1 - pyyaml==6.0.1 - requests==2.31.0 - six==1.16.0 - tomli==2.0.1 - tqdm==4.66.1 - typer==0.9.0 - typing-extensions==4.8.0 - tzdata==2023.3 - urllib3==2.0.7 - xxhash==3.4.1 - yarl==1.9.2 prefix: /home/mruserbox/miniconda3/envs/datasets ```
false
1,988,571,317
https://api.github.com/repos/huggingface/datasets/issues/6400
https://github.com/huggingface/datasets/issues/6400
6,400
Safely load datasets by disabling execution of dataset loading script
closed
4
2023-11-10T23:48:29
2024-06-13T15:56:13
2024-06-13T15:56:13
irenedea
[ "enhancement" ]
### Feature request Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution. Any suggested workarounds are welcome as well. ### Motivation This is a security vulnerability that could lead to arbitrary code execution. ### Your contribution n/a
false
1,988,368,503
https://api.github.com/repos/huggingface/datasets/issues/6399
https://github.com/huggingface/datasets/issues/6399
6,399
TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array
open
1
2023-11-10T20:48:46
2024-06-22T00:13:48
null
y-hwang
[]
### Describe the bug Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets. Thank you! ### Steps to reproduce the bug Traceback (most recent call last): File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/multiprocess/pool.py", line 125, in worker result = (True, func(*args, **kwds)) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/utils/py_utils.py", line 1354, in _write_generator_to_queue for i, result in enumerate(func(**kwargs)): File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 3493, in _map_single writer.write_batch(batch) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 555, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 243, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/arrow_writer.py", line 184, in __arrow_array__ out = numpy_to_pyarrow_listarray(data) File "/n/home12/yhwang/.conda/envs/lib/python3.10/site-packages/datasets/features/features.py", line 1394, in numpy_to_pyarrow_listarray values = pa.ListArray.from_arrays(offsets, values) File "pyarrow/array.pxi", line 2004, in pyarrow.lib.ListArray.from_arrays TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array ### Expected behavior Type should not be a ChunkedArray ### Environment info datasets v2.14.5 arrow v1.2.3 pyarrow v12.0.1
false
1,987,786,446
https://api.github.com/repos/huggingface/datasets/issues/6398
https://github.com/huggingface/datasets/pull/6398
6,398
Remove redundant condition in builders
closed
3
2023-11-10T14:56:43
2023-11-14T10:49:15
2023-11-14T10:43:00
albertvillanova
[]
Minor refactoring to remove redundant condition.
true
1,987,622,152
https://api.github.com/repos/huggingface/datasets/issues/6397
https://github.com/huggingface/datasets/issues/6397
6,397
Raise a different exception for inexisting dataset vs files without known extension
closed
0
2023-11-10T13:22:14
2023-11-22T15:12:34
2023-11-22T15:12:34
severo
[]
See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557 We have the same error for: - https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist - https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files without a known extension ``` >>> import datasets >>> datasets.get_dataset_config_names('severo/a_dataset_that_does_not_exist') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/a_dataset_that_does_not_exist/a_dataset_that_does_not_exist.py or any data file in the same directory. Couldn't find 'severo/a_dataset_that_does_not_exist' on the Hugging Face Hub either: FileNotFoundError: Dataset 'severo/a_dataset_that_does_not_exist' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`. >>> datasets.get_dataset_config_names('severo/test_files_without_extension') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1508, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /home/slesage/hf/datasets-server/services/worker/severo/test_files_without_extension/test_files_without_extension.py or any data file in the same directory. Couldn't find 'severo/test_files_without_extension' on the Hugging Face Hub either: FileNotFoundError: No (supported) data files or dataset script found in severo/test_files_without_extension. ``` To differentiate, we must parse the error message (only the end is different). We should have a different exception for these two errors.
false
1,987,308,077
https://api.github.com/repos/huggingface/datasets/issues/6396
https://github.com/huggingface/datasets/issues/6396
6,396
Issue with pyarrow 14.0.1
closed
5
2023-11-10T10:02:12
2023-11-14T10:23:30
2023-11-14T10:23:30
severo
[]
See https://github.com/huggingface/datasets-server/pull/2089 for reference ``` from datasets import (Array2D, Dataset, Features) feature_type = Array2D(shape=(2, 2), dtype="float32") content = [[0.0, 0.0], [0.0, 0.0]] features = Features({"col": feature_type}) dataset = Dataset.from_dict({"col": [content]}, features=features) ``` generates ``` /home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:648: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism. pa.PyExtensionType.__init__(self, self.storage_dtype) /home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: RuntimeWarning: pickle-based deserialization of pyarrow.PyExtensionType subclasses is disabled by default; if you only ingest trusted data files, you may re-enable this using `pyarrow.PyExtensionType.set_auto_load(True)`. In the future, Python-defined extension subclasses should derive from pyarrow.ExtensionType (not pyarrow.PyExtensionType) and implement their own serialization mechanism. obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} /home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py:1661: FutureWarning: pyarrow.PyExtensionType is deprecated and will refuse deserialization by default. Instead, please derive from pyarrow.ExtensionType and implement your own serialization mechanism. obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 924, in from_dict return cls(pa_table, info=info, split=split) File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 693, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1661, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1381, in generate_from_arrow_type return Value(dtype=_arrow_to_datasets_dtype(pa_type)) File "/home/slesage/hf/datasets-server/libs/libcommon/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 111, in _arrow_to_datasets_dtype raise ValueError(f"Arrow type {arrow_type} does not have a datasets dtype equivalent.") ValueError: Arrow type extension<arrow.py_extension_type<pyarrow.lib.UnknownExtensionType>> does not have a datasets dtype equivalent. ```
false
1,986,484,124
https://api.github.com/repos/huggingface/datasets/issues/6395
https://github.com/huggingface/datasets/issues/6395
6,395
Add ability to set lock type
closed
1
2023-11-09T22:12:30
2023-11-23T18:50:00
2023-11-23T18:50:00
leoleoasd
[ "enhancement" ]
### Feature request Allow setting file lock type, maybe from an environment variable Currently, it only depends on whether fnctl is available: https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16 ### Motivation In my environment, flock isn't supported on a network attached drive ### Your contribution I'll be happy to submit a pr.
false
1,985,947,116
https://api.github.com/repos/huggingface/datasets/issues/6394
https://github.com/huggingface/datasets/issues/6394
6,394
TorchFormatter images (H, W, C) instead of (C, H, W) format
closed
9
2023-11-09T16:02:15
2024-04-11T12:40:16
2024-04-11T12:40:16
Modexus
[]
### Describe the bug Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy. However, pytorch normally uses (C, H, W) format. Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways. If not using the format it is possible to directly use torchvision transforms but any non-transformed value will not be a tensor. Is there a reason for this choice? ### Steps to reproduce the bug ```python from datasets import Dataset, Features, Audio, Image images = ["path/to/image.png"] * 10 features = Features({"image": Image()}) ds = Dataset.from_dict({"image": images}, features=features) ds = ds.with_format("torch") ds[0]["image"].shape ``` ```python torch.Size([512, 512, 4]) ``` ### Expected behavior ```python from datasets import Dataset, Features, Audio, Image images = ["path/to/image.png"] * 10 features = Features({"image": Image()}) ds = Dataset.from_dict({"image": images}, features=features) ds = ds.with_format("torch") ds[0]["image"].shape ``` ```python torch.Size([4, 512, 512]) ``` ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-6.5.9-100.fc37.x86_64-x86_64-with-glibc2.31 - Python version: 3.11.6 - Huggingface_hub version: 0.18.0 - PyArrow version: 14.0.1 - Pandas version: 2.1.2
false
1,984,913,259
https://api.github.com/repos/huggingface/datasets/issues/6393
https://github.com/huggingface/datasets/issues/6393
6,393
Filter occasionally hangs
closed
12
2023-11-09T06:18:30
2025-02-22T00:49:19
2025-02-22T00:49:19
dakinggg
[]
### Describe the bug A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm) There is a trace produced ``` Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10> Traceback (most recent call last): File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", line 1366, in __del__ if hasattr(self, "_indices"): File "/usr/lib/python3/dist-packages/composer/core/engine.py", line 123, in sigterm_handler sys.exit(128 + signal) SystemExit: 143 ``` but I'm not sure if the trace is actually from `datasets`, or from surrounding code that is trying to clean up after datasets gets stuck. Unfortunately I can't reproduce this issue anywhere close to reliably. It happens infrequently when using `num_procs > 1`. Anecdotally I started seeing it when using larger datasets (~10M samples). ### Steps to reproduce the bug N/A see description ### Expected behavior map/filter calls always complete sucessfully ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-5.4.0-137-generic-x86_64-with-glibc2.31 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.2
false
1,984,369,545
https://api.github.com/repos/huggingface/datasets/issues/6392
https://github.com/huggingface/datasets/issues/6392
6,392
`push_to_hub` is not robust to hub closing connection
closed
12
2023-11-08T20:44:53
2023-12-20T07:28:24
2023-12-01T17:51:34
msis
[]
### Describe the bug Like to #6172, `push_to_hub` will crash if Hub resets the connection and raise the following error: ``` Pushing dataset shards to the dataset hub: 32%|β–ˆβ–ˆβ–ˆβ– | 54/171 [06:38<14:23, 7.38s/it] Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen httplib_response = self._make_request( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request httplib_response = conn.getresponse() File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 316, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 285, in _read_status raise RemoteDisconnected("Remote end closed connection without" http.client.RemoteDisconnected: Remote end closed connection without response During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 486, in send resp = conn.urlopen( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 799, in urlopen retries = retries.increment( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment raise six.reraise(type(error), error, _stacktrace) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/packages/six.py", line 769, in reraise raise value.with_traceback(tb) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 715, in urlopen httplib_response = self._make_request( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 467, in _make_request six.raise_from(e, None) File "<string>", line 3, in raise_from File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/urllib3/connectionpool.py", line 462, in _make_request httplib_response = conn.getresponse() File "/usr/lib/python3.8/http/client.py", line 1348, in getresponse response.begin() File "/usr/lib/python3.8/http/client.py", line 316, in begin version, status, reason = self._read_status() File "/usr/lib/python3.8/http/client.py", line 285, in _read_status raise RemoteDisconnected("Remote end closed connection without" urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 383, in _wrapped_lfs_upload lfs_upload(operation=operation, lfs_batch_action=batch_action, token=token) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 223, in lfs_upload _upload_multi_part(operation=operation, header=header, chunk_size=chunk_size, upload_url=upload_action["href"]) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 319, in _upload_multi_part else _upload_parts_iteratively(operation=operation, sorted_parts_urls=sorted_parts_urls, chunk_size=chunk_size) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/lfs.py", line 375, in _upload_parts_iteratively part_upload_res = http_backoff("PUT", part_upload_url, data=fileobj_slice) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 258, in http_backoff response = session.request(method=method, url=url, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 589, in request resp = self.send(prep, **send_kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/sessions.py", line 703, in send r = adapter.send(request, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 63, in send return super().send(request, *args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/requests/adapters.py", line 501, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: (ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')), '(Request ID: 2bab8c06-b701-4266-aead-fe2e0dc0e3ed)') The above exception was the direct cause of the following exception: Traceback (most recent call last): File "convert_to_hf.py", line 116, in <module> main() File "convert_to_hf.py", line 108, in main audio_dataset.push_to_hub( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/dataset_dict.py", line 1641, in push_to_hub repo_id, split, uploaded_size, dataset_nbytes, _, _ = self[split]._push_parquet_shards_to_hub( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 5308, in _push_parquet_shards_to_hub _retry( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 290, in _retry return func(*func_args, **func_kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 3221, in upload_file commit_info = self.create_commit( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 828, in _inner return fn(self, *args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/hf_api.py", line 2695, in create_commit upload_lfs_files( File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 118, in _inner_fn return fn(*args, **kwargs) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 393, in upload_lfs_files _wrapped_lfs_upload(filtered_actions[0]) File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/site-packages/huggingface_hub/_commit_api.py", line 385, in _wrapped_lfs_upload raise RuntimeError(f"Error while uploading '{operation.path_in_repo}' to the Hub.") from exc RuntimeError: Error while uploading 'batch_19/train-00054-of-00171-932beb4082c034bf.parquet' to the Hub. ``` The function should retry if the operations fails, or at least offer a way to recover after such a failure. Right now, calling the function again will start sending all the parquets files leading to duplicates in the repository, with no guarantee that it will actually be pushed. Previously, it would crash with an error 400 #4677 . ### Steps to reproduce the bug Any large dataset pushed the hub: ```py audio_dataset.push_to_hub( repo_id="org/dataset", ) ``` ### Expected behavior `push_to_hub` should have an option for max retries or resume. ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-5.15.0-1044-aws-x86_64-with-glibc2.29 - Python version: 3.8.10 - Huggingface_hub version: 0.16.4 - PyArrow version: 13.0.0 - Pandas version: 2.0.3
false
1,984,091,776
https://api.github.com/repos/huggingface/datasets/issues/6391
https://github.com/huggingface/datasets/pull/6391
6,391
Webdataset dataset builder
closed
5
2023-11-08T17:31:59
2024-05-22T16:51:08
2023-11-28T16:33:10
lhoestq
[]
Allow `load_dataset` to support the Webdataset format. It allows users to download/stream data from local files or from the Hugging Face Hub. Moreover it will enable the Dataset Viewer for Webdataset datasets on HF. ## Implementation details - I added a new Webdataset builder - dataset with TAR files are now read using the Webdataset builder - Basic decoding from `webdataset` is used by default, except unsafe ones like pickle - HF authentication support is done with `xopen` ## TODOS - [x] tests - [x] docs
true
1,983,725,707
https://api.github.com/repos/huggingface/datasets/issues/6390
https://github.com/huggingface/datasets/pull/6390
6,390
handle future deprecation argument
closed
1
2023-11-08T14:21:25
2023-11-21T02:10:24
2023-11-14T15:15:59
winglian
[]
getting this error: ``` /root/miniconda3/envs/py3.10/lib/python3.10/site-packages/datasets/table.py:1387: FutureWarning: promote has been superseded by mode='default'. return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ``` Since datasets supports arrow greater than 8.0.0, we need to handle both cases. [Arrow v14 docs](https://arrow.apache.org/docs/python/generated/pyarrow.concat_tables.html) [Arrow v13 docs](https://arrow.apache.org/docs/13.0/python/generated/pyarrow.concat_tables.html)
true
1,983,545,744
https://api.github.com/repos/huggingface/datasets/issues/6389
https://github.com/huggingface/datasets/issues/6389
6,389
Index 339 out of range for dataset of size 339 <-- save_to_file()
open
2
2023-11-08T12:52:09
2023-11-24T09:14:13
null
jaggzh
[]
### Describe the bug When saving out some Audio() data. The data is audio recordings with associated 'sentences'. (They use the audio 'bytes' approach because they're clips within audio files). Code is below the traceback (I can't upload the voice audio/text (it's not even me)). ``` Traceback (most recent call last): File "/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py", line 156, in <module> create_dataset(args) File "/mnt/ddrive/prj/voice/voice-training-dataset-create/./dataset.py", line 138, in create_dataset hf_dataset.save_to_disk(args.outds, max_shard_size='50MB') File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 1531, in save_to_disk for kwargs in kwargs_per_job: File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 1508, in <genexpr> "shard": self.shard(num_shards=num_shards, index=shard_idx, contiguous=True), ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 4609, in shard return self.select( ^^^^^^^^^^^^ File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 556, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/j/src/py/datasets/src/datasets/fingerprint.py", line 511, in wrapper out = func(dataset, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 3797, in select return self._select_contiguous(start, length, new_fingerprint=new_fingerprint) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 556, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/j/src/py/datasets/src/datasets/fingerprint.py", line 511, in wrapper out = func(dataset, *args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 3857, in _select_contiguous _check_valid_indices_value(start, len(self)) File "/home/j/src/py/datasets/src/datasets/arrow_dataset.py", line 648, in _check_valid_indices_value raise IndexError(f"Index {index} out of range for dataset of size {size}.") IndexError: Index 339 out of range for dataset of size 339. ``` ### Steps to reproduce the bug (I had to set the default max batch size down due to a different bug... or maybe it's related: https://github.com/huggingface/datasets/issues/5717) ```python3 #!/usr/bin/env python3 import argparse import os from pathlib import Path import soundfile as sf import datasets datasets.config.DEFAULT_MAX_BATCH_SIZE=35 from datasets import Features, Array2D, Value, Dataset, Sequence, Audio import numpy as np import librosa import sys import soundfile as sf import io import logging logging.basicConfig(level=logging.DEBUG, filename='debug.log', filemode='w', format='%(name)s - %(levelname)s - %(message)s') # Define the arguments for the command-line interface def parse_args(): parser = argparse.ArgumentParser(description="Create a Huggingface dataset from labeled audio files.") parser.add_argument("--indir_labeled", action="append", help="Directory containing labeled audio files.", required=True) parser.add_argument("--outds", help="Path to save the dataset file.", required=True) parser.add_argument("--max_clips", type=int, help="Max count of audio samples to add to the dataset.", default=None) parser.add_argument("-r", "--sr", type=int, help="Sample rate for the audio files.", default=16000) parser.add_argument("--no-resample", action="store_true", help="Disable resampling of the audio files.") parser.add_argument("--max_clip_secs", type=float, help="Max length of audio clips in seconds.", default=3.0) parser.add_argument("-v", "--verbose", action='count', default=1, help="Increase verbosity") return parser.parse_args() # Convert the NumPy arrays to audio bytes in WAV format def numpy_to_bytes(audio_array, sampling_rate=16000): with io.BytesIO() as bytes_io: sf.write(bytes_io, audio_array, samplerate=sampling_rate, format='wav', subtype='FLOAT') # float32 return bytes_io.getvalue() # Function to find audio and label files in a directory def find_audio_label_pairs(indir_labeled): audio_label_pairs = [] for root, _, files in os.walk(indir_labeled): for file in files: if file.endswith(('.mp3', '.wav', '.aac', '.flac')): audio_path = Path(root) / file if args.verbose>1: print(f'File: {audio_path}') label_path = audio_path.with_suffix('.labels.txt') if label_path.exists(): if args.verbose>0: print(f' Pair: {audio_path}') audio_label_pairs.append((audio_path, label_path)) return audio_label_pairs def process_audio_label_pair(audio_path, label_path, sampling_rate, no_resample, max_clip_secs): # Read the label file with open(label_path, 'r') as label_file: labels = label_file.readlines() # Load the full audio file full_audio, current_sr = sf.read(audio_path) if not no_resample and current_sr != sampling_rate: # You can use librosa.resample here if librosa is available full_audio = librosa.resample(full_audio, orig_sr=current_sr, target_sr=sampling_rate) audio_segments = [] sentences = [] # Process each label for label in labels: start_secs, end_secs, label_text = label.strip().split('\t') start_sample = int(float(start_secs) * sampling_rate) end_sample = int(float(end_secs) * sampling_rate) # Extract segment and truncate or pad to max_clip_secs audio_segment = full_audio[start_sample:end_sample] max_samples = int(max_clip_secs * sampling_rate) if len(audio_segment) > max_samples: # Truncate audio_segment = audio_segment[:max_samples] elif len(audio_segment) < max_samples: # Pad padding = np.zeros(max_samples - len(audio_segment), dtype=audio_segment.dtype) audio_segment = np.concatenate((audio_segment, padding)) audio_segment = numpy_to_bytes(audio_segment) audio_data = { 'path': str(audio_path), 'bytes': audio_segment, } audio_segments.append(audio_data) sentences.append(label_text) return audio_segments, sentences # Main function to create the dataset def create_dataset(args): audio_label_pairs = [] for indir in args.indir_labeled: audio_label_pairs.extend(find_audio_label_pairs(indir)) # Initialize our dataset data dataset_data = { 'path': [], # This will be a list of strings 'audio': [], # This will be a list of dictionaries 'sentence': [], # This will be a list of strings } # Process each audio-label pair and add the data to the dataset for audio_path, label_path in audio_label_pairs[:args.max_clips]: audio_segments, sentences = process_audio_label_pair(audio_path, label_path, args.sr, args.no_resample, args.max_clip_secs) if audio_segments and sentences: for audio_data, sentence in zip(audio_segments, sentences): if args.verbose>1: print(f'Appending {audio_data["path"]}') dataset_data['path'].append(audio_data['path']) dataset_data['audio'].append({ 'path': audio_data['path'], 'bytes': audio_data['bytes'], }) dataset_data['sentence'].append(sentence) features = Features({ 'path': Value('string'), # Path is redundant in common voice set also 'audio': Audio(sampling_rate=16000), 'sentence': Value('string'), }) hf_dataset = Dataset.from_dict(dataset_data, features=features) for key in dataset_data: for i, item in enumerate(dataset_data[key]): if item is None or (isinstance(item, bytes) and len(item) == 0): logging.error(f"Invalid {key} at index {i}: {item}") import ipdb; ipdb.set_trace(context=16); pass hf_dataset.save_to_disk(args.outds, max_shard_size='50MB') # try: # hf_dataset.save_to_disk(args.outds) # except TypeError as e: # # If there's a TypeError, log the exception and the dataset data that might have caused it # logging.exception("An error occurred while saving the dataset.") # import ipdb; ipdb.set_trace(context=16); pass # for key in dataset_data: # logging.debug(f"{key} length: {len(dataset_data[key])}") # if key == 'audio': # # Log the first 100 bytes of the audio data to avoid huge log files # for i, audio in enumerate(dataset_data[key]): # logging.debug(f"Audio {i}: {audio['bytes'][:100]}") # raise # Run the script if __name__ == "__main__": args = parse_args() create_dataset(args) ``` ### Expected behavior It shouldn't fail. ### Environment info - `datasets` version: 2.14.7.dev0 - Platform: Linux-6.1.0-13-amd64-x86_64-with-glibc2.36 - Python version: 3.11.2 - `huggingface_hub` version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.2 - `fsspec` version: 2023.9.2
false
1,981,136,093
https://api.github.com/repos/huggingface/datasets/issues/6388
https://github.com/huggingface/datasets/issues/6388
6,388
How to create 3d medical imgae dataset?
open
0
2023-11-07T11:27:36
2023-11-07T11:28:53
null
QingYunA
[ "enhancement" ]
### Feature request I am newer to huggingface, after i look up `datasets` docs, I can't find how to create the dataset contains 3d medical image (ends with '.mhd', '.dcm', '.nii') ### Motivation help us to upload 3d medical dataset to huggingface! ### Your contribution I'll submit a PR if I find a way to add this feature
false
1,980,224,020
https://api.github.com/repos/huggingface/datasets/issues/6387
https://github.com/huggingface/datasets/issues/6387
6,387
How to load existing downloaded dataset ?
closed
1
2023-11-06T22:51:44
2023-11-16T18:07:01
2023-11-16T18:07:01
liming-ai
[ "enhancement" ]
Hi @mariosasko @lhoestq @katielink Thanks for your contribution and hard work. ### Feature request First, I download a dataset as normal by: ``` from datasets import load_dataset dataset = load_dataset('username/data_name', cache_dir='data') ``` The dataset format in `data` directory will be: ``` -data |-data_name |-test-00000-of-00001-bf4c733542e35fcb.parquet |-train-00000-of-00001-2a1df75c6bce91ab.parquet ``` Then I use SCP to clone this dataset into another machine, and then try: ``` from datasets import load_dataset dataset = load_dataset('data/data_name') # load from local path ``` This leads to re-generating training and validation split for each time, and the disk quota will be duplicated occupation. How can I just load the dataset without generating and saving these splits again? ### Motivation I do not want to download the same dataset in two machines, scp is much faster and better than HuggingFace API. I hope we can directly load the downloaded datasets (.parquest) ### Your contribution Please refer to the feature
false
1,979,878,014
https://api.github.com/repos/huggingface/datasets/issues/6386
https://github.com/huggingface/datasets/issues/6386
6,386
Formatting overhead
closed
2
2023-11-06T19:06:38
2023-11-06T23:56:12
2023-11-06T23:56:12
d-miketa
[]
### Describe the bug Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new instances of `self.python_arrow_extractor`. I admit I'm confused why that could be the case - as far as I can tell there's no complex `__init__` logic to execute. ![image](https://github.com/huggingface/datasets/assets/320321/5e022e0b-0d21-43d0-8e6f-9e641142e96b) ### Steps to reproduce the bug 1. Set up a dataset `ds` with potentially several (4+) columns (not sure if this is necessary, but it did at one point of the investigation make overhead worse) 2. Process it using a custom transform, `ds = ds.with_transform(transform_func)` 3. Decorate this function https://github.com/huggingface/datasets/blob/main/src/datasets/formatting/formatting.py#L512 with `@profile` from https://pypi.org/project/line-profiler/ 4. Profile with `$ kernprof -l script_to_profile.py` ### Expected behavior Batch formatting should have acceptable overhead. ### Environment info ``` datasets=2.14.6 pyarrow=14.0.0 ```
false
1,979,308,338
https://api.github.com/repos/huggingface/datasets/issues/6385
https://github.com/huggingface/datasets/issues/6385
6,385
Get an error when i try to concatenate the squad dataset with my own dataset
closed
2
2023-11-06T14:29:22
2023-11-06T16:50:45
2023-11-06T16:50:45
CCDXDX
[]
### Describe the bug Hello, I'm new here and I need to concatenate the squad dataset with my own dataset i created. I find the following error when i try to do it: Traceback (most recent call last): Cell In[9], line 1 concatenated_dataset = concatenate_datasets([train_dataset, dataset1]) File ~\anaconda3\Lib\site-packages\datasets\combine.py:213 in concatenate_datasets return _concatenate_map_style_datasets(dsets, info=info, split=split, axis=axis) File ~\anaconda3\Lib\site-packages\datasets\arrow_dataset.py:6002 in _concatenate_map_style_datasets _check_if_features_can_be_aligned([dset.features for dset in dsets]) File ~\anaconda3\Lib\site-packages\datasets\features\features.py:2122 in _check_if_features_can_be_aligned raise ValueError( ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Value(dtype='string', id=None)} or Value("null"). ### Steps to reproduce the bug ```python from huggingface_hub import notebook_login from datasets import load_dataset notebook_login("mymailadresse", "mypassword") squad = load_dataset("squad", split="train[:5000]") squad = squad.train_test_split(test_size=0.2) dataset1 = squad["train"] import json mybase = [ { "id": "1", "context": "She lives in Nantes", "question": "Where does she live?", "answers": { "text": "Nantes", "answer_start": [13], } } ] # Save the data to a JSON file json_file_path = r"C:\Users\mypath\thefile.json" with open(json_file_path, "w", encoding= "utf-8") as json_file: json.dump(mybase, json_file, indent=4) # Load the JSON file as a dataset custom_dataset = load_dataset("json", data_files=json_file_path) # Access the train split train_dataset = custom_dataset["train"] from datasets import concatenate_datasets # Concatenate the datasets concatenated_dataset = concatenate_datasets([train_dataset, dataset1]) ``` ### Expected behavior I would expect the two datasets to be concatenated without error. The len(dataset1) is equal to 4000 and the len(train_dataset) is equal to 1 so I would exepect concatenated_dataset to be created and having lenght 4001. ### Environment info Python 3.11.4 and using windows Thank you for your help
false
1,979,117,069
https://api.github.com/repos/huggingface/datasets/issues/6384
https://github.com/huggingface/datasets/issues/6384
6,384
Load the local dataset folder from other place
closed
1
2023-11-06T13:07:04
2023-11-19T05:42:06
2023-11-19T05:42:05
OrangeSodahub
[]
This is from https://github.com/huggingface/diffusers/issues/5573
false
1,978,189,389
https://api.github.com/repos/huggingface/datasets/issues/6383
https://github.com/huggingface/datasets/issues/6383
6,383
imagenet-1k downloads over and over
closed
1
2023-11-06T02:58:58
2024-06-12T13:15:00
2023-11-06T06:02:39
seann999
[]
### Describe the bug What could be causing this? ``` $ python3 Python 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from datasets import load_dataset >>> load_dataset("imagenet-1k") Downloading builder script: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.72k/4.72k [00:00<00:00, 7.51MB/s] Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 85.4k/85.4k [00:00<00:00, 510kB/s] Downloading extra modules: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 46.4k/46.4k [00:00<00:00, 300kB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 29.1G/29.1G [19:36<00:00, 24.8MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 29.3G/29.3G [08:38<00:00, 56.5MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 29.0G/29.0G [09:26<00:00, 51.2MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 29.2G/29.2G [09:38<00:00, 50.6MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‰| 29.2G/29.2G [09:37<00:00, 44.1MB/s^Downloading data: 0%| | 106M/29.1G [00:05<23:49, 20.3MB/s] ``` ### Steps to reproduce the bug See above commands/code ### Expected behavior imagenet-1k is downloaded ### Environment info - `datasets` version: 2.14.6 - Platform: Linux-6.2.0-34-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - Huggingface_hub version: 0.15.1 - PyArrow version: 14.0.0 - Pandas version: 1.5.2
false
1,977,400,799
https://api.github.com/repos/huggingface/datasets/issues/6382
https://github.com/huggingface/datasets/issues/6382
6,382
Add CheXpert dataset for vision
open
3
2023-11-04T15:36:11
2024-01-10T11:53:52
null
SauravMaheshkar
[ "enhancement", "dataset request" ]
### Feature request ### Name **CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison** ### Paper https://arxiv.org/abs/1901.07031 ### Data https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2 ### Motivation CheXpert is one of the fundamental models in medical image classification and can serve as a viable pre-training dataset for radiology classification or low-scale ablation / exploratory studies. This could also serve as a good pre-training dataset for Kaggle competitions. ### Your contribution Would love to make a PR and pre-process / get this into πŸ€—
false
1,975,028,470
https://api.github.com/repos/huggingface/datasets/issues/6381
https://github.com/huggingface/datasets/pull/6381
6,381
Add my dataset
closed
3
2023-11-02T20:59:52
2023-11-08T14:37:46
2023-11-06T15:50:14
keyur536
[]
## medical data **Description:** This dataset, named "medical data," is a collection of text data from various sources, carefully curated and cleaned for use in natural language processing (NLP) tasks. It consists of a diverse range of text, including articles, books, and online content, covering topics from science to literature. **Citation:** If applicable, please include a citation for this dataset to give credit to the original sources or contributors. **Key Features:** - Language: The text is primarily in English, but it may include content in other languages as well. - Use Cases: This dataset is suitable for text classification, language modeling, sentiment analysis, and other NLP tasks. **Usage:** To access this dataset, use the `load_your_dataset` function provided in the `your_dataset.py` script within this repository. You can specify the dataset split you need, such as "train," "test," or "validation," to get the data for your specific task. **Contributors:** - [Keyur Chaudhari] **Contact:** If you have any questions or need assistance regarding this dataset, please feel free to contact [keyurchaudhari536@gmail.com]. Please note that this dataset is shared under a specific license, which can be found in the [LICENSE](link to your dataset's license) file. Make sure to review and adhere to the terms of the license when using this dataset for your projects.
true
1,974,741,221
https://api.github.com/repos/huggingface/datasets/issues/6380
https://github.com/huggingface/datasets/pull/6380
6,380
Fix for continuation behaviour on broken dataset archives due to starving download connections via HTTP-GET
open
0
2023-11-02T17:28:23
2023-11-02T17:31:19
null
RuntimeRacer
[]
This PR proposes a (slightly hacky) fix for an Issue that can occur when downloading large dataset parts over unstable connections. The underlying issue is also being discussed in https://github.com/huggingface/datasets/issues/5594. Issue Symptoms & Behaviour: - Download of a large archive file during dataset download via HTTP-GET fails. - An silent net exception (which I was unable to identify) is thrown within the `tqdm` download progress. - Due to missing exception catch code, the above process just continues processing, assuming `http_get` completed successfully. - Pending Archive file gets renamed to remove the `.incomplete` extension, despite not all data has been downloaded. - Also, for reasons I did not investigate, there seems to be no real integrity check for the downloaded files; or it does not detect this problem. This is especially problematic, since the downloader script won't retry downloading this archive after CRC-Checking, even if it is being manually restarted / executed again after running into errors on extraction. Fix proposal: Adding a retry mechanic for HTTP-GET downloads, which adds the following behaviour: - Download Progress Thread checks for download size validity in case the HTTP connection starves mid download. If the check fails, a RuntimeError is thrown - Cache Downloader code with retry mechanic monitors for an exception thrown by the download progress thread, and retries download with updated `resume_size`. - Cache Downloader will not mark incomplete files which have thrown an exception during download, and exceeded retries, as complete.
true
1,974,638,850
https://api.github.com/repos/huggingface/datasets/issues/6379
https://github.com/huggingface/datasets/pull/6379
6,379
Avoid redundant warning when encoding NumPy array as `Image`
closed
5
2023-11-02T16:37:58
2023-11-06T17:53:27
2023-11-02T17:08:07
mariosasko
[]
Avoid a redundant warning in `encode_np_array` by removing the identity check as NumPy `dtype`s can be equal without having identical `id`s. Additionally, fix "unreachable" checks in `encode_np_array`.
true
1,973,942,770
https://api.github.com/repos/huggingface/datasets/issues/6378
https://github.com/huggingface/datasets/pull/6378
6,378
Support pyarrow 14.0.0
closed
3
2023-11-02T10:25:10
2023-11-02T15:24:28
2023-11-02T15:15:44
albertvillanova
[]
Support `pyarrow` 14.0.0. Fix #6377 and fix #6374 (root cause). This fix is analog to a previous one: - #6175
true