id: int64 (599M to 3.29B)
url: string (length 58 to 61)
html_url: string (length 46 to 51)
number: int64 (1 to 7.72k)
title: string (length 1 to 290)
state: string (2 values)
comments: int64 (0 to 70)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-08-05 09:28:51)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-08-05 11:39:56)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-08-01 05:15:45, nullable)
user_login: string (length 3 to 26)
labels: list (length 0 to 4)
body: string (length 0 to 228k, nullable)
is_pull_request: bool (2 classes)
1,179,381,021
https://api.github.com/repos/huggingface/datasets/issues/4007
https://github.com/huggingface/datasets/issues/4007
4,007
set_format does not work with multi dimension tensor
closed
4
2022-03-24T11:27:43
2022-03-30T07:28:57
2022-03-24T14:39:29
phihung
[ "bug" ]
## Describe the bug set_format only transforms the last dimension of a multi-dimension list to tensor ## Steps to reproduce the bug ```python import torch from datasets import Dataset ds = Dataset.from_dict({"A": [torch.rand((2, 2))]}) # ds = Dataset.from_dict({"A": [np.random.rand(2, 2)]}) # => same result ds = ds.with_format("torch") print(ds[0]) ``` ## Expected results ``` {'A': [tensor([[0.6689, 0.1516], [0.1403, 0.5567]])]} ``` ## Actual results ``` {'A': [tensor([0.6689, 0.1516]), tensor([0.1403, 0.5567])]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - datasets version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
false
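A possible workaround sketch for the issue above (not part of the original report): declaring the column with a fixed-shape `Array2D` feature keeps the full tensor shape under `with_format("torch")`. The shape and dtype below are assumptions for illustration.

```python
import numpy as np
from datasets import Array2D, Dataset, Features

# Declare the column as a fixed-shape 2-D array so that formatting with "torch"
# returns a single (2, 2) tensor instead of a list of 1-D tensors.
features = Features({"A": Array2D(shape=(2, 2), dtype="float32")})
ds = Dataset.from_dict({"A": [np.random.rand(2, 2).astype("float32")]}, features=features)
ds = ds.with_format("torch")
print(ds[0]["A"].shape)  # expected: torch.Size([2, 2])
```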
1,179,367,195
https://api.github.com/repos/huggingface/datasets/issues/4006
https://github.com/huggingface/datasets/pull/4006
4,006
Use audio feature in ASR task template
closed
1
2022-03-24T11:15:22
2022-03-24T17:19:29
2022-03-24T16:48:02
lhoestq
[]
The AutomaticSpeechRecognition task template is outdated: it still uses the file path column as input instead of the audio column. I changed that and updated all the datasets as well as the tests. The only community dataset that will need to be updated is `facebook/multilingual_librispeech`. It has almost zero usage, unfortunately (probably because users load the duplicate `multilingual_librispeech` directly instead), which means we can safely update it. (This makes me think that we should deprecate `multilingual_librispeech` and redirect users to `facebook/multilingual_librispeech`.) This PR is also useful for the AudioFolder in https://github.com/huggingface/datasets/pull/3963
true
1,179,365,663
https://api.github.com/repos/huggingface/datasets/issues/4005
https://github.com/huggingface/datasets/issues/4005
4,005
Yelp not working
closed
6
2022-03-24T11:14:00
2022-03-25T14:59:57
2022-03-25T14:56:10
patrickvonplaten
[]
## Dataset viewer issue for '*name of the dataset*' **Link:** https://huggingface.co/datasets/yelp_review_full/viewer/yelp_review_full/train Doesn't work: ``` Server error Status code: 400 Exception: Error Message: line contains NULL ``` Am I the one who added this dataset? No. A seemingly identical copy of the dataset, https://huggingface.co/datasets/SetFit/yelp_review_full, works. The original one, https://huggingface.co/datasets/yelp_review_full, has > 20K downloads.
false
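A hypothetical diagnostic sketch for the `line contains NULL` error above: Python's csv reader rejects NUL bytes, so scanning the raw file for them narrows down the culprit. The file path is a placeholder, not the dataset's actual file name.

```python
# Scan a raw CSV for NUL bytes, which trigger "line contains NULL" in Python's csv module.
with open("yelp_review_full.csv", "rb") as f:  # placeholder path
    for lineno, line in enumerate(f, start=1):
        if b"\x00" in line:
            print(f"NUL byte found on line {lineno}")
            break
```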
1,179,320,795
https://api.github.com/repos/huggingface/datasets/issues/4004
https://github.com/huggingface/datasets/pull/4004
4,004
ASSIN 2 dataset: replace broken Google Drive _URLS by links on github
closed
1
2022-03-24T10:37:39
2022-03-28T14:01:46
2022-03-28T13:56:39
ruanchaves
[]
Closes #4003. Fixes the checksum error. Replaces the Google Drive URLs with the files hosted here: [Multilingual Transformer Ensembles for Portuguese Natural Language Tasks](https://github.com/ruanchaves/assin)
true
1,179,286,877
https://api.github.com/repos/huggingface/datasets/issues/4003
https://github.com/huggingface/datasets/issues/4003
4,003
ASSIN2 dataset checksum bug
closed
6
2022-03-24T10:08:50
2022-04-27T14:14:45
2022-03-28T13:56:39
ruanchaves
[ "bug" ]
## Describe the bug Checksum error after trying to load the [ASSIN 2 dataset](https://huggingface.co/datasets/assin2). `NonMatchingChecksumError` triggered by calling `load_dataset("assin2")`. Similar to #3952 , #3942 , #3941 , etc. ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) [<ipython-input-13-c664a92ad5e7>](https://localhost:8080/#) in <module>() ----> 1 load_dataset('assin2') 4 frames [/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=1Q9j1a83CuKzsHCGaNulSkNxBm7Dkn7Ln&export=download'] ``` ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("assin2") ``` ## Expected results Load the dataset. ## Actual results The dataset won't load. ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Google Colab - Python version: 3.7.12 - PyArrow version: 6.0.1
false
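A stopgap sketch (not from the original thread) that sometimes unblocks checksum mismatches while a fix is pending; `ignore_verifications` and `download_mode` are standard `load_dataset` arguments in this version of `datasets`.

```python
from datasets import load_dataset

# Force a fresh download and skip checksum verification as a temporary workaround
# for NonMatchingChecksumError (the underlying Google Drive URLs still need fixing).
ds = load_dataset("assin2", download_mode="force_redownload", ignore_verifications=True)
```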
1,179,263,787
https://api.github.com/repos/huggingface/datasets/issues/4002
https://github.com/huggingface/datasets/pull/4002
4,002
Support streaming conll2012_ontonotesv5 dataset
closed
1
2022-03-24T09:49:56
2022-03-24T10:53:41
2022-03-24T10:48:47
albertvillanova
[]
Use another URL with a single ZIP file (instead of the previous one with a ZIP file inside another ZIP file).
true
1,179,231,418
https://api.github.com/repos/huggingface/datasets/issues/4001
https://github.com/huggingface/datasets/issues/4001
4,001
How to use generate this multitask dataset for SQUAD? I am getting a value error.
closed
4
2022-03-24T09:21:51
2022-03-26T09:48:21
2022-03-26T03:35:43
gsk1692
[]
## Dataset viewer issue for 'squad_multitask*' **Link:** https://huggingface.co/datasets/vershasaxena91/squad_multitask I am trying to generate the multitask dataset for the squad dataset. However, it gives the error in the dataset explorer as well as on my local machine. I tried the command: dataset = load_dataset("vershasaxena91/squad_multitask", 'highlight_qg_format') Error: Status code: 400 Exception: TypeError Message: argument of type 'Value' is not iterable Kindly advise.
false
1,178,844,616
https://api.github.com/repos/huggingface/datasets/issues/4000
https://github.com/huggingface/datasets/issues/4000
4,000
load_dataset error: sndfile library not found
closed
4
2022-03-24T01:52:32
2022-03-25T17:53:33
2022-03-25T17:53:33
i-am-neo
[ "bug" ]
## Describe the bug Can't load ami dataset ## Steps to reproduce the bug ``` python3 -c "from datasets import load_dataset; print(load_dataset('ami', 'headset-single', split='validation')[0])" ``` ## Expected results ## Actual results Downloading and preparing dataset ami/headset-single (download: 10.71 GiB, generated: 49.99 MiB, post-processed: Unknown size, total: 10.76 GiB) to /home/neo/.cache/huggingface/datasets/ami/headset-single/1.6.2/2accdf810f7c0585f78f4bcfa47684fbb980e35d29ecf126e6906dbecb872d9e... AMI corpus cannot be downloaded using multi-processing. Setting number of downloaded processes `num_proc` to 1. 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 136/136 [00:00<00:00, 36004.88it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 136/136 [00:01<00:00, 79.10it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 18/18 [00:00<00:00, 25343.23it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 18/18 [00:00<00:00, 2874.78it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:00<00:00, 27950.38it/s] 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 16/16 [00:00<00:00, 2892.25it/s] Traceback (most recent call last): File "<string>", line 1, in <module> File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/load.py", line 1707, in load_dataset use_auth_token=use_auth_token, File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/neo/.virtualenvs/hubert/lib/python3.7/site-packages/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: sndfile library not found ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-4.19.0-18-cloud-amd64-x86_64-with-debian-10.11 - Python version: 3.7.3 - PyArrow version: 7.0.0
false
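A small check to run before loading audio datasets: the `OSError: ... sndfile library not found` above usually means the system `libsndfile` is missing (installable via the distro package manager) rather than a `datasets` problem.

```python
# Verify that the soundfile binding and the underlying libsndfile are available.
# If this import fails, install the system library (e.g. libsndfile1 on Debian/Ubuntu)
# and the Python package `soundfile` before retrying load_dataset.
import soundfile as sf

print(sf.__libsndfile_version__)
```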
1,178,685,280
https://api.github.com/repos/huggingface/datasets/issues/3999
https://github.com/huggingface/datasets/pull/3999
3,999
Docs maintenance
closed
1
2022-03-23T21:27:33
2022-03-30T17:01:45
2022-03-30T16:56:38
stevhliu
[ "documentation" ]
This PR links some functions to the API reference. These functions previously only showed up in code format because the path to the actual API was incorrect.
true
1,178,631,986
https://api.github.com/repos/huggingface/datasets/issues/3998
https://github.com/huggingface/datasets/pull/3998
3,998
Fix Audio.encode_example() when writing an array
closed
2
2022-03-23T20:32:13
2022-03-29T14:21:44
2022-03-29T14:16:13
polinaeterna
[]
Closes #3996
true
1,178,566,568
https://api.github.com/repos/huggingface/datasets/issues/3997
https://github.com/huggingface/datasets/pull/3997
3,997
Sync Features dictionaries
closed
1
2022-03-23T19:23:51
2022-04-13T15:52:27
2022-04-13T15:46:19
mariosasko
[]
This PR adds a wrapper to the `Features` class to keep the secondary dict, `_column_requires_decoding`, aligned with the main dict (as discussed in https://github.com/huggingface/datasets/pull/3723#discussion_r806912731). A more elegant approach would be to subclass `UserDict` and override `__setitem__` and `__delitem__`, but this PR doesn't implement it for the following reasons: * it requires replacing all occurrences of `isinstance(obj, dict)` with `isinstance(obj, Mapping)`, which is five times slower than `isinstance(obj, dict)` on my machine, in `features.py` * is a breaking change, i.e., `isinstance(Features(...), dict)` would return `False` after it * IMO, it makes sense to be consistent in the user-facing API and subclass either `dict` or `UserDict`. The problem with the latter is that it can't be used for `DatasetDict` because `DatasetDict` exposes the `data` property, which is also used by `UserDict`, so this would result in a collision.
true
1,178,415,905
https://api.github.com/repos/huggingface/datasets/issues/3996
https://github.com/huggingface/datasets/issues/3996
3,996
Audio.encode_example() throws an error when writing example from array
closed
3
2022-03-23T17:11:47
2022-03-29T14:16:13
2022-03-29T14:16:13
polinaeterna
[ "bug" ]
## Describe the bug When trying to do `Audio().encode_example()` with preexisting array (see [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L73)), `sf.write()` throws you an error: `TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7f4218c0db30>` ## Steps to reproduce the bug ### Sample code to reproduce the bug ```python # download sample file !wget https://huggingface.co/datasets/polinaeterna/test_encode_example/resolve/main/common_voice_vi_21824030.mp3 arr, sr = librosa.load("common_voice_vi_21824030.mp3") Audio().encode_example({ "path": "common_voice_vi_21824030.mp3", "array": arr, "sampling_rate":sr }) ``` ## Expected results An encoded example (`{"bytes": b'....', "path": 'path'}`) ## Actual results ```python TypeError Traceback (most recent call last) Input In [3], in <module> 1 arr, sr = librosa.load("common_voice_vi_21824030.mp3") ----> 3 Audio().encode_example({ 4 "path": "common_voice_vi_21824030.mp3", 5 "array": arr, 6 "sampling_rate":sr 7 }) File ~/workspace/datasets/src/datasets/features/audio.py:75, in Audio.encode_example(self, value) 73 elif isinstance(value, dict) and "array" in value: 74 buffer = BytesIO() ---> 75 sf.write(buffer, value["array"], value["sampling_rate"]) 76 return {"bytes": buffer.getvalue(), "path": value.get("path")} 77 elif value.get("bytes") is not None or value.get("path") is not None: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:314, in write(file, data, samplerate, subtype, endian, format, closefd) 312 else: 313 channels = data.shape[1] --> 314 with SoundFile(file, 'w', samplerate, channels, 315 subtype, endian, format, closefd) as f: 316 f.write(data) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:627, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 625 mode_int = _check_mode(mode) 626 self._mode = mode --> 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1416, in _create_info_struct(file, mode, samplerate, channels, format, subtype, endian) 1414 original_format = format 1415 if format is None: -> 1416 format = _get_format_from_filename(file, mode) 1417 assert isinstance(format, (_unicode, str)) 1418 else: File ~/miniconda3/envs/datasets/lib/python3.8/site-packages/soundfile.py:1457, in _get_format_from_filename(file, mode) 1455 pass 1456 if format.upper() not in _formats and 'r' not in mode: -> 1457 raise TypeError("No format specified and unable to get format from " 1458 "file extension: {0!r}".format(file)) 1459 return format TypeError: No format specified and unable to get format from file extension: <_io.BytesIO object at 0x7fd8daf88180> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: datasets master - Platform: Ubuntu 20.04 - Python version: python 3.8.12 - PyArrow version: 6.0.1 ## Solution I guess we just need to add `format` arg in [this line](https://github.com/huggingface/datasets/blob/master/src/datasets/features/audio.py#L75) like this: ```python sf.write(buffer, value["array"], value["sampling_rate"], format="wav") ``` BTW discovered this when trying to decode audio in mp3 format without torchaudio (would be useful for TensorFlow users), like this: ```python from datasets import load_dataset, Features, Audio ds = load_dataset("common_voice", "vi", split="test") ds = ds.remove_columns("audio") ds.select(range(3)) # 3 samples just for testing def load_mp3_with_librosa(example): arr, sr = librosa.load(example["path"]) example["audio"] = { "path": example["path"], "array": arr, "sampling_rate": sr } return example updated_dataset = ds.map(lambda example: load_mp3_with_librosa(example), features=Features( {"audio": Audio(decode=False)} )) ``` @lhoestq @mariosasko @albertvillanova am I right in my logic? do we agree that we can set wav as the format? πŸ€—
false
1,178,232,623
https://api.github.com/repos/huggingface/datasets/issues/3995
https://github.com/huggingface/datasets/pull/3995
3,995
Close `PIL.Image` file handler in `Image.decode_example`
closed
1
2022-03-23T14:51:48
2022-03-23T18:24:52
2022-03-23T18:19:27
mariosasko
[]
Closes the file handler of the PIL image object in `Image.decode_example` to avoid the `Too many open files` error. To pass [the image equality checks](https://app.circleci.com/pipelines/github/huggingface/datasets/10774/workflows/d56670e6-16bb-4c64-b601-a152c5acf5ed/jobs/65825) in CI, `Image.decode_example` calls `image.load()` regardless of how the image object is created (not only for the `PIL.Image.open(local_path)` case). This is needed because `load()` sets the `readonly` attribute of a `PIL.Image` object to 0 (it's 1 after `PIL.Image.open(file_like)`), and in the older PIL versions (only fixed on main), that attribute is considered in `PIL.Image.__eq__`. More info can be found here: https://github.com/python-pillow/Pillow/issues/5926. Fix #3985
true
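An illustrative sketch of the pattern described above (the file path is an assumption): calling `load()` pulls the pixel data into memory, so PIL can release the underlying file handle instead of keeping it open.

```python
from PIL import Image

# Open the image, then force-load the pixel data so the file handle is released;
# this is what avoids "Too many open files" when decoding many images in a loop.
image = Image.open("image.png")  # placeholder path
image.load()
print(image.size, image.readonly)
```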
1,178,211,138
https://api.github.com/repos/huggingface/datasets/issues/3994
https://github.com/huggingface/datasets/pull/3994
3,994
Change audio column from string path to Audio feature in ASR task
closed
0
2022-03-23T14:34:52
2022-03-23T15:43:43
2022-03-23T15:43:43
polinaeterna
[]
Will fix #3990
true
1,178,201,495
https://api.github.com/repos/huggingface/datasets/issues/3993
https://github.com/huggingface/datasets/issues/3993
3,993
Streaming dataset + interleave + DataLoader hangs with multiple workers
open
5
2022-03-23T14:27:29
2023-02-28T14:14:24
null
jpilaul
[ "bug" ]
## Describe the bug Interleaving multiple iterable datasets that use `load_dataset` on streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers. ## Steps to reproduce the bug ```python from datasets import interleave_datasets, load_dataset from torch.utils.data import DataLoader en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True) it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True) de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True) multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset]) multilingual_dataset = multilingual_dataset.with_format('torch') next(iter(multilingual_dataset)) # works fairly fast dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4) for batch in dataloader: print(len(batch)) # prints nothing after 30 min of waiting dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0) for batch in dataloader: print(len(batch)) # prints right away ``` ## Expected results It should be able to iterate the dataset with multiple workers. ## Actual results Prints with results with `next(iter(multilingual_dataset)) ` and `num_workers=0` but it prints nothing with `num_workers=4` or any number above 0. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - `pytorch` version: 1.10.0+cu113 - Python version: 3.7 - PyArrow version: 6.0.1
false
1,177,946,153
https://api.github.com/repos/huggingface/datasets/issues/3992
https://github.com/huggingface/datasets/issues/3992
3,992
Image column is not decoded in map when using with with_transform
closed
1
2022-03-23T10:51:13
2022-12-13T16:59:06
2022-12-13T16:59:06
phihung
[ "bug" ]
## Describe the bug Image column is not _decoded_ in **map** when using with `with_transform` ## Steps to reproduce the bug ```python from datasets import Image, Dataset def add_C(batch): batch["C"] = batch["A"] return batch ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image()) ds = ds.with_transform(lambda x: x) # <= This line causes the problem ds = ds.map(add_C, batched=True) print(ds[0]) ``` ## Expected results ``` {'C': <PIL.PngImagePlugin.PngImageFile>, ...} ``` ## Actual results ``` {'C': {'bytes': None, 'path': 'image.png'}, ...} ``` If we remove the `with_transform` line, we get the expected result. ## Environment info - `datasets` version: 2.0.0 - Platform: Mac OSX - Python version: 3.8.12 - PyArrow version: 7.0.0
false
1,177,362,901
https://api.github.com/repos/huggingface/datasets/issues/3991
https://github.com/huggingface/datasets/issues/3991
3,991
Add Lung Image Database Consortium image collection (LIDC-IDRI) dataset
open
0
2022-03-22T22:16:05
2022-03-23T12:57:16
null
omarespejel
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** *Lung Image Database Consortium image collection (LIDC-IDRI)* - **Description:** *Consists of diagnostic and lung cancer screening thoracic computed tomography (CT) scans with marked-up annotated lesions. It is a web-accessible international resource for development, training, and evaluation of computer-assisted diagnostic (CAD) methods for lung cancer detection and diagnosis. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.* - **Data:** *[link to the Github repository or current dataset location](https://wiki.cancerimagingarchive.net/display/Public/LIDC-IDRI)* - **Motivation:** *Key dataset in the healthcare community* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). FYI @osanseviero @abidlabs
false
1,176,976,247
https://api.github.com/repos/huggingface/datasets/issues/3990
https://github.com/huggingface/datasets/issues/3990
3,990
Improve AutomaticSpeechRecognition task template
closed
2
2022-03-22T15:41:08
2022-03-23T17:12:40
2022-03-23T17:12:40
polinaeterna
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** The [AutomaticSpeechRecognition task template](https://github.com/huggingface/datasets/blob/master/src/datasets/tasks/automatic_speech_recognition.py) is outdated, as it uses the path to the audio file as the audio column instead of an Audio feature itself (I guess that's because the Audio feature didn't exist at the time this template was created). **Describe the solution you'd like** Change the audio column from a string path to the Audio feature.
false
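A short sketch of what the requested change means for users, assuming a dataset that stores audio as file paths: casting the column to the `Audio` feature makes examples return decoded arrays instead of bare paths. The file names below are placeholders.

```python
from datasets import Audio, Dataset

# Build a toy dataset whose "audio" column holds file paths, then cast it to the
# Audio feature so examples yield decoded arrays and a sampling rate.
ds = Dataset.from_dict({"audio": ["clip1.wav", "clip2.wav"]})  # placeholder file names
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
print(ds.features["audio"])
# Accessing ds[0]["audio"] now decodes the file into {"array": ..., "sampling_rate": ...}
```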
1,176,955,078
https://api.github.com/repos/huggingface/datasets/issues/3989
https://github.com/huggingface/datasets/pull/3989
3,989
Remove old wikipedia leftovers
closed
3
2022-03-22T15:25:46
2022-03-31T15:35:26
2022-03-31T15:30:16
albertvillanova
[]
After updating Wikipedia dataset, remove old wikipedia leftovers from doc.
true
1,176,858,540
https://api.github.com/repos/huggingface/datasets/issues/3988
https://github.com/huggingface/datasets/pull/3988
3,988
More consistent references in docs
closed
2
2022-03-22T14:18:41
2022-03-22T17:06:32
2022-03-22T16:50:44
mariosasko
[]
Aligns the internal references with style discussed in https://github.com/huggingface/datasets/pull/3980. cc @stevhliu
true
1,176,481,659
https://api.github.com/repos/huggingface/datasets/issues/3987
https://github.com/huggingface/datasets/pull/3987
3,987
Fix Faiss custom_index device
closed
1
2022-03-22T09:11:24
2022-03-24T12:18:59
2022-03-24T12:14:12
albertvillanova
[]
Currently, if both `custom_index` and `device` are passed to `FaissIndex`, `device` is silently ignored. This PR fixes this by raising a ValueError if both arguments are passed. Alternatively, the `custom_index` could be transferred to the target `device`.
true
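An illustrative sketch of the ambiguous call described above, assuming `faiss` is installed; after this change, passing both `custom_index` and `device` raises a `ValueError` instead of silently ignoring `device`.

```python
import faiss
from datasets import Dataset

# Tiny dataset with an embeddings column and a prebuilt FAISS index.
ds = Dataset.from_dict({"embeddings": [[0.0, 1.0], [1.0, 0.0]]})
index = faiss.IndexFlatL2(2)

# Ambiguous before the fix: should the custom index be moved to GPU 0 or not?
# With the fix, this combination raises a ValueError (a GPU faiss build would be
# needed anyway for device=0 to mean anything).
ds.add_faiss_index(column="embeddings", custom_index=index, device=0)
```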
1,176,429,565
https://api.github.com/repos/huggingface/datasets/issues/3986
https://github.com/huggingface/datasets/issues/3986
3,986
Dataset loads indefinitely after modifying default cache path (~/.cache/huggingface)
open
5
2022-03-22T08:23:21
2023-03-06T16:55:04
null
kelvinAI
[ "bug" ]
## Describe the bug Dataset loads indefinitely after modifying cache path (~/.cache/huggingface) If none of the environment variables are set, this custom dataset loads fine ( json-based dataset with custom dataset load script) ** Update: Transformer modules faces the same issue as well during loading ## A clear and concise description of what the bug is. Issue: - Dataset loading stalls / freezes indefinitely when HF_HOME is changed to a custom directory - No error code, had to terminate the process - There are some files created in the cache directory: ``` custom_cache_dir | -- modules | -- __init__.py | -- datasets_modules | -- __init__.py | -- datasets | -- __init__.py | -- script.py (Dataset loading script) | -- script.lock ``` There's no error nor any logs thrown so I'm out of ideas of how to to debug this. The custom dataset works fine if the default ~/.cache dir is used, but unfortunately it's out of space and we do not have permissions to modify the disk. ## Steps to reproduce the bug What I've tried: - Modifying HF_HOME (https://github.com/huggingface/transformers/issues/8703) - Modifying HF_DATASETS_CACHE (https://huggingface.co/docs/datasets/v1.12.0/cache.html) - Modifying cache_dir param during runtime ```python >>> from datasets import load_dataset >>> dataset = load_dataset('test_dataset', cache_dir='/path/to/new/cache') ``` - Disabling dataset cache ```python >>> from datasets import set_caching_enabled >>> set_caching_enabled(False) ``` ## Expected results Datasets should load / cache as usual with the only exception that cache directory is different ## Actual results Any actions taken above to change the cache directory results in loading indefinitely without terminating. ## Environment info - `transformers` version: 4.18.0.dev0 - Platform: Linux-4.15.0-54-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - Huggingface_hub version: 0.4.0 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No
false
1,175,982,937
https://api.github.com/repos/huggingface/datasets/issues/3985
https://github.com/huggingface/datasets/issues/3985
3,985
[image feature] Too many files open error when image feature is returned as a path
closed
0
2022-03-21T21:54:05
2022-03-23T18:19:27
2022-03-23T18:19:27
apsdehal
[ "bug" ]
## Describe the bug PR in context: #3967. If I load the dataset in this PR (TextVQA), and do a simple list comprehension on the dataset, I get `Too many open files error`. This is happening due to the way we are loading the image feature when a str path is returned from the `_generate_examples`. Specifically at https://github.com/huggingface/datasets/blob/508eb4ab5d52f590baa677b4f64b1cc069139f7b/src/datasets/features/image.py#L110, we are open the file handle to the image but never closing it. This in my understanding is causing the issue. ## Steps to reproduce the bug Pull the PR locally and run the following code ```python from datasets import load_dataset dataset = load_dataset("./datasets/textvqa")["train"] data = [item for item in dataset] # Error happens ``` ## Expected results List comprehension should work smoothly ## Actual results `Too many open files error` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.1.dev0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.10.0 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
false
1,175,822,117
https://api.github.com/repos/huggingface/datasets/issues/3984
https://github.com/huggingface/datasets/issues/3984
3,984
Local and automatic tests fail
closed
1
2022-03-21T19:07:37
2023-07-25T15:18:40
2023-07-25T15:18:40
MarkusSagen
[ "bug" ]
## Describe the bug Running the tests from CircleCI on a PR or locally fails, even with no changes. Tests seem to fail on `test_metric_common.py` ## Steps to reproduce the bug ```shell git clone https://huggingface/datasets.git cd datasets ``` ```python python -m pip install -e . pytest ``` ## Expected results All tests passing ## Actual results ``` tests/test_metric_common.py:91: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ ../.pyenv/versions/3.8.5/lib/python3.8/doctest.py:1336: in __run exec(compile(example.source, filename, "single", <doctest datasets_modules.metrics.ter.c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155.ter.Ter[3]>:1: in <module> ??? ../datasets/src/datasets/metric.py:430: in compute output = self._compute(**inputs, **compute_kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Metric(name: "ter", features: {'predictions': Value(dtype='string', id='sequence'), 'references': Sequence(feature=Val...ences=references) >>> print(results) {'score': 0.0, 'num_edits': 0, 'ref_length': 6.5} """, stored examples: 0) predictions = ['hello there general kenobi', 'foo bar foobar'] references = [['hello there general kenobi', 'hello there !'], ['foo bar foobar', 'foo bar foobar']] normalized = False, no_punct = False, asian_support = False, case_sensitive = False def _compute( self, predictions, references, normalized: bool = False, no_punct: bool = False, asian_support: bool = False, case_sensitive: bool = False, ): references_per_prediction = len(references[0]) if any(len(refs) != references_per_prediction for refs in references): raise ValueError("Sacrebleu requires the same number of references for each prediction") transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)] > sb_ter = TER(normalized, no_punct, asian_support, case_sensitive) E TypeError: __init__() takes 2 positional arguments but 5 were given /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/metrics/ter/c0cfb5adedac7eb15ffa47bba6a70fabd80f3eb906ee508abf5e1906285d1155/ter.py:130: TypeError ------------------------------ Captured stdout call ------------------------------- Trying: predictions = ["hello there general kenobi", "foo bar foobar"] Expecting nothing ok Trying: references = [["hello there general kenobi", "hello there !"], ["foo bar foobar", "foo bar foobar"]] Expecting nothing ok Trying: ter = datasets.load_metric("ter") Expecting nothing ok Trying: results = ter.compute(predictions=predictions, references=references) Expecting nothing ================================ warnings summary ================================= ../.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15 /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/hdfs/config.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses from imp import load_source ../datasets/src/datasets/commands/test.py:35 /home/markussagen/datasets/src/datasets/commands/test.py:35: PytestCollectionWarning: cannot collect test class 'TestCommand' because it has a __init__ constructor (from: tests/commands/test_test.py) class TestCommand(BaseDatasetsCLICommand): tests/commands/test_test.py:33 /home/markussagen/mydataset/tests/commands/test_test.py:33: PytestCollectionWarning: cannot collect test class 'TestCommandArgs' because it has a __new__ constructor (from: 
tests/commands/test_test.py) class TestCommandArgs: tests/test_arrow_dataset.py: 760 warnings tests/test_formatting.py: 60 warnings tests/test_search.py: 31 warnings tests/features/test_array_xd.py: 117 warnings /home/markussagen/datasets/src/datasets/formatting/formatting.py:197: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations (isinstance(x, np.ndarray) and (x.dtype == np.object or x.shape != array[0].shape)) tests/test_arrow_dataset.py: 154 warnings tests/features/test_array_xd.py: 1 warning /home/markussagen/datasets/src/datasets/formatting/formatting.py:201: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations return np.array(array, copy=False, **{**self.np_array_kwargs, "dtype": np.object}) tests/test_arrow_dataset.py: 60 warnings /home/markussagen/datasets/src/datasets/arrow_dataset.py:3105: DeprecationWarning: `np.str` is a deprecated alias for the builtin `str`. To silence this warning, use `str` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.str_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations elif np.issubdtype(values.dtype, np.str): tests/test_arrow_dataset.py: 138 warnings tests/test_formatting.py: 21 warnings /home/markussagen/datasets/src/datasets/formatting/tf_formatter.py:69: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations data_struct.dtype == np.object tests/test_arrow_dataset.py: 240 warnings tests/test_formatting.py: 20 warnings /home/markussagen/datasets/src/datasets/formatting/torch_formatter.py:49: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # pytorch tensors cannot be instantied from an array of objects tests/test_arrow_dataset.py: 12 warnings tests/test_search.py: 2 warnings tests/features/test_array_xd.py: 6 warnings tests/features/test_image.py: 4 warnings /home/markussagen/datasets/src/datasets/features/features.py:1129: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations [0] + [len(arr) for arr in l_arr], dtype=np.object tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_banking77 /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/banking77/aec0289529599d4572d76ab00c8944cb84f88410ad0c9e7da26189d31f62a55b/banking77.py:24: DeprecationWarning: invalid escape sequence \~ _CITATION = """\ tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_universal_dependencies /tmp/pytest-of-markussagen/pytest-1/cache/modules/datasets_modules/datasets/universal_dependencies/065e728dfe9a8371434a6e87132c2386a6eacab1a076d3a12aa417b994e6ef7d/universal_dependencies.py:6: DeprecationWarning: invalid escape sequence \= _CITATION = """\ tests/test_filesystem.py: 105 warnings /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/responses/__init__.py:398: DeprecationWarning: stream argument is deprecated. Use stream parameter in request directly warn( tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs tests/test_formatting.py::FormatterTest::test_jax_formatter_np_array_kwargs /home/markussagen/datasets/src/datasets/formatting/jax_formatter.py:57: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations if data_struct.dtype == np.object: # jax arrays cannot be instantied from an array of objects tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter tests/test_formatting.py::FormatterTest::test_jax_formatter /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/jax/_src/numpy/lax_numpy.py:3567: UserWarning: Explicitly requested dtype <class 'jax._src.numpy.lax_numpy.int64'> requested in array is not available, and will be truncated to dtype int32. To enable more dtypes, set the jax_enable_x64 configuration option or the JAX_ENABLE_X64 shell environment variable. See https://github.com/google/jax#current-gotchas for more. lax._check_user_dtype_supported(dtype, "array") tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/apscheduler/util.py:95: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html if obj.zone == 'local': tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features _audio /home/markussagen/.pyenv/versions/3.8.5/envs/huggingface/lib/python3.8/site-packages/librosa/core/constantq.py:1059: DeprecationWarning: `np.complex` is a deprecated alias for the builtin `complex`. To silence this warning, use `complex` by itself. 
Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.complex128` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations dtype=np.complex, tests/features/test_array_xd.py::test_array_xd_with_none /home/markussagen/mydataset/tests/features/test_array_xd.py:338: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations assert isinstance(arr, np.ndarray) and arr.dtype == np.object and arr.shape == (3,) -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================= short test summary info ============================= FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bleurt - I... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_chrf - Att... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_code_eval FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_comet - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_coval - Im... FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_frugalscore FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_perplexity FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_ter - Type... ``` ## Environment info - `datasets` version: 2.0.1.dev0 - Platform: Linux-5.16.11-76051611-generic-x86_64-with-glibc2.33 - Python version: 3.8.5 - PyArrow version: 5.0.0
false
1,175,759,412
https://api.github.com/repos/huggingface/datasets/issues/3983
https://github.com/huggingface/datasets/issues/3983
3,983
Infinitely attempting lock
closed
4
2022-03-21T18:11:57
2024-05-09T08:24:34
2022-05-06T16:12:18
jyrr
[]
I am trying to run one of the examples of the `transformers` repo, which makes use of `datasets`. Important to note is that I am trying to run this via a Databricks notebook, and all the files reside in the Databricks Filesystem (DBFS). ``` %sh python /dbfs/transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir /dbfs/transformers/tmp/tst-summarization \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate \ --log_level debug \ --cache_dir /dbfs/transformers/cache ``` All goes well until acquiring a lock -- ``` 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:19 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... 
03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Attempting to acquire lock 140386484514192 on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock 03/21/2022 17:53:20 - DEBUG - datasets.utils.filelock - Lock 140386484514192 not acquired on /dbfs/transformers/cache/_dbfs_transformers_cache_cnn_dailymail_3.0.0_3.0.0_3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234.lock, waiting 0.05 seconds ... ``` and so on. I imagine this has to do with DBFS -- is there a way to tackle this?
false
1,175,478,099
https://api.github.com/repos/huggingface/datasets/issues/3982
https://github.com/huggingface/datasets/pull/3982
3,982
Exclude Google Drive tests of the CI
closed
2
2022-03-21T14:34:16
2022-03-31T16:38:02
2022-03-21T14:51:35
lhoestq
[]
These tests make the CI spam the Google Drive API, the CI now gets banned by Google Drive very often. I think we can just skip these tests from the CI for now. In the future we could have a CI job that runs only once a day or once a week for such cases cc @albertvillanova @mariosasko @severo Close #3415 ![image](https://user-images.githubusercontent.com/42851186/159283608-fdeca1ac-b57f-4fa3-bf09-6fa5361c494f.png)
true
1,175,423,517
https://api.github.com/repos/huggingface/datasets/issues/3981
https://github.com/huggingface/datasets/pull/3981
3,981
Add TER metric card
closed
1
2022-03-21T13:54:36
2022-03-29T13:57:11
2022-03-29T13:51:40
emibaylor
[]
Add TER metric card This card is still missing content for the following sections: - **Limitations & Biases** - **Values from Papers** If anyone has any ideas for either of the above, feel free to either add them or point me to them and I'll add them!
true
1,175,412,905
https://api.github.com/repos/huggingface/datasets/issues/3980
https://github.com/huggingface/datasets/pull/3980
3,980
Add tip on how to speed up loading with ImageFolder
closed
5
2022-03-21T13:45:58
2022-03-22T13:39:45
2022-03-22T13:34:56
mariosasko
[]
This PR does two things: * adds a tip on how to speed up loading of a large number of files with ImageFolder (motivated by [this issue](https://github.com/huggingface/datasets/issues/3960)) * replaces the current references to the `Dataset` methods in the Image Processing doc with their fully qualified counterparts (to align it with the Audio Processing doc) cc @stevhliu
true
1,175,258,969
https://api.github.com/repos/huggingface/datasets/issues/3979
https://github.com/huggingface/datasets/pull/3979
3,979
Fix google drive streaming for small files
closed
4
2022-03-21T11:38:46
2023-09-24T09:55:19
2022-03-21T14:25:58
lhoestq
[]
Google Drive made another change recently, following #3787 and #3843. In particular, Google Drive now returns 403 for GET requests with `confirm=t` when a file doesn't have a virus warning message. I fixed this by passing `confirm=t` if and only if there is one (i.e. when the status code is 200 for HEAD).
true
1,175,226,456
https://api.github.com/repos/huggingface/datasets/issues/3978
https://github.com/huggingface/datasets/issues/3978
3,978
I can't view HFcallback dataset for ASR Space
open
4
2022-03-21T11:07:49
2023-09-25T12:19:53
null
kingabzpro
[]
## Dataset viewer issue for '*Urdu-ASR-flags*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)* *I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.* Am I the one who added this dataset? Yes
false
1,175,049,927
https://api.github.com/repos/huggingface/datasets/issues/3977
https://github.com/huggingface/datasets/issues/3977
3,977
Adapt `docs/README.md` for datasets
closed
1
2022-03-21T08:26:49
2023-02-27T10:32:37
2023-02-27T10:32:37
qqaatw
[ "documentation" ]
## Describe the bug Currently `docs/README.md` is a direct copy from `transformers`, we should probably adapt this file for `datasets`.
false
1,175,043,780
https://api.github.com/repos/huggingface/datasets/issues/3976
https://github.com/huggingface/datasets/pull/3976
3,976
Fix main classes reference in docs
closed
3
2022-03-21T08:19:46
2022-04-12T14:19:39
2022-04-12T14:19:38
qqaatw
[]
Currently, the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`; this PR fixes the issue by wrapping the code examples on this page in markdown code blocks. There are other examples in the datasets library with this issue.
true
1,174,678,942
https://api.github.com/repos/huggingface/datasets/issues/3975
https://github.com/huggingface/datasets/pull/3975
3,975
Update many missing tags to dataset README's
closed
0
2022-03-20T20:42:27
2022-03-21T18:39:52
2022-03-21T18:39:52
MarkusSagen
[]
I've started to go through the available datasets and noticed that there are 127 datasets that do not have all the tags, so I started filling them in, beginning with some of the most common and QA datasets. I'm not 100% certain that the task_id is correct for SuperGLUE. If anyone is browsing the issues and would like to help make Hugging Face datasets even more feature complete and awesome, feel free to use this tool I wrote to find the missing tags in the [datacards](https://github.com/Hugging-Face-Supporter/datacards).
true
1,174,485,044
https://api.github.com/repos/huggingface/datasets/issues/3974
https://github.com/huggingface/datasets/pull/3974
3,974
Add XFUN dataset
closed
8
2022-03-20T09:24:54
2022-10-03T09:38:16
2022-10-03T09:36:22
qqaatw
[ "dataset contribution" ]
This PR adds XFUN dataset. Home page and repository: https://github.com/doc-analysis/XFUND Source code: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/xfun.py
true
1,174,455,431
https://api.github.com/repos/huggingface/datasets/issues/3973
https://github.com/huggingface/datasets/issues/3973
3,973
ConnectionError and SSLError
closed
6
2022-03-20T06:45:37
2022-03-30T08:13:32
2022-03-30T08:13:32
yanyu2015
[ "bug" ]
code ``` from datasets import load_dataset dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') ``` bug report ``` --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) ~\AppData\Local\Temp/ipykernel_29788/2615425180.py in <module> ----> 1 dataset = load_dataset('oscar', 'unshuffled_deduplicated_it') D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1658 1659 # Create a dataset builder -> 1660 builder_instance = load_dataset_builder( 1661 path=path, 1662 name=name, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1484 download_config = download_config.copy() if download_config else DownloadConfig() 1485 download_config.use_auth_token = use_auth_token -> 1486 dataset_module = dataset_module_factory( 1487 path, 1488 revision=revision, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1236 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}" 1237 ) from None -> 1238 raise e1 from None 1239 else: 1240 raise FileNotFoundError( D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1173 if path.count("/") == 0: # even though the dataset is on the Hub, we get it from GitHub for now 1174 # TODO(QL): use a Hub dataset module factory instead of GitHub -> 1175 return GithubDatasetModuleFactory( 1176 path, 1177 revision=revision, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in get_module(self) 531 revision = self.revision 532 try: --> 533 local_path = self.download_loading_script(revision) 534 except FileNotFoundError: 535 if revision is not None or os.getenv("HF_SCRIPTS_VERSION", None) is not None: D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\load.py in download_loading_script(self, revision) 511 if download_config.download_desc is None: 512 download_config.download_desc = "Downloading builder script" --> 513 return cached_path(file_path, download_config=download_config) 514 515 def download_dataset_infos_file(self, revision: Optional[str]) -> str: D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 232 if is_remote_url(url_or_filename): 233 # URL, so get it from the cache (downloading if necessary) --> 234 output_path = get_from_cache( 235 url_or_filename, 236 cache_dir=cache_dir, D:\DataScience\PythonSet\IDES\anaconda\lib\site-packages\datasets\utils\file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 580 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 581 if head_error is not None: --> 582 raise 
ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") 583 elif response is not None: 584 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/oscar/oscar.py (SSLError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.0.0/datasets/oscar/oscar.py (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')))"))) ``` It may be caused by Caused by SSLError(in China?) because it works well on google colab. So how can I download this dataset manually?
false
1,174,402,033
https://api.github.com/repos/huggingface/datasets/issues/3972
https://github.com/huggingface/datasets/pull/3972
3,972
Adding Roman Urdu Hate Speech dataset
closed
3
2022-03-20T00:19:26
2022-03-25T15:56:19
2022-03-25T15:51:20
bp-high
[]
This pull request adds the Roman Urdu Hate Speech dataset.
true
1,174,329,442
https://api.github.com/repos/huggingface/datasets/issues/3971
https://github.com/huggingface/datasets/pull/3971
3,971
Applied index-filters on scores in search.py.
closed
1
2022-03-19T18:43:42
2022-04-12T14:48:23
2022-04-12T14:41:58
vishalsrao
[]
Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961. Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py.
true
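A minimal sketch of the filtering idea (my reading of the linked issue, not the PR's actual code): FAISS pads missing neighbors with index -1, so the corresponding scores should be dropped in the same pass to keep scores and examples aligned.

```python
import numpy as np

def filter_valid(scores: np.ndarray, indices: np.ndarray):
    """Keep only results whose FAISS index is valid (>= 0), preserving alignment."""
    keep = indices >= 0
    return scores[keep], indices[keep]

scores = np.array([0.12, 0.48, -1.0])
indices = np.array([3, 7, -1])
print(filter_valid(scores, indices))  # drops the padded (-1) entry from both arrays
```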
1,174,327,367
https://api.github.com/repos/huggingface/datasets/issues/3970
https://github.com/huggingface/datasets/pull/3970
3,970
Apply index-filters on scores in get_nearest_examples and get_nearest…
closed
0
2022-03-19T18:32:31
2022-03-19T18:38:12
2022-03-19T18:38:12
vishalsrao
[]
Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961. Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py.
true
1,174,273,824
https://api.github.com/repos/huggingface/datasets/issues/3969
https://github.com/huggingface/datasets/issues/3969
3,969
Cannot preview cnn_dailymail dataset
closed
10
2022-03-19T14:08:57
2022-04-20T15:52:49
2022-04-20T15:52:49
hasan-besh
[]
## Dataset viewer issue for '*cnn_dailymail*' **Link:** https://huggingface.co/datasets/cnn_dailymail *short description of the issue* Am I the one who added this dataset ? Yes-No
false
1,174,193,962
https://api.github.com/repos/huggingface/datasets/issues/3968
https://github.com/huggingface/datasets/issues/3968
3,968
Cannot preview 'indonesian-nlp/eli5_id' dataset
closed
5
2022-03-19T06:54:09
2022-03-24T16:34:24
2022-03-24T16:34:24
cahya-wirawan
[ "dataset-viewer" ]
## Dataset viewer issue for '*indonesian-nlp/eli5_id*' **Link:** https://huggingface.co/datasets/indonesian-nlp/eli5_id I can not see the dataset preview. ``` Server Error Status code: 400 Exception: Status400Error Message: Not found. Maybe the cache is missing, or maybe the dataset does not exist. ``` Am I the one who added this dataset ? Yes
false
1,174,107,128
https://api.github.com/repos/huggingface/datasets/issues/3967
https://github.com/huggingface/datasets/pull/3967
3,967
[feat] Add TextVQA dataset
closed
3
2022-03-18T23:29:39
2022-05-05T06:51:31
2022-05-05T06:44:29
apsdehal
[]
This would be the first classification-based vision-and-language dataset in the datasets library. Currently, the dataset downloads everything you need beforehand. See the [paper](https://arxiv.org/abs/1904.08920) for more details. Test Plan: - Ran the full and the dummy data test locally
true
1,173,883,084
https://api.github.com/repos/huggingface/datasets/issues/3966
https://github.com/huggingface/datasets/pull/3966
3,966
Create metric card for BERTScore
closed
1
2022-03-18T18:21:56
2022-03-22T13:35:28
2022-03-22T13:30:56
sashavor
[]
Proposing a metric card for BERTScore
true
1,173,708,739
https://api.github.com/repos/huggingface/datasets/issues/3965
https://github.com/huggingface/datasets/issues/3965
3,965
TypeError: Couldn't cast array of type for JSONLines dataset
closed
1
2022-03-18T15:17:53
2022-05-06T16:13:51
2022-05-06T16:13:51
lewtun
[ "bug" ]
## Describe the bug One of the [course participants](https://discuss.huggingface.co/t/chapter-5-questions/11744/20?u=lewtun) is having trouble loading a JSONLines dataset that's composed of the GitHub issues from `spacy` (see stack trace below). This reminds me a bit of #2799 where one can load the dataset in `pandas` but not in `datasets` and perhaps increasing the `block_size` is needed again. ## Steps to reproduce the bug ```python from datasets import load_dataset from huggingface_hub import hf_hub_url import pandas as pd # returns 'https://huggingface.co/datasets/Evan/spaCy-github-issues/resolve/main/spacy-issues.jsonl' data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset") # throws TypeError: Couldn't cast array of type dset = load_dataset("json", data_files=data_files, split="test") # no problem with pandas - note this take a while as the file is >2GB df = pd.read_json(data_files, orient="records", lines=True) df.head() ``` ## Expected results I can load any line-separated JSON file, similar to pandas. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/load.py", line 1702, in load_dataset builder_instance.download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 594, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 683, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/builder.py", line 1136, in _prepare_split writer.write_table(table) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 511, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1121, in table_cast return cast_table_to_features(table, Features.from_arrow_schema(schema)) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in cast_table_to_features arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1102, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 918, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1086, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 944, in wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 920, in 
wrapper return func(array, *args, **kwargs) File "/Users/lewtun/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/table.py", line 1019, in array_cast raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") TypeError: Couldn't cast array of type struct<url: string, html_url: string, labels_url: string, id: int64, node_id: string, number: int64, title: string, description: string, creator: struct<login: string, id: int64, node_id: string, avatar_url: string, gravatar_id: string, url: string, html_url: string, followers_url: string, following_url: string, gists_url: string, starred_url: string, subscriptions_url: string, organizations_url: string, repos_url: string, events_url: string, received_events_url: string, type: string, site_admin: bool>, open_issues: int64, closed_issues: int64, state: string, created_at: timestamp[s], updated_at: timestamp[s], due_on: null, closed_at: timestamp[s]> to null ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.9.7 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
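For reference, a rough sketch of the block-size workaround mentioned above — the `chunksize` kwarg for the packaged `json` builder is an assumption that may differ across `datasets` versions, so treat this as a sketch rather than a verified fix:
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_url

data_files = hf_hub_url(repo_id="Evan/spaCy-github-issues", filename="spacy-issues.jsonl", repo_type="dataset")

# Read the JSON Lines file in larger blocks; `chunksize` is in bytes.
dset = load_dataset("json", data_files=data_files, split="train", chunksize=40 << 20)
```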
false
1,173,564,993
https://api.github.com/repos/huggingface/datasets/issues/3964
https://github.com/huggingface/datasets/issues/3964
3,964
Add default Audio Loader
closed
0
2022-03-18T12:58:55
2022-08-22T14:20:46
2022-08-22T14:20:46
polinaeterna
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Writing a custom dataset loading script might be a bit challenging for users. **Describe the solution you'd like** Add a default Audio loader (analogous to ImageFolder) for small datasets with a standard directory structure. **Describe alternatives you've considered** Create a custom loading script? That's what users are doing now.
false
1,173,492,562
https://api.github.com/repos/huggingface/datasets/issues/3963
https://github.com/huggingface/datasets/pull/3963
3,963
Add Audio Folder
closed
14
2022-03-18T11:40:09
2022-06-15T16:33:19
2022-06-15T16:33:19
polinaeterna
[]
Would resolve #3964. AudioFolder loads a .txt file with transcriptions and creates a dataset, as a single split (train), from all audio files in the provided directory that have a transcription (independently of the directory structure). Can be loaded via: ```python # for local dirs dataset = load_dataset("audiofolder", data_dir="/path/to/folder", transcripts_filename="transcripts.txt") ``` ```python # for local and remote zip archives dataset = load_dataset("audiofolder", data_files="path/to/archive/archive.zip", transcripts_filename="transcripts.txt") ``` The default transcriptions filename is `transcripts.txt`. It should have the following structure: ``` audio_id_1 transcription text 1 audio_id_2 transcription text 2 ``` The separator is `\t`! --- Sorry for the first old commits from the other branch, don't know how that happened...
true
1,173,482,291
https://api.github.com/repos/huggingface/datasets/issues/3962
https://github.com/huggingface/datasets/pull/3962
3,962
Fix flatten of Sequence feature type
closed
1
2022-03-18T11:27:42
2022-03-21T14:40:47
2022-03-21T14:36:12
lhoestq
[]
The `Sequence` features type is not correctly flattened if it contains a dictionary. This PR fixes this, and I added a test case for this. Close https://github.com/huggingface/datasets/issues/3795
true
1,173,223,086
https://api.github.com/repos/huggingface/datasets/issues/3961
https://github.com/huggingface/datasets/issues/3961
3,961
Scores from Index at extra positions are not filtered out
closed
2
2022-03-18T06:13:23
2022-04-12T14:41:58
2022-04-12T14:41:58
vishalsrao
[ "bug" ]
If a FAISS index has fewer records than the requested number of top results (k), then it returns -1 in indices for the additional positions. The get_nearest_examples method only filters out the extra results from the dataset samples. It would be better to filter out extra scores too. Reference: https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/search.py#L693
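For illustration, a minimal sketch of the kind of filtering being proposed — hypothetical array values, not the actual `search.py` code:
```python
import numpy as np

# Hypothetical FAISS output when the index holds fewer records than the requested k:
# missing positions are padded with index -1 and a meaningless score.
scores = np.array([0.91, 0.73, 3.4e38])
indices = np.array([12, 3, -1])

keep = indices != -1                       # drop the padded positions
filtered_scores = scores[keep].tolist()    # [0.91, 0.73]
filtered_indices = indices[keep].tolist()  # [12, 3]
print(filtered_scores, filtered_indices)
```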
false
1,173,148,884
https://api.github.com/repos/huggingface/datasets/issues/3960
https://github.com/huggingface/datasets/issues/3960
3,960
Load local dataset error
open
13
2022-03-18T03:32:49
2023-08-02T17:12:20
null
TXacs
[ "bug", "dataset bug" ]
When i used the datasets==1.11.0, it's all right. Util update the latest version, it get the error like this: ``` >>> from datasets import load_dataset >>> data_files={'train': ['/ssd/datasets/imagenet/pytorch/train'], 'validation': ['/ssd/datasets/imagenet/pytorch/val']} >>> ds = load_dataset('nateraw/image-folder', data_files=data_files, cache_dir='./', task='image-classification') [] https://huggingface.co/datasets/nateraw/image-folder/resolve/main/ /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1671, in load_dataset **config_kwargs, File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/load.py", line 1521, in load_dataset_builder **config_kwargs, File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 1031, in __init__ super().__init__(*args, **kwargs) File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/builder.py", line 255, in __init__ sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 584, in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 546, in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 196, in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): File "/gf3/home/txacs/gv3/anaconda3/envs/txacs/lib/python3.6/site-packages/datasets/data_files.py", line 146, in _resolve_single_pattern_locally raise FileNotFoundError(error_msg) FileNotFoundError: Unable to find '/ssd/datasets/imagenet/pytorch/train' at /dat/txacs/git/txacs/examples/image-classification/https:/huggingface.co/datasets/nateraw/image-folder/resolve/main ``` I need some help to solve the problem, thanks!
false
1,172,872,695
https://api.github.com/repos/huggingface/datasets/issues/3959
https://github.com/huggingface/datasets/issues/3959
3,959
Medium-sized dataset conversion from pandas causes a crash
closed
3
2022-03-17T20:20:35
2022-12-12T17:14:06
2022-04-20T12:35:37
Antymon
[ "bug" ]
Hi, I am suffering from the following issue: ## Describe the bug Conversion to arrow dataset from pandas dataframe of a certain size deterministically causes the following crash: ``` File "/home/datasets_crash.py", line 7, in <module> arrow=datasets.Dataset.from_pandas(d) File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 783, in from_pandas table = InMemoryTable.from_pandas( File "/home/.conda/envs/tools/lib/python3.9/site-packages/datasets/table.py", line 379, in from_pandas return cls(pa.Table.from_pandas(*args, **kwargs)) File "pyarrow/table.pxi", line 1487, in pyarrow.lib.Table.from_pandas File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458) ``` ## Steps to reproduce the bug I have a dataset made from replicated single example mocking a dict representation of a publication. I copy over this example 140k times and create a pandas frame. I use 'Dataset.from_pandas' and boom ```python # Sample code to reproduce the bug import copy import datasets import pandas # serialized dict is quite long to be realistic representation of a publication content paper_as_dict=eval("{'article_id': '2020-11-05T14:25:05.321Z02bc3286-91b7-486a-9c74-4f457fbc586a', 'sections': [{'section_id': 'body.0', 'paragraphs': [{'sentences': ['11010111001000000011010011110011101110111011000100001010011100101001111010110111101011101111101010101110001111011110111010111', '1101100110110010010101010100110011000111001100100000011100010111010000011100001101111000000011010111001111001010101111110011010010111011000110100110010', '101011011000010100000010011001011011000000110011011110000101001110110000010001100110111100011100110101010010110000101', '1101101110101010101000000010101011111001111000101000110001110100111000100000011001110100110000110100111011001010110011101001001110']}]}, {'section_id': 'body.1', 'paragraphs': [{'sentences': ['11111100100100111000101001011110100110011001011011001001100110100111011010000110011000010001010100101110001001101011110111110101111100001001001000011110110010110011100110110111110011100011111000101010111010101011001110000100000001001010010010011101111100011010', '10101000110000110111110011101111000101010010001001010000001111001100000010001000001110111110010011101000000111011', '111010011111101111110011111110110001000111100101001000100110101111110000111000111111110000101001101000110011010111011101001010110110001000100000001110001111100110110001110001001100011010100110100010100111000110110100010010100101011110000110000101010010001110101100000']}, {'sentences': ['111110011110110110001111001101011110010110100011101010110101011001101110110111100000111101010110011110111101001111000101110001001010010101100111111001001000011101000100110000101', '011101101101111101001100101010000010111101100101110100101000001100010100110011010010100001101001110111100011010011011111000111111101110001010111010011010110001000010101100110000100010110101110110011001010011001100111101100001001', '1110001011011010101001100001110001110001000111111111101110100001011101101001110100000110000011010001101010101110101110101101001010100100010000000010110010010010', '11101111000111111100111110010000111101110010010101001111011001111110011000011100110001010010000100101010', 
'111000110110110010101100010010100001100100110010101000001000011101000100101011011010000011001011011111001101100001110010100001111110111001001010101100100110001011011100000101010010000000001100010000101100110110111101110010100010011101110110111010011011000011001010111011100000000010101001011000100000011010100011101001011001010010011110100100']}, {'sentences': ['001101111100001101001001001110000110010101011101001001111111011000111001111011101011110111000000100001110110101110001010001111110100010', '0000110010110101001100011011000011001101001110001000000110010101000011101011110110000000100111000001010000101011111011110001001100001110101010101110101011111000000011001111011110001010010111010000100100000001111001011100101111010101111001001101100101001101111000111011010110010001010010010111010000001101101111100101000111101011001000101', '00000101100101100111101010000101011100101100001100011001100100001100001010001010010011001001111001000010100010000110100111110000001000101000111100010111110011000100000111100010000100010111100010101', '111100110010100110000010010101010101110011110100000101110000000111010101111001011110010101001110000001001000010110010010011110111110010110100101110011001101110111001111100011100100011110010010100101011111111']}, {'sentences': ['1100001110101111000001011001100110001011100011110110010011001000101000011110010101010011011000111010000101010011010000000111011001000010100101000011111101000000000101111000', '1110101000100110001111000011000101110111001100101010011001100011010011111111111010101011010101010011000101001100100000110010100110110110110001101100', '00010001100100101100100111111110111111101000100110101111101111110101110001010001011100000000000011010101101001111010001110101101110011001011111101110100010000111101', '011100011101011001000110010110100100000010100010010110011000000010101110011111111101010010010001100110101010010001100010110011110001011011101010111111100100110110010111101001100101010111001', '10111000011010101111110110011010101011111001000001010010111111010010111111100100010100110100101101110100110011001000110100000111000100110000001000111010', '0010011111111011100111010001111001011101001010000010110000010111000101001101000011101110100100000000100100010010101010100011100101001000100110110000010111111110000011011101111000111010']}]}, {'section_id': 'body.2.0', 'paragraphs': [{'sentences': ['110010010011001110100100011001111100010011110111101011011011001010010010010011101011', '000110101110011011101011000000100011111000001100011011110101101011000110011010001010001101101100000111100101001011111001001101111', '1000011100100000100100100010010000111011000100110010000011110111100110110001101001010100011111010100101000111', '11110111111000110010000000000100010010110001100010001010000111011000101100011010010101110110011010110101001101110011101011101100000001000100101011010110110100101011101010010101101000011110000010101011001011000001000000001010110000100010000100011110101001111100001000100000111000001010011111111110101010100011011000010000111000110', '1001000111011000111110001111111001100001000000101000111011101101100101010110001101000000001111010111100011111000000100001001110', '100110010111010101111010100000010001110101111001010010001100001110100100100101110011010101001000100101000100100011001110001100111000010010011011000010011010010000110001000000100011110010110110011010001100111010111110011']}, {'sentences': 
['10010101011100010111011111001001001010100011001001111101101001000000001111101110000111101011000001001011101110101001100010010001101111001110000100010010001001101111011111110010011011110011', '110001110010110000101111000000110010010010100000010100001111101101000101100000000110000000011111011001111000010110110001011010011011101100100110011000100110101010111010111111000111001111010110010001001110100001011011000110000000111101110000001111011011101110100000100010000110001000000110100000', '101010000000010000110110111000110000100111000001110100101101101010001010010010101010100111010110001001000101011110010011001001001110111001101101100100011110011011110101100010110111001010000001000110100000001010011111111110111010011110001001110100011011000101011000110110011011010110100100011111111011100111110110000110011011110110110011101010101111001101010110101000000001100101111010000101110', '1010100110111111111000110110111110010100000100001110101110111001011000010001110110001111111110000101001001110010001110000111010101111010111111011100100011100111111101101111000010001100101000010001100110110100110111111100100011001011000001111110010100110111000010011110111011001101100000101011111110101000011000010', '00000001110000101001110101110011101001110011000111111101111101111000010011100000101000001011001110', '101000111010010000011010011010011010010010100010110100011100100111011101010100101110100111010001000000', '01101000110001101011001101100010100011011010000000001010101000010101000110100010000000110001110001010010000000101101000011000100000110011101100001010100011111101010010110001101110101010111101100001110000011001101', '0010010111000011110010011110001010100000111100001011010100100010101010010011101101100110001001111001000110000111011110010000110101010110111111010110100000011010001001010001000110001101101000101110001011110000101101110000110010110010111001100010011011100011', '00110111110000000100110111101011000100100110001000001001101011001000010100100001100111100110000110110101111010000010101000000101000011001011101001', '0100100001000111001110110110000001000100111001101101110100100111010111110001110010110111100110011111001001000011101110100101111011000110100000111010011101']}, {'sentences': ['100001001011101111111100110111011110001101111101100001000110110000100101011000000100000', '10101001001111110101001010100110011110101101001']}]}, {'section_id': 'body.2.0.0', 'paragraphs': [{'sentences': ['1110101100001100011000101000010000100010101101010110101011100101110110110111010101001100100000000111011001000100011110101011111010100101001010000010001001101010100011110010101110011001100010000100110011000011101010001000111001000001100', '101000000011001001110101000100101010000111000111100010010001111111100110001100000100011010011010010101101111010101010000110011101001111001111011111001110001010000110101101011101111010000001100', '01100001011110010100000101001101111101010011100010011001011110110010010011100101000', '0011100111000101111000010001111100000111000101110001111010001100001000111010000101100001110101100111111', '00001100000011110001011010010110000000111110110001111000110000011011001110000000100011001010110000010000010001101010101100000010011011000101011111100010010', '1011101011101111000001100100111000011000010010011110011000110111010010111100111101100110011010000110000111000110111110101111000001000010011101111000110000100011110101101101001101000110010000001000010011011010101100', '1000010011100011100000010011011111111110101101111011101010010111000000101011000000110101111000010011', 
'01100000110011001110101111101101011001011101000010001100101010100011010101010100111011011110100010100111', '011011010100011011110010101000110001111110110']}]}, {'section_id': 'body.2.0.1', 'paragraphs': [{'sentences': ['00111011011101000100100111000001101001011000111100100010101001010011001011000010011111001100000100010001100101110011001000110001101011010111011111011000010011010010111010011111101000110111011100010011100111111110110111011', '011011010101101101010000001011010110011111011110100111010101010110001101000010011111000011100', '110001000110010000000111101110111110101110111000101000010001110101000101001000111000010001011101010000110001010001101001001110111110111010111010011101000101101010000', '001000111110100110000001111100000111001110111001110111001000111010001001100111001101000001001001010111000111011100001111011001111110001011000111110011111101011101000100101001111011100001000110101010101111111110011111111011000101110001000000000100111011111011001100111', '11010101100010010100010010010101001011001011000001100010101111111101001101110011001010010100000111010101', '01110000110011111000110010011010000011100000010010001111100010010100100001011011111110001100', '011101111100011101100111110101111001101010010001001110101100001101000000111000']}]}, {'section_id': 'body.2.0.2', 'paragraphs': [{'sentences': ['0111011000110100110000001011001110111000011110100111011000000001000010001111111001101111011100101110101101000111000101000010000111011010110000011101111110111110100111000111000011', '00100110111000110101100111000110100010011010010101001010011000000101000110100110011010011111000100000011000000010001010000100111101011111111101010001111010000001011100001110100000101001101101010011011101000', '000001110001010010100101010100010101001100011001001101101101110111011111101010010111010110110111011110101100001000011110111011001', '0001110010111110100110110011000001111100100100110101011010010101010100101000010101000100101000011011', '1000010010010101001100101110010111010100000110101110000000111001111111001011111010000011110001011001001001000101', '0001111100111010010100010111010110011011000000001111010010110001000011010001100111101110001110000011010101111100001000011010110100000100100001111011110110000000101000010001111001010010110101110111101101110111000100', '1000101100001000100001101110111110000100000001000010101111010011010010010111011010100011001000100100001010001100110']}]}, {'section_id': 'body.2.0.3', 'paragraphs': [{'sentences': ['1010100111100011110110101011100001011010011010100100010011000110111000001010010110111001001101111000010100100110101001010001010001000110010000001', '100010101010100111000011111101010100101110011000100011100100100111000010000011001010010111011010000101010011011110111001010110', '0110000110110110110011011000011010010000001010011000010001011110110010000100011111010100110111111010010111000101111', '10100100000011100010110110011111011011101101111000001001010100001001011010000011001010101100000', '1011111111100001001100000010000100110010101000010100111111110010110011101110000101101011101', '10001111110000011100100000101100000000010000100000011100110000011110111010011101010111101001111000100000000110000011010010001100110111100001001011101011001111110010100111001001010001010011010010010111001101110101110000101011', '101101111111101101010010000110111110000110000111001001010011111101011001011010101100010100110101101011100111100100110010001011110001110010000011101100100100001001110010000010011111100110101']}]}, {'section_id': 'body.2.1', 'paragraphs': [{'sentences': 
['1010010011010011001111111001000110010001101111101011001011011000101001010101010001000110100011110101110001110110111010010010100100111000101100100101111110100000011111001101010111101010100101011011110111111110', '000010101101111100000110010110011001111100001101011101000100010001001001000000101101000001110000011010111100000010010000010101110101100010011000101110110111111001000101000111000110100001001100001010101010100011', '0000000011101110111100100010111100101010110001111101110110010000100100010000101001101111001111001001100110010011010000101001110010000000100101011101001010100100011101101001011000010111110100101010110110011001110000110010010111110110101100001011101001100111010001000010111010001010000100010010011110111100110011100011111101101000011100111110101010100110001100100000100011011010111000111110010110100010111101001001101000001100100010000111110000011101111100111101000000000']}, {'sentences': ['01011000010110011000000101101000110101011010100111011001001001100001101101111101111001101111100101111001101011011001011110110110110100001100111111010100101110111111101000101100101010110011111011100101101010100110111001111100100011001110011101000110100000001100001100110001110101001000011010000110101011010000001111100100000100101110011000001001010011011101100011000001100000011', '1001100000101000000011110100110001100001101001100011010000111111010110101111001000100111000011010100100000110110001', '10010011000110110111010110000010010000000111101000100101100111101101001100111110101001001111100001110011110000010101000001000000010100011011110011000100110101001100110111111001101000011010100110000000011110001000101010101000110010010']}]}, {'section_id': 'body.2.2', 'paragraphs': [{'sentences': ['000011000000010011000001101111000101000111111111111010001011110000011001010111010101010110001111110000010', '10101001101011101010001111011000110100000100011110010001100111111101101100010010111110110101101011000011000001101110010111011111100111110000000101110010111', '100001011110010111010110001101101001100000000001000010110101011001111100101101101111010010111111000000111001111010011111000100010001111011110001010000110010101010111110100101011011100001010101000001011011111111101', '1000110111111011101000110101001111111111000100011001000011010100001010011110001111010011011111000111011100101001011111001000010101110110101000111011111111010010001101001010110111000011110101011000010000110', '1011100000100000010101101111001001100110111000010001011010111111000000001010101001111011101011010101101001111101101100101001011101000011011010001001101100100111101111111100010011010101111011100001100001000100101100100110101000010000011000000011001100000110000001', '0001001101111001111111010000001101010110110110100110110100000100110101101010010101011000010010111011000010111110000001110101110111000010011000100110111001000111011000100101110111111', '0110010010011000011010001111001100101001100001001000010100101100010110000000101010110001001010001100111101010001110010010000111011100101101010111111101001100010001011100110010100110111010101000100001110000101110011111011111000010101010110101100010010111100100010010100111110111100101010100011101001110110010000011110001010101010000100010000100111001111011101', '000001010000010001100000101011000000110101000100010111111100101111111000110111001001110110101111110011100001001000011001010000011011', '0101101001010101001101010100011000111011001000100001110100110011100000001001010110001101010110011100111111100101101111101111011001111111110010111010011011011111011011110000101011010', 
'11000001110111000001100100001110000111001010000101011011101010111001011100010010010111111111000011111110010111100011100110001001100011111010100111110111001110010', '0100010110100001010101110111100011100100010111111011101001100101111110101011010010101111001000101001111000001110001100011001110010100110101100110100100000001010101101011110011001000101100111001001001110100', '100000100010011111001101010000100110011110001100000010010110110100000111111011010100101111010111001110101000100001111101001110000011010110000010100', '00100110000011100101000110110001000011101000011010101000010001111011100001111111001011100111101000001000000110110001000101111010010010001100111', '0110110100011001110011001111100010101001011111011001011001101101010010101101110101010100001000100100000111101110001001110111000110011101101010100000101', '0011111010010011011101010110100110000011000011100100101011011001110110001110001111000011010111011000110100111111011101110111000010010000011011010011011100000011101100110110100100000010110101110100110101001100111011101001010111011011110100110101110010011011010001010111110011001000010100010101010010110010010110000100110001000011010011000100101011010100100111010']}]}]}") d=pandas.DataFrame.from_records(copy.deepcopy(paper_as_dict) for _ in range(140_100)) arrow=datasets.Dataset.from_pandas(d) ``` ## Expected results The dataset should be converted without error. ## Actual results Error `pyarrow.lib.ArrowInvalid: Column 1: In chunk 0: Invalid: List child array invalid: Invalid: Struct child array #1 has length smaller than expected for struct array (1192457 < 1192458)` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.18.4 pandas==1.3.5 - Platform: macOS 11.6 or CentOS Linux 7 (Core) - Python version: Python 3.9.7 - PyArrow version: pyarrow==3.0.0
false
1,172,657,981
https://api.github.com/repos/huggingface/datasets/issues/3958
https://github.com/huggingface/datasets/pull/3958
3,958
Update Wikipedia metadata
closed
2
2022-03-17T17:50:05
2022-03-21T12:26:48
2022-03-21T12:26:47
albertvillanova
[]
This PR updates: - dataset card - metadata JSON
true
1,172,401,455
https://api.github.com/repos/huggingface/datasets/issues/3957
https://github.com/huggingface/datasets/pull/3957
3,957
Fix xtreme s metrics
closed
2
2022-03-17T13:39:04
2022-03-18T13:46:19
2022-03-18T13:42:16
patrickvonplaten
[]
We in fact do need BABEL in xtreme-s
true
1,172,272,327
https://api.github.com/repos/huggingface/datasets/issues/3956
https://github.com/huggingface/datasets/issues/3956
3,956
TypeError: __init__() missing 1 required positional argument: 'scheme'
closed
8
2022-03-17T11:43:13
2023-11-21T04:26:20
2022-03-28T08:00:01
amirj
[ "bug" ]
## Describe the bug Based on [this tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) the provided code should add Elasticsearch index but raised the following error, probably the new Elasticsearch version is not compatible though the tutorial doesn't provide any information about the supporting Elasticsearch version. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset squad = load_dataset('squad', split='validation') squad.add_elasticsearch_index("context", host="localhost", port="9200") ``` ## Expected results [Creating an elastic index based on the provided tutorial](https://huggingface.co/docs/datasets/faiss_es#elasticsearch) ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-8fb51aa33961> in <module> 1 from datasets import load_dataset 2 squad = load_dataset('squad', split='validation') ----> 3 squad.add_elasticsearch_index("context", host="localhost", port="9200") ~/opt/anaconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 3777 """ 3778 with self.formatted_as(type=None, columns=[column]): -> 3779 super().add_elasticsearch_index( 3780 column=column, 3781 index_name=index_name, ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in add_elasticsearch_index(self, column, index_name, host, port, es_client, es_index_name, es_index_config) 587 """ 588 index_name = index_name if index_name is not None else column --> 589 es_index = ElasticSearchIndex( 590 host=host, port=port, es_client=es_client, es_index_name=es_index_name, es_index_config=es_index_config 591 ) ~/opt/anaconda3/lib/python3.8/site-packages/datasets/search.py in __init__(self, host, port, es_client, es_index_name, es_index_config) 123 from elasticsearch import Elasticsearch # noqa: F811 124 --> 125 self.es_client = es_client if es_client is not None else Elasticsearch([{"host": host, "port": str(port)}]) 126 self.es_index_name = ( 127 es_index_name ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/__init__.py in __init__(self, hosts, cloud_id, api_key, basic_auth, bearer_auth, opaque_id, headers, connections_per_node, http_compress, verify_certs, ca_certs, client_cert, client_key, ssl_assert_hostname, ssl_assert_fingerprint, ssl_version, ssl_context, ssl_show_warn, transport_class, request_timeout, node_class, node_pool_class, randomize_nodes_in_pool, node_selector_class, dead_node_backoff_factor, max_dead_node_backoff, serializer, serializers, default_mimetype, max_retries, retry_on_status, retry_on_timeout, sniff_on_start, sniff_before_requests, sniff_on_node_failure, sniff_timeout, min_delay_between_sniffing, sniffed_node_callback, meta_header, timeout, randomize_hosts, host_info_callback, sniffer_timeout, sniff_on_connection_fail, http_auth, maxsize, _transport) 310 311 if _transport is None: --> 312 node_configs = client_node_configs( 313 hosts, 314 cloud_id=cloud_id, ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in client_node_configs(hosts, cloud_id, **kwargs) 99 else: 100 assert hosts is not None --> 101 node_configs = hosts_to_node_configs(hosts) 102 103 # Remove all values which are 'DEFAULT' to avoid overwriting actual defaults. 
~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in hosts_to_node_configs(hosts) 142 143 elif isinstance(host, Mapping): --> 144 node_configs.append(host_mapping_to_node_config(host)) 145 else: 146 raise ValueError( ~/opt/anaconda3/lib/python3.8/site-packages/elasticsearch/_sync/client/utils.py in host_mapping_to_node_config(host) 209 options["path_prefix"] = options.pop("url_prefix") 210 --> 211 return NodeConfig(**options) # type: ignore 212 213 TypeError: __init__() missing 1 required positional argument: 'scheme' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: Mac - Python version: 3.8.0 - PyArrow version: 7.0.0 - ElaticSearch Info: { "name" : "byname", "cluster_name" : "elasticsearch_brew", "cluster_uuid" : "9xkjrltiQIG0J95ciWhqRA", "version" : { "number" : "7.10.2-SNAPSHOT", "build_flavor" : "oss", "build_type" : "tar", "build_hash" : "unknown", "build_date" : "2021-01-16T01:41:27.115673Z", "build_snapshot" : true, "lucene_version" : "8.7.0", "minimum_wire_compatibility_version" : "6.8.0", "minimum_index_compatibility_version" : "6.0.0-beta1" }, "tagline" : "You Know, for Search" }
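A possible workaround (an assumption on my side, not verified in this environment) is to build the Elasticsearch client yourself with an explicit scheme and pass it through the `es_client` argument instead of `host`/`port`:
```python
from datasets import load_dataset
from elasticsearch import Elasticsearch

# elasticsearch-py 8.x requires the scheme to be part of the node configuration.
es_client = Elasticsearch("http://localhost:9200")

squad = load_dataset("squad", split="validation")
squad.add_elasticsearch_index("context", es_client=es_client)
```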
false
1,172,246,647
https://api.github.com/repos/huggingface/datasets/issues/3955
https://github.com/huggingface/datasets/pull/3955
3,955
Remove unncessary 'pylint disable' message in ReadMe
closed
0
2022-03-17T11:16:55
2022-04-12T14:28:35
2022-04-12T14:28:35
Datta0
[]
null
true
1,172,141,664
https://api.github.com/repos/huggingface/datasets/issues/3954
https://github.com/huggingface/datasets/issues/3954
3,954
The dataset preview is not available for tdklab/Hebrew_Squad_v1.1 dataset
closed
6
2022-03-17T09:38:11
2022-04-20T12:39:07
2022-04-20T12:39:07
MatanBenChorin
[]
## Dataset viewer issue for 'tdklab/Hebrew_Squad_v1.1' **Link:** https://huggingface.co/api/datasets/tdklab/Hebrew_Squad_v1.1?full=true The dataset preview is not available for this dataset. Am I the one who added this dataset ? Yes
false
1,172,123,736
https://api.github.com/repos/huggingface/datasets/issues/3953
https://github.com/huggingface/datasets/issues/3953
3,953
Add ImageNet Sketch
closed
2
2022-03-17T09:20:31
2022-05-23T18:05:29
2022-05-23T18:05:29
NielsRogge
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** ImageNet Sketch - **Description:** ImageNet-Sketch is a dataset consisting of sketch-like images, that matches the ImageNet classification validation set in categories and scale. - **Paper:** [Learning Robust Global Representations by Penalizing Local Predictive Power](https://arxiv.org/abs/1905.13549) - **Data:** https://github.com/HaohanWang/ImageNet-Sketch - **Motivation:** Allows for evaluating the robustness of vision models. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,171,895,531
https://api.github.com/repos/huggingface/datasets/issues/3952
https://github.com/huggingface/datasets/issues/3952
3,952
Checksum error for glue sst2, stsb, rte etc datasets
closed
1
2022-03-17T03:45:47
2022-03-17T07:10:15
2022-03-17T07:10:14
ravindra-ut
[ "bug" ]
## Describe the bug Checksum error for glue sst2, stsb, rte etc datasets ## Steps to reproduce the bug ```python >>> nlp.load_dataset('glue', 'sst2') Downloading and preparing dataset glue/sst2 (download: 7.09 MiB, generated: 4.81 MiB, post-processed: Unknown sizetotal: 11.90 MiB) to Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 73.0/73.0 [00:00<00:00, 18.2kB/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Expected results dataset load should succeed without checksum error. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Library/Python/3.8/lib/python/site-packages/nlp/load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 462, in download_and_prepare self._download_and_prepare( File "/Library/Python/3.8/lib/python/site-packages/nlp/builder.py", line 521, in _download_and_prepare verify_checksums( File "/Library/Python/3.8/lib/python/site-packages/nlp/utils/info_utils.py", line 38, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) nlp.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2FSST-2.zip?alt=media&token=aabc5f6b-e466-44a2-b9b4-cf6337f84ac8'] ``` ## Environment info - `datasets` version: '1.18.3' - Platform: Mac OS - Python version: Python 3.8.9 - PyArrow version: '7.0.0'
false
1,171,568,814
https://api.github.com/repos/huggingface/datasets/issues/3951
https://github.com/huggingface/datasets/issues/3951
3,951
Forked streaming datasets try to `open` data urls rather than use network
closed
1
2022-03-16T21:21:02
2022-06-10T20:47:26
2022-06-10T20:47:26
dlwh
[ "bug" ]
## Describe the bug Building on #3950, if you bypass the pickling problem you still can't use the dataset. Somehow something gets confused and the forked processes try to `open` urls rather than anything else. ## Steps to reproduce the bug ```python from multiprocessing import freeze_support import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets import torch.utils.data # work around #3950 class TorchIterableDataset(datasets.IterableDataset, torch.utils.data.IterableDataset): pass def _ensure_format(v: datasets.IterableDataset) -> datasets.IterableDataset: return TorchIterableDataset(v._ex_iterable, v.info, v.split, "torch", v._shuffling) if __name__ == '__main__': freeze_support() ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) ds = _ensure_format(ds) model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results I'd expect the dataset to load the url correctly and produce examples. ## Actual results ``` warnings.warn( ***** Running training ***** Num examples = 8000 Num Epochs = 9223372036854775807 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 1000 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 22, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__ data = self._next_data() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data return self._process_data(data) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data data.reraise() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise raise exception FileNotFoundError: Caught FileNotFoundError in DataLoader worker process 0. 
Original Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 497, in __iter__ for key, example in self._iter(): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 494, in _iter yield from ex_iterable File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/datasets/iterable_dataset.py", line 87, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/Users/dlwh/.cache/huggingface/modules/datasets_modules/datasets/oscar/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2/oscar.py", line 358, in _generate_examples with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: 'https://s3.amazonaws.com/datasets.huggingface.co/oscar/1.0/unshuffled/deduplicated/en/en_part_1.txt.gz' Error in atexit._run_exitfuncs: Traceback (most recent call last): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 27, in poll pid, sts = os.waitpid(self.pid, flag) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler _error_if_any_worker_fails() RuntimeError: DataLoader worker (pid 6932) is killed by signal: Terminated: 15. 0%| | 0/1000 [00:02<?, ?it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
false
1,171,560,585
https://api.github.com/repos/huggingface/datasets/issues/3950
https://github.com/huggingface/datasets/issues/3950
3,950
Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
closed
1
2022-03-16T21:14:11
2022-06-10T20:47:26
2022-06-10T20:47:26
dlwh
[ "bug", "good first issue" ]
## Describe the bug Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash. ## Steps to reproduce the bug ```python import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch") model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error. ## Actual results ``` 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__ return self._get_iterator() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__ w.start() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset' 0%| | 0/1000 [00:00<?, ?it/s] ``` This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset` (Note that you have to do with_format("torch") or you get an exception because the dataset has no len) However, any lambdas etc used as maps will also trigger this crash. A more permanent fix would be to move away from multiprocessing and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together) Note that if you bypass this crash you get another crash. (I'll file a separate bug). ## Environment info - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
false
1,171,467,981
https://api.github.com/repos/huggingface/datasets/issues/3949
https://github.com/huggingface/datasets/pull/3949
3,949
Remove GLEU metric
closed
1
2022-03-16T19:35:31
2022-04-12T20:43:26
2022-04-12T20:37:09
emibaylor
[]
Remove the GLEU metric as it is not actually implemented.
true
1,171,460,560
https://api.github.com/repos/huggingface/datasets/issues/3948
https://github.com/huggingface/datasets/pull/3948
3,948
Google BLEU Metric Card
closed
1
2022-03-16T19:27:17
2022-03-21T16:04:26
2022-03-21T16:04:25
emibaylor
[]
Add metric card for Google BLEU (GLEU) metric One thing I noticed while writing this up is that, while this metric was made specifically to be better than BLEU at the sentence level instead of the corpus level, the current implementation only allows the calculation of the corpus-level statistic. I think changing this would be a good thing to put on the to do list for the future.
true
1,171,452,854
https://api.github.com/repos/huggingface/datasets/issues/3947
https://github.com/huggingface/datasets/pull/3947
3,947
BLEU metric card
closed
2
2022-03-16T19:20:07
2022-03-29T14:59:50
2022-03-29T14:54:14
emibaylor
[]
Add BLEU metric card
true
1,171,239,287
https://api.github.com/repos/huggingface/datasets/issues/3946
https://github.com/huggingface/datasets/pull/3946
3,946
Add newline to text dataset builder for controlling universal newlines mode
closed
3
2022-03-16T16:11:11
2023-09-24T10:10:50
2023-09-24T10:10:47
albertvillanova
[]
Fix #3804.
true
1,171,222,257
https://api.github.com/repos/huggingface/datasets/issues/3945
https://github.com/huggingface/datasets/pull/3945
3,945
Fix comet metric
closed
4
2022-03-16T15:56:47
2022-03-22T15:10:12
2022-03-22T15:05:30
lhoestq
[]
The COMET metric has been broken for a while since big breaking changes happened. We did not catch them in the CI because the slow test mocks the download_model function that was changed. This PR fixes the metric, updates the download_model mock and updates the doctest.
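For context, a hedged sketch of how the metric is typically called — the exact kwarg names and returned keys should be checked against the updated doctest, and the default COMET model is downloaded on first use:
```python
from datasets import load_metric

comet_metric = load_metric("comet")  # downloads the default COMET model on first use

sources = ["Dem Feuer konnte Einhalt geboten werden"]
predictions = ["The fire could be stopped"]
references = ["They were able to control the fire."]

results = comet_metric.compute(sources=sources, predictions=predictions, references=references)
print(results)
```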
true
1,171,209,510
https://api.github.com/repos/huggingface/datasets/issues/3944
https://github.com/huggingface/datasets/pull/3944
3,944
Create README.md
closed
1
2022-03-16T15:46:26
2022-03-17T17:50:54
2022-03-17T17:47:05
sashavor
[]
Proposing COMET metric card
true
1,171,185,070
https://api.github.com/repos/huggingface/datasets/issues/3943
https://github.com/huggingface/datasets/pull/3943
3,943
[Doc] Don't use v for version tags on GitHub
closed
1
2022-03-16T15:28:30
2022-03-17T11:46:26
2022-03-17T11:46:25
sgugger
[]
This removes the `v` automatically used by `doc-builder` for versions.
true
1,171,177,122
https://api.github.com/repos/huggingface/datasets/issues/3942
https://github.com/huggingface/datasets/issues/3942
3,942
reddit_tifu dataset: Checksums didn't match for dataset source files
closed
3
2022-03-16T15:23:30
2022-03-16T15:57:43
2022-03-16T15:39:25
XingxingZhang
[ "bug", "duplicate" ]
## Describe the bug When loading the reddit_tifu dataset, it throws the exception "Checksums didn't match for dataset source files" ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) # load_dataset('billsum') load_dataset('reddit_tifu', 'short') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
false
1,171,132,709
https://api.github.com/repos/huggingface/datasets/issues/3941
https://github.com/huggingface/datasets/issues/3941
3,941
billsum dataset: Checksums didn't match for dataset source files:
closed
3
2022-03-16T14:52:08
2024-03-13T12:11:35
2022-03-16T15:46:44
XingxingZhang
[ "bug" ]
## Describe the bug When loading the `billsum` dataset, it throws the exception "Checksums didn't match for dataset source files" ``` File "virtualenv_projects/codex/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1g89WgFHMRbr4QrvA0ngh26PY081Nv3lx'] ``` ## Steps to reproduce the bug ```python import datasets from datasets import load_dataset print(datasets.__version__) load_dataset('billsum') ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: mac os - Python version: Python 3.7.6 - PyArrow version: 3.0.0
false
1,171,106,853
https://api.github.com/repos/huggingface/datasets/issues/3940
https://github.com/huggingface/datasets/pull/3940
3,940
Create CoVAL metric card
closed
1
2022-03-16T14:31:49
2022-03-18T17:37:59
2022-03-18T17:35:14
sashavor
[]
Initial CoVAL metric card
true
1,170,882,331
https://api.github.com/repos/huggingface/datasets/issues/3939
https://github.com/huggingface/datasets/issues/3939
3,939
Source links broken
closed
8
2022-03-16T11:17:47
2022-03-19T04:41:32
2022-03-19T04:41:32
qqaatw
[ "bug" ]
## Describe the bug The source links of v2.0.0 docs are broken: For example, clicking the source button of this [class](https://huggingface.co/docs/datasets/v2.0.0/en/package_reference/main_classes#datasets.ClassLabel) will direct users to `https://github.com/huggingface/datasets/blob/v2.0.0/src/datasets/features/features.py#L747`; here, the `v2.0.0` should be `2.0.0`. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` ## Expected results Redirecting to this link: `https://github.com/huggingface/datasets/blob/2.0.0/src/datasets/features/features.py#L747` ## Actual results Described above. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: - Python version: - PyArrow version:
false
1,170,875,417
https://api.github.com/repos/huggingface/datasets/issues/3938
https://github.com/huggingface/datasets/pull/3938
3,938
Avoid info log messages from transformers in FrugalScore metric
closed
1
2022-03-16T11:11:29
2022-03-17T08:37:25
2022-03-17T08:37:24
albertvillanova
[]
Fix #3928.
true
1,170,832,006
https://api.github.com/repos/huggingface/datasets/issues/3937
https://github.com/huggingface/datasets/issues/3937
3,937
Missing languages in lvwerra/github-code dataset
closed
5
2022-03-16T10:32:03
2022-03-22T07:09:23
2022-03-21T14:50:47
Eytan-S
[ "Dataset discussion" ]
Hi, I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset! I've noticed that two languages are missing from the dataset: TypeScript and Scala. Looks like they're also omitted from the query you used to get the original code. Are there any plans to add them in the future? Thanks!
false
1,170,713,473
https://api.github.com/repos/huggingface/datasets/issues/3936
https://github.com/huggingface/datasets/pull/3936
3,936
Fix Wikipedia version and re-add tests
closed
1
2022-03-16T08:48:04
2022-03-16T17:04:07
2022-03-16T17:04:05
albertvillanova
[]
To keep backward compatibility when loading using "wikipedia" dataset ID (https://huggingface.co/datasets/wikipedia), we have created the pre-processed data for the same languages we were offering before, but with updated date "20220301": - de - en - fr - frr - it - simple These pre-processed data can be accessed, e.g.: ```python ds = load_dataset("wikipedia", "20220301.frr", split="train") ``` The next step will be to offer the pre-processed data for many other languages, but when loading using "wikimedia/wikipedia": https://huggingface.co/datasets/wikimedia/wikipedia
true
1,170,292,492
https://api.github.com/repos/huggingface/datasets/issues/3934
https://github.com/huggingface/datasets/pull/3934
3,934
Create MAUVE metric card
closed
1
2022-03-15T21:36:07
2022-03-18T17:38:14
2022-03-18T17:34:13
sashavor
[]
Proposing a MAUVE metric card
true
1,170,253,605
https://api.github.com/repos/huggingface/datasets/issues/3933
https://github.com/huggingface/datasets/pull/3933
3,933
Update README.md
closed
1
2022-03-15T20:52:05
2022-03-17T17:51:24
2022-03-17T17:47:37
sashavor
[]
Fixing missing triple quote
true
1,170,221,773
https://api.github.com/repos/huggingface/datasets/issues/3932
https://github.com/huggingface/datasets/pull/3932
3,932
Create SARI metric card
closed
1
2022-03-15T20:37:23
2022-03-18T17:37:01
2022-03-18T17:32:55
sashavor
[]
SARI metric card! (do we have an expert in text simplification to validate?.. :sweat_smile: )
true
1,170,097,208
https://api.github.com/repos/huggingface/datasets/issues/3931
https://github.com/huggingface/datasets/pull/3931
3,931
Add align_labels_with_mapping docs
closed
1
2022-03-15T19:24:57
2022-03-18T16:28:31
2022-03-18T16:24:33
stevhliu
[ "documentation" ]
This PR documents the `align_labels_with_mapping` function to ensure predicted labels are aligned with the dataset, or to assign a different mapping of labels to ids (requested by @mariosasko πŸŽ‰ ). For this specific code sample, the current dataset has a `mixed` label that the original [dataset](https://huggingface.co/datasets/poem_sentiment#data-fields) didn't. Is there a way to remove this label so it is completely aligned with the original dataset mappings? Otherwise, I'll just leave it as it is.
true
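As a minimal sketch of the call documented by the PR above (the `label2id` mapping here is hypothetical, not taken from the PR):

```python
from datasets import load_dataset

ds = load_dataset("poem_sentiment", split="train")

# Hypothetical mapping a model was trained with; align the dataset's label ids to it.
label2id = {"negative": 0, "positive": 1, "no_impact": 2, "mixed": 3}
ds = ds.align_labels_with_mapping(label2id, "label")
print(ds.features["label"])
```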
1,170,087,793
https://api.github.com/repos/huggingface/datasets/issues/3930
https://github.com/huggingface/datasets/pull/3930
3,930
Create README.md
closed
1
2022-03-15T19:16:59
2022-04-04T15:23:15
2022-04-04T15:17:28
sashavor
[]
Creating a README for IndicGLUE cc @mcmillanmajora for fact checking in terms of languages (also, are there any limitations of the dataset or eval metric that I'm not aware of?)
true
1,170,066,235
https://api.github.com/repos/huggingface/datasets/issues/3929
https://github.com/huggingface/datasets/issues/3929
3,929
Load a local dataset twice
closed
1
2022-03-15T18:59:26
2022-03-16T09:55:09
2022-03-16T09:54:06
caush
[ "bug" ]
## Describe the bug Loading a local "dataset" composed of two CSV files returns every row twice. ## Steps to reproduce the bug Put the two attached files in a directory named "Data". Then in Python: import datasets as ds ds.load_dataset('Data', data_files = {'file1.csv', 'file2.csv'}) ## Expected results Should give something like (because each file has only one data row): Title, clicks Truc et astuce, 123 Machin, 12 ## Actual results Gives Title, clicks Truc et astuce, 123 Machin, 12 Truc et astuce, 123 Machin, 12 ## Environment info [file1.csv](https://github.com/huggingface/datasets/files/8256322/file1.csv) [file2.csv](https://github.com/huggingface/datasets/files/8256323/file2.csv) - `datasets` version: 2.0.0 - Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10 - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
false
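A side note on the reproduction above: `data_files` is passed as a Python set there, while the documented patterns use a list or a dict mapping split names to lists of files. A sketch of the more conventional call, assuming the two attached files are plain CSVs (this is not a confirmation of what causes the duplication):

```python
import datasets as ds

# Conventional pattern: explicit split name mapped to a list of CSV files.
dataset = ds.load_dataset("csv", data_files={"train": ["Data/file1.csv", "Data/file2.csv"]})
print(dataset["train"].to_pandas())
```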
1,170,017,132
https://api.github.com/repos/huggingface/datasets/issues/3928
https://github.com/huggingface/datasets/issues/3928
3,928
Frugal score deprecations
closed
1
2022-03-15T18:10:42
2022-03-17T08:37:24
2022-03-17T08:37:24
ierezell
[ "bug" ]
## Describe the bug The frugal score returns a really verbose output with warnings that can be easily changed. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets.load import load_metric frugal = load_metric("frugalscore") frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]) ``` ## Expected results A clear and concise description of the expected results. ``` {'scores': [0.9946]} ``` ## Actual results Specify the actual results or traceback. ``` PyTorch: setting up devices The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-). 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 864.09ba/s] Using amp half precision backend The following columns in the test set don't have a corresponding argument in `BertForSequenceClassification.forward` and have been ignored: sentence2, sentence1. If sentence2, sentence1 are not expected by `BertForSequenceClassification.forward`, you can safely ignore this message. ***** Running Prediction ***** Num examples = 1 Batch size = 64 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1/1 [00:00<00:00, 4644.85it/s] {'scores': [0.9946]} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.17.0 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 7.0.0
false
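The verbose output in the record above mostly comes from `transformers` (the FrugalScore metric runs a small model under the hood). As a hedged workaround sketch, not the change made in the linked fix, a user could lower the `transformers` verbosity before computing:

```python
from datasets import load_metric
from transformers import logging as hf_logging

# Silence transformers info-level messages emitted while the metric runs its model.
hf_logging.set_verbosity_error()

frugal = load_metric("frugalscore")
print(frugal.compute(predictions=["Do you like spinachis"], references=["Do you like spinach"]))
```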
1,170,016,465
https://api.github.com/repos/huggingface/datasets/issues/3927
https://github.com/huggingface/datasets/pull/3927
3,927
Update main readme
closed
2
2022-03-15T18:09:59
2022-03-29T10:13:47
2022-03-29T10:08:20
lhoestq
[]
The main readme was still focused on text datasets - I extended it by mentioning that we also support image and audio datasets
true
1,169,945,052
https://api.github.com/repos/huggingface/datasets/issues/3926
https://github.com/huggingface/datasets/pull/3926
3,926
Doc maintenance
closed
1
2022-03-15T17:00:46
2022-03-15T19:27:15
2022-03-15T19:27:12
stevhliu
[ "documentation" ]
This PR adds some minor maintenance to the docs. The main fix is properly linking to pages in the callouts because some of the links would just redirect to a non-existent section on the same page.
true
1,169,913,769
https://api.github.com/repos/huggingface/datasets/issues/3925
https://github.com/huggingface/datasets/pull/3925
3,925
Fix main_classes docs index
closed
3
2022-03-15T16:33:46
2022-03-22T13:49:11
2022-03-22T13:44:04
lhoestq
[]
Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types ![image](https://user-images.githubusercontent.com/42851186/158426345-2ee1ceef-ddf3-4a6f-a93e-d1a8f38a44f5.png)
true
1,169,805,813
https://api.github.com/repos/huggingface/datasets/issues/3924
https://github.com/huggingface/datasets/pull/3924
3,924
Document cases for github datasets
closed
2
2022-03-15T15:10:10
2022-04-05T18:33:15
2022-03-15T15:41:23
lhoestq
[]
In general we recommend adding the new dataset under a username or organization in the Hugging Face Hub at [hf.co/datasets](https://hf.co/datasets), but users can still add a dataset on GitHub in some cases. I added a paragraph in the documentation to explain in which cases it can make more sense to open a PR on GitHub: - when you need the dataset to be reviewed - when you need long-term maintenance from the HF team - when there’s no clear org name / namespace that you can put the dataset under
true
1,169,773,869
https://api.github.com/repos/huggingface/datasets/issues/3923
https://github.com/huggingface/datasets/pull/3923
3,923
Add methods to IterableDatasetDict
closed
5
2022-03-15T14:46:03
2022-07-06T15:40:20
2022-03-15T16:45:06
lhoestq
[]
Following the new methods added in #3826 and https://github.com/huggingface/datasets/pull/3862 I added several methods to IterableDatasetDict: - map - filter - shuffle - with_format - cast - cast_column - remove_columns - rename_column - rename_columns
true
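A minimal sketch of chaining some of the methods listed above on an `IterableDatasetDict` in streaming mode; the dataset name and the lowercasing step are placeholders, not examples from the PR:

```python
from datasets import load_dataset

# load_dataset(..., streaming=True) returns an IterableDatasetDict (one IterableDataset per split).
dsets = load_dataset("imdb", streaming=True)
dsets = dsets.map(lambda example: {"text": example["text"].lower()})
dsets = dsets.shuffle(seed=42, buffer_size=1_000)
print(next(iter(dsets["train"])))
```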
1,169,761,293
https://api.github.com/repos/huggingface/datasets/issues/3922
https://github.com/huggingface/datasets/pull/3922
3,922
Fix NonMatchingChecksumError in MultiWOZ 2.2 dataset
closed
2
2022-03-15T14:36:28
2022-03-15T16:07:04
2022-03-15T16:07:03
albertvillanova
[]
Fix #2957
true
1,169,749,338
https://api.github.com/repos/huggingface/datasets/issues/3921
https://github.com/huggingface/datasets/pull/3921
3,921
Fix NonMatchingChecksumError in CRD3 dataset
closed
2
2022-03-15T14:27:14
2022-03-15T15:54:27
2022-03-15T15:54:26
albertvillanova
[]
Fix #3051
true
1,169,532,807
https://api.github.com/repos/huggingface/datasets/issues/3920
https://github.com/huggingface/datasets/issues/3920
3,920
'datasets.features' is not a package
closed
2
2022-03-15T11:14:23
2022-03-16T09:17:12
2022-03-16T09:17:12
Arij-Aladel
[]
@albertvillanova python 3.9 os: ubuntu 20.04 In a conda environment, torch was installed with ```/env/bin/pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html``` and the datasets package was installed with ``` /env/bin/pip install datasets==1.8.0 ``` While running the code I get this error ``` [6]<stderr>: File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class [6]<stderr>: return super().find_class(mod_name, name) [6]<stderr>:ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Precisely, this error appears when calling torch.load('data_file.pt') ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 607, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load result = unpickler.load() File "/home/arij/Memory-transformer-with-hierarchical-attention_MLM/env/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class return super().find_class(mod_name, name) ModuleNotFoundError: No module named 'datasets.features.features'; 'datasets.features' is not a package ``` Why am I getting this error?
false
1,169,497,210
https://api.github.com/repos/huggingface/datasets/issues/3919
https://github.com/huggingface/datasets/issues/3919
3,919
AttributeError: 'DatasetDict' object has no attribute 'features'
closed
2
2022-03-15T10:46:59
2022-03-17T04:16:14
2022-03-17T04:16:14
jswapnil10
[ "bug" ]
## Describe the bug Receiving an error when trying to check the Dataset features ## Steps to reproduce the bug from datasets import Dataset dataset = Dataset.from_pandas(df[['id', 'words', 'bboxes', 'ner_tags', 'image_path']]) dataset.features ## Expected results A clear and concise description of the expected results. ## Actual results Getting the following error AttributeError: 'DatasetDict' object has no attribute 'features' ## Environment info Copy-and-paste the text below in your GitHub issue. - `datasets` version: 1.18.4 - Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9 - Python version: 3.6.13 - PyArrow version: 6.0.1
false
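The traceback in the record above usually means the object at hand is a `DatasetDict` (one dataset per split) rather than a single `Dataset`. A short sketch of the distinction, with a made-up DataFrame standing in for the user's `df`:

```python
import pandas as pd
from datasets import Dataset, load_dataset

# A single Dataset exposes .features directly.
df = pd.DataFrame({"id": [0, 1], "words": [["a"], ["b"]]})
dset = Dataset.from_pandas(df)
print(dset.features)

# load_dataset() usually returns a DatasetDict, so features live on each split.
dsets = load_dataset("conll2003")
print(dsets["train"].features)
```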
1,169,366,117
https://api.github.com/repos/huggingface/datasets/issues/3918
https://github.com/huggingface/datasets/issues/3918
3,918
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files
closed
3
2022-03-15T08:53:45
2022-03-16T15:36:58
2022-03-15T14:01:25
willowdong
[ "bug", "duplicate" ]
## Describe the bug Can't load the dataset ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('multi_news') dataset_2 = load_dataset("reddit_tifu", "long") ``` ## Actual results raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF'] ## Environment info - `datasets` version: 1.18.4 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.0 - PyArrow version: 6.0.1
false
1,168,906,154
https://api.github.com/repos/huggingface/datasets/issues/3917
https://github.com/huggingface/datasets/pull/3917
3,917
Create README.md
closed
1
2022-03-14T21:08:10
2022-03-17T17:45:39
2022-03-17T17:45:39
sashavor
[]
This follows the same structure as the GLUE metric card, hope that works for everyone :)
true
1,168,869,191
https://api.github.com/repos/huggingface/datasets/issues/3916
https://github.com/huggingface/datasets/pull/3916
3,916
Create README.md for GLUE
closed
1
2022-03-14T20:27:22
2022-03-15T17:06:57
2022-03-15T17:06:56
sashavor
[]
I still have a hesitation regarding the format of inputs -- whether it's a list or a list of lists? -- hopefully @lhoestq will be able to clarify. Also tagging @yjernite for the Limitations section. Happy to hear your thoughts!
true
1,168,848,101
https://api.github.com/repos/huggingface/datasets/issues/3915
https://github.com/huggingface/datasets/pull/3915
3,915
Metric card template
closed
6
2022-03-14T20:07:08
2022-05-04T10:44:09
2022-05-04T10:37:06
emibaylor
[]
Adding a metric card template, based on ideas and edits from @sashavor and me, as well as on comments from @lhoestq and others (thank you!). All feedback is welcome, but I am especially curious about feedback on: - things that should be included but aren't - things that are included but should be changed or removed - the instructions I included, and whether they should be added to, clarified, or deleted altogether
true
1,168,777,880
https://api.github.com/repos/huggingface/datasets/issues/3914
https://github.com/huggingface/datasets/pull/3914
3,914
Use templates for doc-building jobs
closed
2
2022-03-14T18:53:06
2022-03-17T15:02:59
2022-03-17T15:02:58
sgugger
[]
This PR updates all doc-building related jobs to use the templates introduced in `doc-builder`. By putting those there once, we make sure every repo gets the latest fixes to the doc-building GitHub actions :-) Note: all libraries must share the same Docker image for those doc-building jobs. For now, the one used (`huggingface/transformers-doc-builder`) contains all the extra steps of the datasets install needed for doc-building (mainly libsndfile), but if in the future some additional steps are necessary on top of `pip install -e .[dev]`, this Docker image will need to be updated with the extra deps.
true
1,168,723,950
https://api.github.com/repos/huggingface/datasets/issues/3913
https://github.com/huggingface/datasets/pull/3913
3,913
Deterministic split order in DatasetDict.map
closed
3
2022-03-14T17:58:37
2023-09-24T09:55:10
2022-03-15T10:45:15
lhoestq
[]
The order in which the splits are processed by `map` is not deterministic in `DatasetDict.map`. This can cause caching issues when the processing function is stateful and sensitive to the order in which examples are processed. Close https://github.com/huggingface/datasets/issues/3847
true
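To illustrate the kind of function the description above refers to, here is a contrived sketch (not taken from the PR) of a stateful mapping whose output, and therefore its cache fingerprint, depends on the order in which splits and examples are processed:

```python
from datasets import load_dataset

counter = {"n": 0}

def add_global_index(example):
    # Stateful: the value written depends on how many examples were seen before, across splits.
    example["global_idx"] = counter["n"]
    counter["n"] += 1
    return example

dsets = load_dataset("glue", "mrpc")
dsets = dsets.map(add_global_index)  # a deterministic split order keeps this reproducible across runs
```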
1,168,720,098
https://api.github.com/repos/huggingface/datasets/issues/3912
https://github.com/huggingface/datasets/pull/3912
3,912
add draft of registering function for pandas
closed
3
2022-03-14T17:54:29
2023-09-24T09:55:01
2023-01-24T12:57:10
lvwerra
[]
This PR adds a register function for `pandas`. It allows pushing `DataFrame` objects directly to the hub and, in return, loading datasets from the hub into a `DataFrame`. The motivation for this integration is to enable the vast number of `pandas` users to easily push `DataFrames` to the hub. Here is an example: ```python import pandas as pd from datasets import register_pandas register_pandas() # push to hub df = pd.DataFrame.from_dict({"test": [1,2,3]}) df.push_to_hub("my_test") # load from hub df_retrieved = pd.DataFrame.load_from_hub("lvwerra/my_test") ``` It follows a similar philosophy to the `tqdm` [integration](https://github.com/tqdm/tqdm#pandas-integration). Also see [this issue](https://github.com/pandas-dev/pandas/issues/46000) on the `pandas` repository. This is just a rough draft of what such an integration could look like, but I would appreciate some feedback on it: is this something you would like to add to the library, and is this the way to go? cc @lhoestq @albertvillanova @julien-c
true
1,168,652,374
https://api.github.com/repos/huggingface/datasets/issues/3911
https://github.com/huggingface/datasets/pull/3911
3,911
Create README.md for CER metric
closed
1
2022-03-14T16:54:51
2022-03-17T17:49:40
2022-03-17T17:45:54
sashavor
[]
Initial proposal for a CER metric card cc @patrickvonplaten - wdyt this time around? :smile:
true
1,168,579,694
https://api.github.com/repos/huggingface/datasets/issues/3910
https://github.com/huggingface/datasets/pull/3910
3,910
Fix text loader to split only on universal newlines
closed
6
2022-03-14T15:54:58
2022-03-15T16:16:11
2022-03-15T16:16:09
albertvillanova
[]
Currently, the `text` loader breaks lines on a superset of universal newlines, which also contains Unicode line boundaries. See: https://docs.python.org/3/library/stdtypes.html#str.splitlines However, the expected behavior is to split the lines only on universal newlines: "\n", "\r\n" and "\r". See: oscar-corpus/corpus#18 Fix #3729.
true
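A small stdlib illustration of the distinction above (not the actual patch): `str.splitlines()` also breaks on Unicode line boundaries such as U+2028, whereas splitting on universal newlines keeps them inside a line:

```python
import re

s = "first\u2028still first\nsecond\r\nthird\rfourth"

print(s.splitlines())
# ['first', 'still first', 'second', 'third', 'fourth']  <- also breaks on U+2028

print(re.split(r"\r\n|\r|\n", s))
# ['first\u2028still first', 'second', 'third', 'fourth']  <- universal newlines only
```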
1,168,578,058
https://api.github.com/repos/huggingface/datasets/issues/3909
https://github.com/huggingface/datasets/issues/3909
3,909
Error loading file audio when downloading the Common Voice dataset directly from the Hub
closed
8
2022-03-14T15:53:50
2023-03-02T15:31:27
2023-03-02T15:31:26
aliceinland
[ "bug" ]
## Describe the bug When loading the Common_Voice dataset, by downloading it directly from the Hugging Face hub, some files can not be opened. ## Steps to reproduce the bug ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "it", split="test") #test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'}) wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian") model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian") model.to("cuda") chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\β€œ\'\οΏ½]' resampler = torchaudio.transforms.Resample(48_000, 16_000) ``` ## Expected results The common voice dataset downloaded and correctly loaded whit the use of the hugging face datasets library. ## Actual results The error is: ```python 0ex [00:00, ?ex/s] --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-48-ef87f4129e6e> in <module> 7 return batch 8 ----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn) /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2107 2108 if num_proc is None or num_proc == 1: -> 2109 return self._map_single( 2110 function=function, 2111 with_indices=with_indices, /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 516 self: "Dataset" = kwargs.pop("self") 517 # apply actual function --> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 520 for dataset in datasets: /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 483 } 484 # apply actual function --> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 487 # re-apply format to the output /opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 411 # Call actual function 412 --> 413 out = func(self, *args, **kwargs) 414 415 # Update fingerprint of in-place transforms + update in-place history of transforms /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only) 2465 if not batched: 2466 for i, example in enumerate(pbar): -> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset) 2468 if update_data: 2469 if i == 0: /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 2372 if with_rank: 2373 additional_args += (rank,) -> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) 2375 if update_data is None: 2376 # 
Check if the function returns updated examples /opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs) 2067 ) 2068 # Use the LazyDict internally, while mapping the function -> 2069 result = f(decorated_item, *args, **kwargs) 2070 # Return a standard dict 2071 return result.data if isinstance(result, LazyDict) else result <ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch) 3 def speech_file_to_array_fn(batch): 4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() ----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"]) 6 batch["speech"] = resampler(speech_array).squeeze().numpy() 7 return batch /opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format) 150 filepath, frame_offset, num_frames, normalize, channels_first, format) 151 filepath = os.fspath(filepath) --> 152 return torch.ops.torchaudio.sox_io_load_audio_file( 153 filepath, frame_offset, num_frames, normalize, channels_first, format) 154 RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3 ``` ## Environment info - `datasets` version: 1.18.4 - Platform: Linux-5.4.0-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 7.0.0
false
1,168,576,963
https://api.github.com/repos/huggingface/datasets/issues/3908
https://github.com/huggingface/datasets/pull/3908
3,908
Update README.md for SQuAD v2 metric
closed
1
2022-03-14T15:53:10
2022-03-15T17:04:11
2022-03-15T17:04:11
sashavor
[]
Putting "Values from popular papers" as a subsection of "Output values"
true
1,168,575,998
https://api.github.com/repos/huggingface/datasets/issues/3907
https://github.com/huggingface/datasets/pull/3907
3,907
Update README.md for SQuAD metric
closed
1
2022-03-14T15:52:31
2022-03-15T17:04:20
2022-03-15T17:04:19
sashavor
[]
Putting "Values from popular papers" as a subsection of "Output values"
true