Schema of the records below (one record per GitHub issue/PR):

| column | type |
|---|---|
| id | int64 |
| url | string |
| html_url | string |
| number | int64 |
| title | string |
| state | string (`open` / `closed`) |
| comments | int64 |
| created_at | timestamp[s] |
| updated_at | timestamp[s] |
| closed_at | timestamp[s], nullable |
| user_login | string |
| labels | list |
| body | string, nullable |
| is_pull_request | bool |
---
**#4828 Support PIL Image objects in `add_item`/`add_column`** (PR, open, 3 comments)
mariosasko · created 2022-08-11T14:25:45 · updated 2023-09-24T10:15:33 · https://github.com/huggingface/datasets/pull/4828
Fix #4796
PS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to infer complex types (only `Image` currently) in nested arrays (e.g. `[[pil_image], [pil_image, pil_image]]` or `[{"img": pil_image}]`), but I plan to address this in a separate PR.
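The nested-array inference described above can be sketched roughly like this (a hypothetical helper for illustration only, not the actual `OptimizedTypeSequence` code):

```python
def find_leaf_type(value):
    """Recurse through nested lists/dicts and return the type of the
    first non-container leaf, or None if the structure is empty."""
    if isinstance(value, (list, tuple)):
        for item in value:
            leaf = find_leaf_type(item)
            if leaf is not None:
                return leaf
        return None
    if isinstance(value, dict):
        for item in value.values():
            leaf = find_leaf_type(item)
            if leaf is not None:
                return leaf
        return None
    return type(value)

class FakePilImage:
    """Stand-in for PIL.Image.Image, so the sketch stays self-contained."""

# The two nested shapes mentioned above:
assert find_leaf_type([[FakePilImage()], [FakePilImage(), FakePilImage()]]) is FakePilImage
assert find_leaf_type([{"img": FakePilImage()}]) is FakePilImage
```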
---
**#4827 Add license metadata to pg19** (PR, closed, 1 comment)
julien-c · created 2022-08-11T13:52:20 · updated 2022-08-11T15:01:03 · closed 2022-08-11T14:46:38 · https://github.com/huggingface/datasets/pull/4827
As reported over email by Roy Rijkers
---
**#4826 Fix language tags in dataset cards** (PR, closed, 2 comments)
albertvillanova · created 2022-08-11T13:47:14 · updated 2022-08-11T14:17:48 · closed 2022-08-11T14:03:12 · https://github.com/huggingface/datasets/pull/4826
Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource).
---
**#4825 [Windows] Fix Access Denied when using os.rename()** (PR, closed, 6 comments)
DougTrajano · created 2022-08-11T11:57:15 · updated 2022-08-24T13:09:07 · closed 2022-08-24T13:09:07 · https://github.com/huggingface/datasets/pull/4825
In this PR, we add an additional step for when `os.rename()` raises a `PermissionError`: we fall back to `shutil.move()` on the temp files.
Fix #2937
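The fallback can be sketched like this (the function name `rename_with_retry` is illustrative; the PR's actual helper may differ):

```python
import os
import shutil

def rename_with_retry(src: str, dst: str) -> None:
    """Try os.rename() first; on Windows a temp file can be briefly
    locked (e.g. by an antivirus scan) and raise PermissionError, in
    which case fall back to shutil.move()."""
    try:
        os.rename(src, dst)
    except PermissionError:
        shutil.move(src, dst)
```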
---
**#4824 Fix titles in dataset cards** (PR, closed, 2 comments)
albertvillanova · created 2022-08-11T11:27:48 · updated 2022-08-11T13:46:11 · closed 2022-08-11T12:56:49 · https://github.com/huggingface/datasets/pull/4824
Fix all the titles in the dataset cards, so that they conform to the required format.
---
**#4823 Update data URL in mkqa dataset** (PR, closed, 1 comment)
albertvillanova · created 2022-08-11T09:16:13 · updated 2022-08-11T09:51:50 · closed 2022-08-11T09:37:52 · https://github.com/huggingface/datasets/pull/4823
Update data URL in mkqa dataset.
Fix #4817.
---
**#4821 Fix train_test_split docs** (PR, closed, 1 comment)
NielsRogge · created 2022-08-11T08:55:45 · updated 2022-08-11T09:59:29 · closed 2022-08-11T09:45:40 · https://github.com/huggingface/datasets/pull/4821
I saw that `stratify` is added to the `train_test_split` method as per #4322, hence the docs can be updated.
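For context, `stratify` keeps each label's proportion equal across the two splits; a toy sketch of the idea (a hypothetical helper, not the `datasets` implementation):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_size=0.25, seed=0):
    """Sample test_size of the indices *within each label*, so that
    both splits keep the original label proportions. Illustration
    only; train_test_split(stratify_by_column=...) does this on a
    real Dataset."""
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    rng = random.Random(seed)
    train, test = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        n_test = max(1, round(len(idxs) * test_size))
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

labels = ["pos"] * 8 + ["neg"] * 4
train_idx, test_idx = stratified_split(labels)
```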
---
**#4820 Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.** (issue, closed, 1 comment)
talhaanwarch · labels: bug · created 2022-08-10T19:42:33 · updated 2022-08-10T19:53:10 · closed 2022-08-10T19:53:10 · https://github.com/huggingface/datasets/issues/4820
Hi, when I try to run the `prepare_dataset` function in [fine-tuning ASR tutorial 4](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb), I get this error:
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
There are no other logs available, so I have no clue what the cause is.
```python
def prepare_dataset(batch):
    audio = batch["path"]
    # batched output is "un-batched"
    batch["input_values"] = processor(audio["array"], sampling_rate=audio["sampling_rate"]).input_values[0]
    batch["input_length"] = len(batch["input_values"])
    with processor.as_target_processor():
        batch["labels"] = processor(batch["text"]).input_ids
    return batch

data = data.map(prepare_dataset, remove_columns=data.column_names["train"],
                num_proc=4)
```
There is no traceback except:
`Terminating: fork() called from a process already using GNU OpenMP, this is unsafe.`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
---
**#4819 Add missing language tags to resources** (PR, closed, 1 comment)
albertvillanova · created 2022-08-10T19:06:42 · updated 2022-08-10T19:45:49 · closed 2022-08-10T19:32:15 · https://github.com/huggingface/datasets/pull/4819
Add missing language tags to resources, required by existing datasets on GitHub.
---
**#4818 Add add cc-by-sa-2.5 license tag** (PR, closed, 2 comments)
polinaeterna · created 2022-08-10T17:18:39 · updated 2022-10-04T13:47:24 · closed 2022-10-04T13:47:24 · https://github.com/huggingface/datasets/pull/4818
- [ ] add it to moon-landing
- [ ] add it to hub-docs
---
**#4817 Outdated Link for mkqa Dataset** (issue, closed, 1 comment)
liaeh · labels: bug · created 2022-08-10T12:45:45 · updated 2022-08-11T09:37:52 · closed 2022-08-11T09:37:52 · https://github.com/huggingface/datasets/issues/4817
## Describe the bug
The URL used to download the mkqa dataset is outdated: the loader still points at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz, but the file now lives at https://github.com/apple/ml-mkqa/blob/main/dataset/mkqa.jsonl.gz (the `master` branch has been renamed to `main`).
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("mkqa")
```
## Expected results
downloads the dataset
## Actual results
```python
Downloading builder script:
4.79k/? [00:00<00:00, 201kB/s]
Downloading metadata:
13.2k/? [00:00<00:00, 504kB/s]
Downloading and preparing dataset mkqa/mkqa (download: 11.35 MiB, generated: 34.29 MiB, post-processed: Unknown size, total: 45.65 MiB) to /home/lhr/.cache/huggingface/datasets/mkqa/mkqa/1.0.0/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d...
Downloading data files: 0%
0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
Input In [3], in <cell line: 3>()
1 from datasets import load_dataset
----> 3 dataset = load_dataset("mkqa")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/load.py:1746, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1743 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1745 # Download and prepare data
-> 1746 builder_instance.download_and_prepare(
1747 download_config=download_config,
1748 download_mode=download_mode,
1749 ignore_verifications=ignore_verifications,
1750 try_from_hf_gcs=try_from_hf_gcs,
1751 use_auth_token=use_auth_token,
1752 )
1754 # Build dataset for splits
1755 keep_in_memory = (
1756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1757 )
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
707 # Sync info
708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:1227, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos)
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
769 split_dict = SplitDict(dataset_name=self.name)
770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
File ~/.cache/huggingface/modules/datasets_modules/datasets/mkqa/5401489c674c81257cf563417aaaa5de2c7e26a1090ce9b10eb0404f10003d4d/mkqa.py:130, in Mkqa._split_generators(self, dl_manager)
128 # download and extract URLs
129 urls_to_download = _URLS
--> 130 downloaded_files = dl_manager.download_and_extract(urls_to_download)
132 return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": downloaded_files["train"]})]
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:431, in DownloadManager.download_and_extract(self, url_or_urls)
415 def download_and_extract(self, url_or_urls):
416 """Download and extract given url_or_urls.
417
418 Is roughly equivalent to:
(...)
429 extracted_path(s): `str`, extracted paths of given URL(s).
430 """
--> 431 return self.extract(self.download(url_or_urls))
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:309, in DownloadManager.download(self, url_or_urls)
306 download_func = partial(self._download, download_config=download_config)
308 start_time = datetime.now()
--> 309 downloaded_path_or_paths = map_nested(
310 download_func,
311 url_or_urls,
312 map_tuple=True,
313 num_proc=download_config.num_proc,
314 disable_tqdm=not is_progress_bar_enabled(),
315 desc="Downloading data files",
316 )
317 duration = datetime.now() - start_time
318 logger.info(f"Downloading took {duration.total_seconds() // 60} min")
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:393, in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
--> 393 mapped = [
394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:394, in <listcomp>(.0)
391 num_proc = 1
392 if num_proc <= 1 or len(iterable) <= num_proc:
393 mapped = [
--> 394 _single_map_nested((function, obj, types, None, True, None))
395 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
396 ]
397 else:
398 split_kwds = [] # We organize the splits ourselve (contiguous splits)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py:330, in _single_map_nested(args)
328 # Singleton first to spare some computation
329 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 330 return function(data_struct)
332 # Reduce logging to keep things readable in multiprocessing with tqdm
333 if rank is not None and logging.get_verbosity() < logging.WARNING:
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/download/download_manager.py:335, in DownloadManager._download(self, url_or_filename, download_config)
332 if is_relative_path(url_or_filename):
333 # append the relative path to the base_path
334 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 335 return cached_path(url_or_filename, download_config=download_config)
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:185, in cached_path(url_or_filename, download_config, **download_kwargs)
181 url_or_filename = str(url_or_filename)
183 if is_remote_url(url_or_filename):
184 # URL, so get it from the cache (downloading if necessary)
--> 185 output_path = get_from_cache(
186 url_or_filename,
187 cache_dir=cache_dir,
188 force_download=download_config.force_download,
189 proxies=download_config.proxies,
190 resume_download=download_config.resume_download,
191 user_agent=download_config.user_agent,
192 local_files_only=download_config.local_files_only,
193 use_etag=download_config.use_etag,
194 max_retries=download_config.max_retries,
195 use_auth_token=download_config.use_auth_token,
196 ignore_url_params=download_config.ignore_url_params,
197 download_desc=download_config.download_desc,
198 )
199 elif os.path.exists(url_or_filename):
200 # File, and it exists.
201 output_path = url_or_filename
File ~/repos/punc-cap/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py:530, in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
525 raise FileNotFoundError(
526 f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been"
527 " disabled. To enable file online look-ups, set 'local_files_only' to False."
528 )
529 elif response is not None and response.status_code == 404:
--> 530 raise FileNotFoundError(f"Couldn't find file at {url}")
531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
532 if head_error is not None:
FileNotFoundError: Couldn't find file at https://github.com/apple/ml-mkqa/raw/master/dataset/mkqa.jsonl.gz
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.13.0-40-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
---
**#4816 Update version of opus_paracrawl dataset** (PR, closed, 1 comment)
albertvillanova · created 2022-08-10T05:39:44 · updated 2022-08-12T14:32:29 · closed 2022-08-12T14:17:56 · https://github.com/huggingface/datasets/pull/4816
This PR updates OPUS ParaCrawl from version 7.1 to version 9.
Fix #4815.
---
**#4815 Outdated loading script for OPUS ParaCrawl dataset** (issue, closed, 0 comments)
albertvillanova · labels: dataset bug · created 2022-08-10T05:12:34 · updated 2022-08-12T14:17:57 · closed 2022-08-12T14:17:57 · https://github.com/huggingface/datasets/issues/4815
## Describe the bug
Our loading script for OPUS ParaCrawl loads version 7.1, but the current version is 9.
---
**#4814 Support CSV as metadata file format in AudioFolder/ImageFolder** (issue, closed, 0 comments)
mariosasko · labels: enhancement · created 2022-08-09T14:36:49 · updated 2022-08-31T11:59:08 · closed 2022-08-31T11:59:08 · https://github.com/huggingface/datasets/issues/4814
Requested here: https://discuss.huggingface.co/t/how-to-structure-an-image-dataset-repo-using-the-image-folder-approach/21004. CSV is also used in AutoTrain for specifying metadata in image datasets.
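A `metadata.csv` would presumably mirror the existing `metadata.jsonl` convention (a required `file_name` column plus arbitrary extra columns). A parsing sketch with hypothetical file contents:

```python
import csv
from io import StringIO

# Hypothetical metadata.csv contents, following the metadata.jsonl
# convention of a file_name column plus arbitrary metadata columns:
metadata_csv = """file_name,text
0001.png,a red cat
0002.png,a blue dog
"""

# Index the metadata rows by file name, as a folder-based loader would:
metadata = {row["file_name"]: row for row in csv.DictReader(StringIO(metadata_csv))}
```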
---
**#4813 Fix loading example in opus dataset cards** (PR, closed, 1 comment)
albertvillanova · created 2022-08-09T13:47:38 · updated 2022-08-09T17:52:15 · closed 2022-08-09T17:38:18 · https://github.com/huggingface/datasets/pull/4813
This PR:
- fixes the examples to load the datasets, with the corrected dataset name, in their dataset cards for:
- opus_dgt
- opus_paracrawl
- opus_wikipedia
- fixes their dataset cards with the missing required information: title, data instances/fields/splits
- enumerates the supported languages
- adds a missing citation reference for opus_wikipedia
Related to:
- #4806
---
**#4812 Fix bug in function validate_type for Python >= 3.9** (PR, closed, 1 comment)
albertvillanova · created 2022-08-09T10:32:42 · updated 2022-08-12T13:41:23 · closed 2022-08-12T13:27:04 · https://github.com/huggingface/datasets/pull/4812
Fix `validate_type` function, so that it uses `get_origin` instead. This makes the function forward compatible.
This fixes #4811 because:
```python
In [4]: typing.Optional[str]
Out[4]: typing.Optional[str]
In [5]: get_origin(typing.Optional[str])
Out[5]: typing.Union
```
Fix #4811.
---
**#4811 Bug in function validate_type for Python >= 3.9** (issue, closed, 0 comments)
albertvillanova · labels: bug · created 2022-08-09T10:25:21 · updated 2022-08-12T13:27:05 · closed 2022-08-12T13:27:05 · https://github.com/huggingface/datasets/issues/4811
## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python 3.9:
```python
In [3]: typing.Optional[str]
Out[3]: typing.Optional[str]
```
---
**#4810 Add description to hellaswag dataset** (PR, closed, 2 comments)
julien-c · labels: dataset contribution · created 2022-08-09T10:21:14 · updated 2022-09-23T11:35:38 · closed 2022-09-23T11:33:44 · https://github.com/huggingface/datasets/pull/4810
*(no description)*
---
**#4809 Complete the mlqa dataset card** (PR, closed, 4 comments)
el2e10 · created 2022-08-09T07:38:06 · updated 2022-08-09T16:26:21 · closed 2022-08-09T13:26:43 · https://github.com/huggingface/datasets/pull/4809
I fixed issue #4808.
Details of PR:
- Added languages included in the dataset.
- Added task id and task category.
- Updated the citation information.
Fix #4808.
---
**#4808 Add more information to the dataset card of mlqa dataset** (issue, closed, 2 comments)
el2e10 · created 2022-08-09T07:35:42 · updated 2022-08-09T13:33:23 · closed 2022-08-09T13:33:23 · https://github.com/huggingface/datasets/issues/4808
*(no description)*
---
**#4807 document fix in opus_gnome dataset** (PR, closed, 1 comment)
gojiteji · created 2022-08-09T06:38:13 · updated 2022-08-09T07:28:03 · closed 2022-08-09T07:28:03 · https://github.com/huggingface/datasets/pull/4807
This fixes issue #4805: I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
---
**#4806 Fix opus_gnome dataset card** (PR, closed, 20 comments)
gojiteji · created 2022-08-09T03:40:15 · updated 2022-08-09T12:06:46 · closed 2022-08-09T11:52:04 · https://github.com/huggingface/datasets/pull/4806
This fixes issue #4805: I changed `"gnome"` to `"opus_gnome"` in [README.md](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary).
Fix #4805
---
**#4805 Wrong example in opus_gnome dataset card** (issue, closed, 0 comments)
gojiteji · labels: bug · created 2022-08-09T03:21:27 · updated 2022-08-09T11:52:05 · closed 2022-08-09T11:52:05 · https://github.com/huggingface/datasets/issues/4805
## Describe the bug
I found that [the example on the opus_gnome dataset card](https://github.com/huggingface/datasets/tree/main/datasets/opus_gnome#dataset-summary) doesn't work.
## Steps to reproduce the bug
```python
load_dataset("gnome", lang1="it", lang2="pl")
```
`"gnome"` should be `"opus_gnome"`
## Expected results
```bash
100%
1/1 [00:00<00:00, 42.09it/s]
DatasetDict({
train: Dataset({
features: ['id', 'translation'],
num_rows: 8368
})
})
```
## Actual results
```bash
Couldn't find 'gnome' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/gnome/gnome.py
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.27
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.3
---
**#4804 streaming dataset with concatenating splits raises an error** (issue, open, 4 comments)
Bing-su · labels: bug · created 2022-08-09T02:41:56 · updated 2023-11-25T14:52:09 · https://github.com/huggingface/datasets/issues/4804
## Describe the bug
streaming dataset with concatenating splits raises an error
## Steps to reproduce the bug
```python
from datasets import load_dataset
# no error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation")
```
```python
from datasets import load_dataset
# error
repo = "nateraw/ade20k-tiny"
dataset = load_dataset(repo, split="train+validation", streaming=True)
```
```sh
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-4-a6ae02d63899>](https://localhost:8080/#) in <module>()
3 # error
4 repo = "nateraw/ade20k-tiny"
----> 5 dataset = load_dataset(repo, split="train+validation", streaming=True)
1 frames
[/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path)
1030 splits_generator = splits_generators[split]
1031 else:
-> 1032 raise ValueError(f"Bad split: {split}. Available splits: {list(splits_generators)}")
1033
1034 # Create a dataset for each of the given splits
ValueError: Bad split: train+validation. Available splits: ['validation', 'train']
```
[Colab](https://colab.research.google.com/drive/1wMj08_0bym9jnGgByib4lsBPu8NCZBG9?usp=sharing)
## Expected results
Load successfully, or raise an error saying that split concatenation is not supported in streaming mode.
## Actual results
The `ValueError` above.
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0 (windows11 x64)
- Python version: 3.9.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
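Until split concatenation is supported in streaming mode, one workaround is to stream each split separately and chain the resulting iterables; a sketch with generators standing in for the two streamed `IterableDataset`s:

```python
from itertools import chain

# Stand-ins for load_dataset(repo, split="train", streaming=True)
# and load_dataset(repo, split="validation", streaming=True):
train_stream = ({"id": i, "split": "train"} for i in range(3))
val_stream = ({"id": i, "split": "validation"} for i in range(2))

# Iterate over train first, then validation, as "train+validation" would:
combined = list(chain(train_stream, val_stream))
```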
---
**#4803 Support `pipeline` argument in inspect.py functions** (issue, open, 1 comment)
severo · labels: enhancement · created 2022-08-08T16:01:24 · updated 2023-09-25T12:21:35 · https://github.com/huggingface/datasets/issues/4803
**Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/huggingface/datasets/blob/main/src/datasets/inspect.py#L373-L375
which is called by other functions, e.g. `get_dataset_split_names`.
**Additional context**
The dataset viewer is not working out-of-the-box on `wikipedia` for this reason:
https://huggingface.co/datasets/wikipedia/viewer
<img width="637" alt="Capture d’écran 2022-08-08 à 12 01 16" src="https://user-images.githubusercontent.com/1676121/183461838-5330783b-0269-4ba7-a999-314cde2023d8.png">
---
**#4802 `with_format` behavior is inconsistent on different datasets** (issue, open, 1 comment)
fxmarty · labels: bug · created 2022-08-08T10:41:34 · updated 2022-08-09T16:49:09 · https://github.com/huggingface/datasets/issues/4802
## Describe the bug
I found a case where `with_format` does not transform the dataset to the requested format.
## Steps to reproduce the bug
Run:
```python
from transformers import AutoTokenizer, AutoFeatureExtractor
from datasets import load_dataset

raw = load_dataset("glue", "sst2", split="train")
raw = raw.select(range(100))

tokenizer = AutoTokenizer.from_pretrained("philschmid/tiny-bert-sst2-distilled")

def preprocess_func(examples):
    return tokenizer(examples["sentence"], padding=True, max_length=256, truncation=True)

data = raw.map(preprocess_func, batched=True)
print(type(data[0]["input_ids"]))

data = data.with_format("torch", columns=["input_ids"])
print(type(data[0]["input_ids"]))
```
This prints, as expected:
```python
<class 'list'>
<class 'torch.Tensor'>
```
Then run:
```python
raw = load_dataset("beans", split="train")
raw = raw.select(range(100))

preprocessor = AutoFeatureExtractor.from_pretrained("nateraw/vit-base-beans")

def preprocess_func(examples):
    imgs = [img.convert("RGB") for img in examples["image"]]
    return preprocessor(imgs)

data = raw.map(preprocess_func, batched=True)
print(type(data[0]["pixel_values"]))

data = data.with_format("torch", columns=["pixel_values"])
print(type(data[0]["pixel_values"]))
```
This unexpectedly prints:
```python
<class 'list'>
<class 'list'>
```
## Expected results
`with_format` should transform the dataset into the requested format: `type(data[0]["pixel_values"])` should be `torch.Tensor`.
## Actual results
`type(data[0]["pixel_values"])` is still `list`.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: dev version, commit 44af3fafb527302282f6b6507b952de7435f0979
- Platform: Linux
- Python version: 3.9.12
- PyArrow version: 7.0.0
---
**#4801 Fix fine classes in trec dataset** (PR, closed, 1 comment)
albertvillanova · created 2022-08-08T05:11:02 · updated 2022-08-22T16:29:14 · closed 2022-08-22T16:14:15 · https://github.com/huggingface/datasets/pull/4801
This PR:
- replaces the fine labels, so that there are 50 instead of 47
- re-orders all labels (fine and coarse), now that the new ones are added, so that they align with the order in: https://cogcomp.seas.upenn.edu/Data/QA/QC/definition.html
- fixes the feature names: `fine_label` instead of `label-fine`
  - snake_case (underscores instead of hyphens)
  - words reordered
Fix #4790.
---
**#4800 support LargeListArray in pyarrow** (PR, closed, 22 comments)
Jiaxin-Wen · created 2022-08-08T03:58:46 · updated 2024-09-27T09:54:17 · closed 2024-08-12T14:43:46 · https://github.com/huggingface/datasets/pull/4800
```python
import numpy as np
import datasets
a = np.zeros((5000000, 768))
res = datasets.Dataset.from_dict({'embedding': a})
'''
File '/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/arrow_writer.py', line 178, in __arrow_array__
out = numpy_to_pyarrow_listarray(data)
File "/home/wenjiaxin/anaconda3/envs/data/lib/python3.8/site-packages/datasets/features/features.py", line 1173, in numpy_to_pyarrow_listarray
offsets = pa.array(np.arange(n_offsets + 1) * step_offsets, type=pa.int32())
File "pyarrow/array.pxi", line 312, in pyarrow.lib.array
File "pyarrow/array.pxi", line 83, in pyarrow.lib._ndarray_to_array
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 2147483904 not in range: -2147483648 to 2147483647
'''
```
Loading a large numpy array currently raises the error above as the type of offsets is `int32`.
And pyarrow has supported [LargeListArray](https://arrow.apache.org/docs/python/generated/pyarrow.LargeListArray.html) for this case.
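The overflow is easy to see from the offsets arithmetic; a sketch of the size check in plain Python (not the library code):

```python
INT32_MAX = 2**31 - 1  # largest offset a plain ListArray can store

def required_offset_type(n_rows: int, row_length: int) -> str:
    """ListArray offsets are int32, so once the flattened length
    n_rows * row_length exceeds 2**31 - 1 the data needs a
    LargeListArray with int64 offsets."""
    return "int64" if n_rows * row_length > INT32_MAX else "int32"

# The (5_000_000, 768) array from the traceback overflows int32:
assert required_offset_type(5_000_000, 768) == "int64"
assert required_offset_type(1_000, 768) == "int32"
```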
---
**#4799 video dataset loader/parser** (issue, closed, 3 comments)
verbiiyo · labels: enhancement · created 2022-08-07T01:54:12 · updated 2023-10-01T00:08:31 · closed 2022-08-09T16:42:51 · https://github.com/huggingface/datasets/issues/4799
you know how you can [use `load_dataset` with any arbitrary csv file](https://huggingface.co/docs/datasets/loading#csv)? and you can also [use it to load a local image dataset](https://huggingface.co/docs/datasets/image_load#local-files)?
could you please add functionality to load a video dataset? it would be really cool if i could point it to a bunch of video files and use pytorch to start looping through batches of videos. like if my batch size is 16, each sample in the batch is a frame from a video. i'm competing in the [minerl challenge](https://www.aicrowd.com/challenges/neurips-2022-minerl-basalt-competition), and it would be awesome to use the HF ecosystem.
---
**#4798 Shard generator** (PR, closed, 6 comments)
marianna13 · created 2022-08-06T09:14:06 · updated 2022-10-03T15:35:10 · closed 2022-10-03T15:35:10 · https://github.com/huggingface/datasets/pull/4798
Hi everyone! I was using Hugging Face datasets to process some very large datasets and found it would be quite handy to have a feature that splits these large datasets into equally sized chunks, and even better, to be able to run through these chunks one by one in a simple and convenient way. So I added a method called `shard_generator()` to the main `Dataset` class. It works like the `shard` method, but returns a generator of datasets of equal size (defined by the `shard_size` attribute).
Example:
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("rotten_tomatoes", split="validation")
>>> ds
Dataset({
features: ['text', 'label'],
num_rows: 1066
})
>>> next(ds.shard_generator(300))
Dataset({
features: ['text', 'label'],
num_rows: 300
})
```
I hope it can be helpful to someone. Thanks!
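The behavior above can be sketched over a plain list (the proposed method slices a `Dataset` instead; this is an illustration, not the PR's code):

```python
def shard_generator(rows, shard_size):
    """Yield successive chunks of `rows` of length shard_size;
    the final chunk may be smaller."""
    for start in range(0, len(rows), shard_size):
        yield rows[start:start + shard_size]

# 1066 rows with shard_size=300, matching the rotten_tomatoes example:
shards = list(shard_generator(list(range(1066)), 300))
```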
---
**#4797 Torgo dataset creation** (PR, closed, 1 comment)
YingLi001 · created 2022-08-05T14:18:26 · updated 2022-08-09T18:46:00 · closed 2022-08-09T18:46:00 · https://github.com/huggingface/datasets/pull/4797
*(no description)*
---
**#4796 ArrowInvalid: Could not convert `<PIL.Image.Image image mode=RGB` when adding image to Dataset** (issue, open, 19 comments)
NielsRogge · labels: bug · created 2022-08-05T12:41:19 · updated 2024-11-29T16:35:17 · https://github.com/huggingface/datasets/issues/4796
## Describe the bug
When adding a Pillow image to an existing Dataset on the hub, `add_item` fails due to the Pillow image not being automatically converted into the Image feature.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from PIL import Image
dataset = load_dataset("hf-internal-testing/example-documents")
# load any random Pillow image
image = Image.open("/content/cord_example.png").convert("RGB")
new_image = {'image': image}
dataset['test'] = dataset['test'].add_item(new_image)
```
## Expected results
The image should be automatically cast to the Image feature when using `add_item`. For now, this can be worked around by using `encode_example`:
```python
import datasets
feature = datasets.Image(decode=False)
new_image = {'image': feature.encode_example(image)}
dataset['test'] = dataset['test'].add_item(new_image)
```
## Actual results
```
ArrowInvalid: Could not convert <PIL.Image.Image image mode=RGB size=576x864 at 0x7F7CCC4589D0> with type Image: did not recognize Python value type when inferring an Arrow data type
```
---
**#4795 Missing MBPP splits** (issue, closed, 4 comments)
stadlerb · labels: bug · created 2022-08-05T06:51:01 · updated 2022-09-13T12:27:24 · closed 2022-09-13T12:27:24 · https://github.com/huggingface/datasets/issues/4795
(@albertvillanova)
The [MBPP dataset on the Hub](https://huggingface.co/datasets/mbpp) has only a test split for both its "full" and its "sanitized" subset, while the [paper](https://arxiv.org/abs/2108.07732) states in subsection 2.1 regarding the full split:
> In the experiments described later in the paper, we hold out 10 problems for **few-shot prompting**, another 500 as our **test** dataset (which is used to evaluate both few-shot inference and fine-tuned models), 374 problems for **fine-tuning**, and the rest for **validation**.
If the dataset on the Hub is meant to reproduce what the original authors used as closely as possible, this four-way split should be reflected.
The paper doesn't explicitly state the task_id ranges of the splits, but the [GitHub readme](https://github.com/google-research/google-research/tree/master/mbpp) referenced in the paper specifies exact task_id ranges, although it misstates the total number of samples:
> We specify a train and test split to use for evaluation. Specifically:
>
> * Task IDs 11-510 are used for evaluation.
> * Task IDs 1-10 and 511-1000 are used for training and/or prompting. We typically used 1-10 for few-shot prompting, although you can feel free to use any of the training examples.
I.e. the few-shot, train and validation splits are combined into one split, with a soft suggestion of using the first ten for few-shot prompting. It is not explicitly stated whether the 374 fine-tuning samples mentioned in the paper have task_id 511 to 784 or 601 to 974 or are randomly sampled from task_id 511 to 974.
Regarding the "sanitized" split the paper states the following:
> For evaluations involving the edited dataset, we perform comparisons with 100 problems that appear in both the original and edited dataset, using the same held out 10 problems for few-shot prompting and 374 problems for fine-tuning.
The statement doesn't appear to be very precise, as among the 10 few-shot problems, those with task_id 1, 5 and 10 are not even part of the sanitized variant, and many from the task_id range 511 to 974 are missing (e.g. task_id 511 to 553). I suppose the idea is that the task_id ranges for each split remain the same, even if some of the task_ids are not present. That would result in 7 few-shot, 257 test, 141 train and 22 validation examples in the sanitized split.
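Under one consistent reading of the paper and readme, the four-way split is a simple mapping from task_id to split name. A minimal sketch — note the exact ranges, especially the validation/train boundary at 600/601, are an assumption pieced together from the sample counts (374 fine-tuning examples), since neither source states them explicitly:

```python
from collections import Counter

def mbpp_split(task_id: int) -> str:
    # Assumed ranges: 1-10 few-shot prompts, 11-510 test,
    # 511-600 validation, 601-974 fine-tuning (374 examples).
    if 1 <= task_id <= 10:
        return "prompt"
    if 11 <= task_id <= 510:
        return "test"
    if 511 <= task_id <= 600:
        return "validation"
    return "train"

counts = Counter(mbpp_split(i) for i in range(1, 975))
assert counts == {"prompt": 10, "test": 500, "validation": 90, "train": 374}
```

The counts line up with the paper's "10 few-shot, 500 test, 374 fine-tuning, rest validation" description.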
| false
|
1,328,593,929
|
https://api.github.com/repos/huggingface/datasets/issues/4792
|
https://github.com/huggingface/datasets/issues/4792
| 4,792
|
Add DocVQA
|
open
| 1
| 2022-08-04T13:07:26
| 2022-08-08T05:31:20
| null |
NielsRogge
|
[
"dataset request"
] |
## Adding a Dataset
- **Name:** DocVQA
- **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a “purpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this information.
- **Paper:** https://arxiv.org/abs/2007.00398
- **Data:** https://www.docvqa.org/datasets/docvqa
- **Motivation:** Models like LayoutLM and Donut in the Transformers library are fine-tuned on DocVQA. Would be very handy to directly load this dataset from the hub.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| false
|
1,328,571,064
|
https://api.github.com/repos/huggingface/datasets/issues/4791
|
https://github.com/huggingface/datasets/issues/4791
| 4,791
|
Dataset Viewer issue for Team-PIXEL/rendered-wikipedia-english
|
closed
| 1
| 2022-08-04T12:49:16
| 2022-08-04T13:43:16
| 2022-08-04T13:43:16
|
xplip
|
[
"dataset-viewer"
] |
### Link
https://huggingface.co/datasets/Team-PIXEL/rendered-wikipedia-english/viewer/rendered-wikipedia-en/train
### Description
The dataset can be loaded fine but the viewer shows this error:
```
Server Error
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
I'm guessing this is because I recently renamed the dataset. Based on related issues (e.g. https://github.com/huggingface/datasets/issues/4759), is there something server-side that needs to be refreshed?
### Owner
Yes
| false
|
1,328,546,904
|
https://api.github.com/repos/huggingface/datasets/issues/4790
|
https://github.com/huggingface/datasets/issues/4790
| 4,790
|
Issue with fine classes in trec dataset
|
closed
| 0
| 2022-08-04T12:28:51
| 2022-08-22T16:14:16
| 2022-08-22T16:14:16
|
albertvillanova
|
[
"bug"
] |
## Describe the bug
According to their paper, the TREC dataset contains 2 kinds of classes:
- 6 coarse classes: TREC-6
- 50 fine classes: TREC-50
However, our implementation only has 47 (instead of 50) fine classes. The reason for this is that we only considered the last segment of the label, which is repeated for several coarse classes:
- We have one `desc` fine label instead of 2:
- `DESC:desc`
- `HUM:desc`
- We have one `other` fine label instead of 3:
- `ENTY:other`
- `LOC:other`
- `NUM:other`
From their paper:
> We define a two-layered taxonomy, which represents a natural semantic classification for typical answers in the TREC task. The hierarchy contains 6 coarse classes and 50 fine classes,
> Each coarse class contains a non-overlapping set of fine classes.
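A minimal illustration of the collapse, using the label strings listed above:

```python
fine_labels = ["DESC:desc", "HUM:desc", "ENTY:other", "LOC:other", "NUM:other"]

# Keeping only the last segment merges distinct fine classes:
collapsed = {label.split(":")[1] for label in fine_labels}
assert collapsed == {"desc", "other"}  # 5 classes collapsed into 2

# Keeping the full "COARSE:fine" pair preserves all of them:
assert len(set(fine_labels)) == 5
```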
| false
|
1,328,409,253
|
https://api.github.com/repos/huggingface/datasets/issues/4789
|
https://github.com/huggingface/datasets/pull/4789
| 4,789
|
Update doc upload_dataset.mdx
|
closed
| 1
| 2022-08-04T10:24:00
| 2022-09-09T16:37:10
| 2022-09-09T16:34:58
|
mishig25
|
[] | null | true
|
1,328,246,021
|
https://api.github.com/repos/huggingface/datasets/issues/4788
|
https://github.com/huggingface/datasets/pull/4788
| 4,788
|
Fix NonMatchingChecksumError in mbpp dataset
|
closed
| 4
| 2022-08-04T08:17:40
| 2022-08-04T17:34:00
| 2022-08-04T17:21:01
|
albertvillanova
|
[] |
Fix issue reported on the Hub: https://huggingface.co/datasets/mbpp/discussions/1
Fix #4787.
| true
|
1,328,243,911
|
https://api.github.com/repos/huggingface/datasets/issues/4787
|
https://github.com/huggingface/datasets/issues/4787
| 4,787
|
NonMatchingChecksumError in mbpp dataset
|
closed
| 0
| 2022-08-04T08:15:51
| 2022-08-04T17:21:01
| 2022-08-04T17:21:01
|
albertvillanova
|
[
"bug"
] |
## Describe the bug
As reported on the Hub [Fix Checksum Mismatch](https://huggingface.co/datasets/mbpp/discussions/1), there is a `NonMatchingChecksumError` when loading mbpp dataset
## Steps to reproduce the bug
```python
ds = load_dataset("mbpp", "full")
```
## Expected results
Loading of the dataset without any exception raised.
## Actual results
```
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-1-a3fbdd3ed82e> in <module>
----> 1 ds = load_dataset("mbpp", "full")
.../huggingface/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1791
1792 # Download and prepare data
-> 1793 builder_instance.download_and_prepare(
1794 download_config=download_config,
1795 download_mode=download_mode,
.../huggingface/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
702 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
703 if not downloaded_from_gcs:
--> 704 self._download_and_prepare(
705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
706 )
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
1225
1226 def _download_and_prepare(self, dl_manager, verify_infos):
-> 1227 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
1228
1229 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable:
.../huggingface/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
773 # Checksums verification
774 if verify_infos and dl_manager.record_checksums:
--> 775 verify_checksums(
776 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
777 )
.../huggingface/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://raw.githubusercontent.com/google-research/google-research/master/mbpp/mbpp.jsonl']
```
| false
|
1,327,340,828
|
https://api.github.com/repos/huggingface/datasets/issues/4786
|
https://github.com/huggingface/datasets/issues/4786
| 4,786
|
.save_to_disk('path', fs=s3) TypeError
|
closed
| 0
| 2022-08-03T14:49:29
| 2022-08-03T15:23:00
| 2022-08-03T15:23:00
|
h-k-dev
|
[
"bug"
] |
The following code:
```python
import datasets
train_dataset, test_dataset = load_dataset("imdb", split=["train", "test"])
s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
train_dataset.save_to_disk("s3://datasets/", fs=s3)
```
produces following traceback:
```shell
File "C:\Users\Hong Knop\AppData\Local\Programs\Python\Python310\lib\site-packages\botocore\auth.py", line 374, in scope
return '/'.join(scope)
```
Invoking `print(scope)` in `auth.py` (line 373) shows this:
```python
[('4VA08VLL3VTKQJKCAI8M',), '20220803', 'us-east-1', 's3', 'aws4_request']
```
| false
|
1,327,225,826
|
https://api.github.com/repos/huggingface/datasets/issues/4785
|
https://github.com/huggingface/datasets/pull/4785
| 4,785
|
Require torchaudio<0.12.0 in docs
|
closed
| 1
| 2022-08-03T13:32:00
| 2022-08-03T15:07:43
| 2022-08-03T14:52:16
|
albertvillanova
|
[] |
This PR adds to docs the requirement of torchaudio<0.12.0 to avoid RuntimeError.
Subsequent to PR:
- #4777
| true
|
1,326,395,280
|
https://api.github.com/repos/huggingface/datasets/issues/4784
|
https://github.com/huggingface/datasets/issues/4784
| 4,784
|
Add Multiface dataset
|
open
| 3
| 2022-08-02T21:00:22
| 2022-08-08T14:42:36
| null |
osanseviero
|
[
"dataset request",
"vision"
] |
## Adding a Dataset
- **Name:** Multiface dataset
- **Description:** High-quality recordings of the faces of 13 identities, each captured in a multi-view capture stage performing various facial expressions. An average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames per subject, with a capture rate of 30 fps.
- **Data:** https://github.com/facebookresearch/multiface
The whole dataset is 65TB though, so I'm not sure
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| false
|
1,326,375,011
|
https://api.github.com/repos/huggingface/datasets/issues/4783
|
https://github.com/huggingface/datasets/pull/4783
| 4,783
|
Docs for creating a loading script for image datasets
|
closed
| 7
| 2022-08-02T20:36:03
| 2022-09-09T17:08:14
| 2022-09-07T19:07:34
|
stevhliu
|
[
"documentation"
] |
This PR is a first draft of creating a loading script for image datasets. Feel free to let me know if there are any specificities I'm missing for this. 🙂
To do:
- [x] Document how to create different configurations.
| true
|
1,326,247,158
|
https://api.github.com/repos/huggingface/datasets/issues/4782
|
https://github.com/huggingface/datasets/issues/4782
| 4,782
|
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648
|
closed
| 5
| 2022-08-02T18:36:05
| 2022-08-22T09:46:28
| 2022-08-20T02:11:53
|
conceptofmind
|
[
"bug"
] |
## Describe the bug
Following the example in CodeParrot, I receive an array size limitation error when deduplicating larger datasets.
## Steps to reproduce the bug
```python
dataset_name = "the_pile"
ds = load_dataset(dataset_name, split="train")
ds = ds.map(preprocess, num_proc=num_workers)
uniques = set(ds.unique("hash"))
```
Gists for minimum reproducible example:
https://gist.github.com/conceptofmind/c5804428ea1bd89767815f9cd5f02d9a
https://gist.github.com/conceptofmind/feafb07e236f28d79c2d4b28ffbdb6e2
## Expected results
Chunking and writing out a deduplicated dataset.
## Actual results
```
return dataset._data.column(column).unique().to_pylist()
File "pyarrow/table.pxi", line 394, in pyarrow.lib.ChunkedArray.unique
File "pyarrow/_compute.pyx", line 531, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 330, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 124, in pyarrow.lib.check_status
pyarrow.lib.ArrowCapacityError: array cannot contain more than 2147483646 bytes, have 2147483648
```
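A workaround is to accumulate the uniques batch by batch instead of asking Arrow to materialize one array larger than 2 GiB. A minimal pure-Python sketch of the idea — here `batches` stands in for batched reads of the `hash` column (e.g. iterating the dataset with `batched=True`), which is an assumed access pattern:

```python
def unique_in_batches(batches, key="hash"):
    # Accumulate uniques incrementally so no single Arrow array
    # larger than 2 GiB ever has to be built.
    seen = set()
    for batch in batches:
        seen.update(batch[key])
    return seen

# `batches` stands in for batched reads of the "hash" column:
batches = [{"hash": ["a", "b"]}, {"hash": ["b", "c"]}, {"hash": ["c"]}]
assert unique_in_batches(batches) == {"a", "b", "c"}
```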
| false
|
1,326,114,161
|
https://api.github.com/repos/huggingface/datasets/issues/4781
|
https://github.com/huggingface/datasets/pull/4781
| 4,781
|
Fix label renaming and add a battery of tests
|
closed
| 12
| 2022-08-02T16:42:07
| 2022-09-12T11:27:06
| 2022-09-12T11:24:45
|
Rocketknight1
|
[] |
This PR makes some changes to label renaming in `to_tf_dataset()`, both to fix some issues when users input something we weren't expecting, and also to make it easier to deprecate label renaming in future, if/when we want to move this special-casing logic to a function in `transformers`.
The main changes are:
- Label renaming now only happens when the `auto_rename_labels` argument is set. For backward compatibility, this defaults to `True` for now.
- If the user requests "label" but the data collator renames that column to "labels", the label renaming logic will now handle that case correctly.
- Added a battery of tests to make this more reliable in future.
- Adds an optimization to loading in `to_tf_dataset()` for unshuffled datasets (uses slicing instead of a list of indices)
Fixes #4772
| true
|
1,326,034,767
|
https://api.github.com/repos/huggingface/datasets/issues/4780
|
https://github.com/huggingface/datasets/pull/4780
| 4,780
|
Remove apache_beam import from module level in natural_questions dataset
|
closed
| 1
| 2022-08-02T15:34:54
| 2022-08-02T16:16:33
| 2022-08-02T16:03:17
|
albertvillanova
|
[] |
Instead of importing `apache_beam` at the module level, import it in the method `_build_pcollection`.
Fix #4779.
| true
|
1,325,997,225
|
https://api.github.com/repos/huggingface/datasets/issues/4779
|
https://github.com/huggingface/datasets/issues/4779
| 4,779
|
Loading natural_questions requires apache_beam even with existing preprocessed data
|
closed
| 0
| 2022-08-02T15:06:57
| 2022-08-02T16:03:18
| 2022-08-02T16:03:18
|
albertvillanova
|
[
"bug"
] |
## Describe the bug
When loading "natural_questions", the package "apache_beam" is required:
```
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
This requirement is unnecessary, since preprocessed data already exists and the script just needs to download it.
## Steps to reproduce the bug
```python
load_dataset("natural_questions", "dev", split="validation", revision="main")
```
## Expected results
No ImportError raised.
## Actual results
```
ImportError Traceback (most recent call last)
[<ipython-input-3-c938e7c05d02>](https://localhost:8080/#) in <module>()
----> 1 from datasets import load_dataset; ds = load_dataset("natural_questions", "dev", split="validation", revision="main")
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1732 revision=revision,
1733 use_auth_token=use_auth_token,
-> 1734 **config_kwargs,
1735 )
1736
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs)
1504 download_mode=download_mode,
1505 data_dir=data_dir,
-> 1506 data_files=data_files,
1507 )
1508
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
1246 ) from None
-> 1247 raise e1 from None
1248 else:
1249 raise FileNotFoundError(
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1180 download_config=download_config,
1181 download_mode=download_mode,
-> 1182 dynamic_modules_path=dynamic_modules_path,
1183 ).get_module()
1184 elif path.count("/") == 1: # community dataset on the Hub
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in get_module(self)
490 base_path=hf_github_url(path=self.name, name="", revision=revision),
491 imports=imports,
--> 492 download_config=self.download_config,
493 )
494 additional_files = [(config.DATASETDICT_INFOS_FILENAME, dataset_infos_path)] if dataset_infos_path else []
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in _download_additional_modules(name, base_path, imports, download_config)
214 _them_str = "them" if len(needs_to_be_installed) > 1 else "it"
215 raise ImportError(
--> 216 f"To be able to use {name}, you need to install the following {_depencencies_str}: "
217 f"{', '.join(needs_to_be_installed)}.\nPlease install {_them_str} using 'pip install "
218 f"{' '.join(needs_to_be_installed.values())}' for instance'"
ImportError: To be able to use natural_questions, you need to install the following dependency: apache_beam.
Please install it using 'pip install apache_beam' for instance'
```
## Environment info
Colab notebook.
| false
|
1,324,928,750
|
https://api.github.com/repos/huggingface/datasets/issues/4778
|
https://github.com/huggingface/datasets/pull/4778
| 4,778
|
Update local loading script docs
|
closed
| 5
| 2022-08-01T20:21:07
| 2022-08-23T16:32:26
| 2022-08-23T16:32:22
|
stevhliu
|
[
"documentation"
] |
This PR clarifies the local loading script section to include how to load a dataset after you've modified the local loading script (closes #4732).
| true
|
1,324,548,784
|
https://api.github.com/repos/huggingface/datasets/issues/4777
|
https://github.com/huggingface/datasets/pull/4777
| 4,777
|
Require torchaudio<0.12.0 to avoid RuntimeError
|
closed
| 1
| 2022-08-01T14:50:50
| 2022-08-02T17:35:14
| 2022-08-02T17:21:39
|
albertvillanova
|
[] |
Related to:
- https://github.com/huggingface/transformers/issues/18379
Fix partially #4776.
| true
|
1,324,493,860
|
https://api.github.com/repos/huggingface/datasets/issues/4776
|
https://github.com/huggingface/datasets/issues/4776
| 4,776
|
RuntimeError when using torchaudio 0.12.0 to load MP3 audio file
|
closed
| 3
| 2022-08-01T14:11:23
| 2023-03-02T15:58:16
| 2023-03-02T15:58:15
|
albertvillanova
|
[] |
Current version of `torchaudio` (0.12.0) raises a RuntimeError when trying to use `sox_io` backend but non-Python dependency `sox` is not installed:
https://github.com/pytorch/audio/blob/2e1388401c434011e9f044b40bc8374f2ddfc414/torchaudio/backend/sox_io_backend.py#L21-L29
```python
def _fail_load(
filepath: str,
frame_offset: int = 0,
num_frames: int = -1,
normalize: bool = True,
channels_first: bool = True,
format: Optional[str] = None,
) -> Tuple[torch.Tensor, int]:
raise RuntimeError("Failed to load audio from {}".format(filepath))
```
Maybe we should raise a more actionable error message so that the user knows how to fix it.
UPDATE:
- this is an incompatibility between the latest torchaudio (0.12.0) and the sox backend
TODO:
- [x] as a temporary solution, we should recommend installing torchaudio<0.12.0
- #4777
- #4785
- [ ] however, a stable solution must be found for torchaudio>=0.12.0
Related to:
- https://github.com/huggingface/transformers/issues/18379
| false
|
1,324,136,486
|
https://api.github.com/repos/huggingface/datasets/issues/4775
|
https://github.com/huggingface/datasets/issues/4775
| 4,775
|
Streaming not supported in Theivaprakasham/wildreceipt
|
closed
| 1
| 2022-08-01T09:46:17
| 2022-08-01T10:30:29
| 2022-08-01T10:30:29
|
NitishkKarra
|
[
"streaming"
] |
### Link
_No response_
### Description
_No response_
### Owner
_No response_
| false
|
1,323,375,844
|
https://api.github.com/repos/huggingface/datasets/issues/4774
|
https://github.com/huggingface/datasets/issues/4774
| 4,774
|
Training hangs at the end of epoch, with set_transform/with_transform+multiple workers
|
open
| 0
| 2022-07-31T06:32:28
| 2022-07-31T06:36:43
| null |
memray
|
[
"bug"
] |
## Describe the bug
I use `load_dataset()` (I tried with [wiki](https://huggingface.co/datasets/wikipedia) and my own JSON data) and use `set_transform`/`with_transform` for preprocessing, but training hangs at the end of the first epoch if `dataloader_num_workers>=1`. There is no problem with a single worker.
## Steps to reproduce the bug
```python
train_dataset = datasets.load_dataset("wikipedia", "20220301.en",
split='train',
cache_dir=model_args.cache_dir,
streaming=False)
train_dataset.set_transform(psg_parse_fn)
train_dataloader = DataLoader(
train_dataset,
batch_size=args.train_batch_size,
sampler=DistributedSampler(train_dataset),
collate_fn=data_collator,
drop_last=args.dataloader_drop_last,
num_workers=args.dataloader_num_workers,
)
```
## Expected results
## Actual results
It simply hangs. The step at which it hangs is num_examples/batch_size, i.e. the end of one epoch.
## Environment info
- `datasets` version: 2.4.1.dev0
- Platform: Linux-5.4.170+-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 8.0.0
- Pandas version: 1.4.1
| false
|
1,322,796,721
|
https://api.github.com/repos/huggingface/datasets/issues/4773
|
https://github.com/huggingface/datasets/pull/4773
| 4,773
|
Document loading from relative path
|
closed
| 5
| 2022-07-29T23:32:21
| 2022-08-25T18:36:45
| 2022-08-25T18:34:23
|
stevhliu
|
[
"documentation"
] |
This PR describes loading a dataset from the Hub by specifying a relative path in `data_dir` or `data_files` in `load_dataset` (see #4757).
| true
|
1,322,693,123
|
https://api.github.com/repos/huggingface/datasets/issues/4772
|
https://github.com/huggingface/datasets/issues/4772
| 4,772
|
AssertionError when using label_cols in to_tf_dataset
|
closed
| 5
| 2022-07-29T21:32:12
| 2022-09-12T11:24:46
| 2022-09-12T11:24:46
|
lehrig
|
[
"bug"
] |
## Describe the bug
An incorrect `AssertionError` is raised when using `label_cols` in `to_tf_dataset` and the label's key name is `label`.
The assertion is in this line:
https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/arrow_dataset.py#L475
## Steps to reproduce the bug
```python
from datasets import load_dataset
from transformers import DefaultDataCollator
dataset = load_dataset('glue', 'mrpc', split='train')
tf_dataset = dataset.to_tf_dataset(
columns=["sentence1", "sentence2", "idx"],
label_cols=["label"],
batch_size=16,
collate_fn=DefaultDataCollator(return_tensors="tf"),
)
```
## Expected results
No assertion error.
## Actual results
```
AssertionError: in user code:
File "/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 475, in split_features_and_labels *
assert set(features.keys()).union(labels.keys()) == set(input_batch.keys())
```
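A minimal illustration of why the assertion trips: `DefaultDataCollator` renames the `label` column to `labels`, so the key requested via `label_cols` no longer matches the keys of the collated batch (the dict values below are placeholders):

```python
features = {"input_ids": [101, 102]}  # requested via `columns`
labels = {"label": [1]}               # requested via `label_cols`
# what the collator actually emits ("label" renamed to "labels"):
collated_batch = {"input_ids": [101, 102], "labels": [1]}

# The equality checked by the assertion no longer holds:
assert set(features) | set(labels) != set(collated_batch)
```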
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 7.0.0
- Pandas version: 1.4.3
| false
|
1,322,600,725
|
https://api.github.com/repos/huggingface/datasets/issues/4771
|
https://github.com/huggingface/datasets/pull/4771
| 4,771
|
Remove dummy data generation docs
|
closed
| 1
| 2022-07-29T19:20:46
| 2022-08-03T00:04:01
| 2022-08-02T23:50:29
|
stevhliu
|
[
"documentation"
] |
This PR removes instructions to generate dummy data since that is no longer necessary for datasets that are uploaded to the Hub instead of our GitHub repo.
Close #4744
| true
|
1,322,147,855
|
https://api.github.com/repos/huggingface/datasets/issues/4770
|
https://github.com/huggingface/datasets/pull/4770
| 4,770
|
fix typo
|
closed
| 2
| 2022-07-29T11:46:12
| 2022-07-29T16:02:07
| 2022-07-29T16:02:07
|
Jiaxin-Wen
|
[] |
By defaul -> By default
| true
|
1,322,121,554
|
https://api.github.com/repos/huggingface/datasets/issues/4769
|
https://github.com/huggingface/datasets/issues/4769
| 4,769
|
Fail to process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96.
|
open
| 0
| 2022-07-29T11:18:24
| 2022-07-29T11:18:24
| null |
zhuango
|
[
"bug"
] |
## Describe the bug
datasets fail to process SQuADv1.1 with max_seq_length=128, doc_stride=96 when calling datasets["train"].train_dataset.map().
## Steps to reproduce the bug
I used the Hugging Face [TF2 question-answering examples](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering). My script is as follows:
```
python run_qa.py \
--model_name_or_path $BERT_DIR \
--dataset_name $SQUAD_DIR \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 128 \
--doc_stride 96 \
--output_dir $OUTPUT \
--save_steps 10000 \
--overwrite_cache \
--overwrite_output_dir \
```
## Expected results
Normally process SQuADv1.1 datasets with max_seq_length=128, doc_stride=96.
## Actual results
```
INFO:__main__:Padding all batches to max length because argument was set or we're on TPU.
WARNING:datasets.fingerprint:Parameter 'function'=<function main.<locals>.prepare_train_features at 0x7f15bc2d07a0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.
0%| | 0/88 [00:00<?, ?ba/s]thread '<unnamed>' panicked at 'assertion failed: stride < max_len', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/encoding.rs:311:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
0%| | 0/88 [00:00<?, ?ba/s]
Traceback (most recent call last):
File "run_qa.py", line 743, in <module>
main()
File "run_qa.py", line 485, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2394, in map
desc=desc,
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 551, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/anaconda3/envs/py37/lib/python3.7/site-packages/datasets/fingerprint.py", line 458, in wrapper
out = func(self, *args, **kwargs)
File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2768, in _map_single
offset=offset,
File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2644, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "anaconda3/envs/py37/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2336, in decorated
result = f(decorated_item, *args, **kwargs)
File "run_qa.py", line 410, in prepare_train_features
padding=padding,
File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2512, in __call__
**kwargs,
File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2703, in batch_encode_plus
**kwargs,
File "anaconda3/envs/py37/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 429, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: assertion failed: stride < max_len
Traceback (most recent call last):
File "./data/SQuADv1.1/evaluate-v1.1.py", line 92, in <module>
with open(args.prediction_file) as prediction_file:
FileNotFoundError: [Errno 2] No such file or directory: './output/bert_base_squadv1.1_tf2/eval_predictions.json'
```
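The Rust panic `stride < max_len` suggests the overflow stride is compared against the token budget left for the context after the question and special tokens are accounted for — so with `max_seq_length=128`, a long enough question makes `doc_stride=96` exceed that budget. A rough sketch of the arithmetic (the 3 special tokens assume a BERT-style `[CLS] question [SEP] context [SEP]` layout, which is an assumption):

```python
def context_budget(max_seq_length: int, question_len: int,
                   num_special_tokens: int = 3) -> int:
    # Tokens left over for the context window in each feature.
    return max_seq_length - question_len - num_special_tokens

# A 30-token question leaves only 95 tokens of context, so a
# doc_stride of 96 can no longer be strictly smaller than it:
budget = context_budget(128, question_len=30)
assert budget == 95
assert not (96 < budget)  # the tokenizer's `stride < max_len` check fails
```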
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Ubuntu, pytorch=1.11.0, tensorflow-gpu=2.9.1
- Python version: 2.7
- PyArrow version: 8.0.0
| false
|
1,321,913,645
|
https://api.github.com/repos/huggingface/datasets/issues/4768
|
https://github.com/huggingface/datasets/pull/4768
| 4,768
|
Unpin rouge_score test dependency
|
closed
| 1
| 2022-07-29T08:17:40
| 2022-07-29T16:42:28
| 2022-07-29T16:29:17
|
albertvillanova
|
[] |
Once `rouge-score` has made the 0.1.2 release to fix their issue https://github.com/google-research/google-research/issues/1212, we can unpin it.
Related to:
- #4735
| true
|
1,321,843,538
|
https://api.github.com/repos/huggingface/datasets/issues/4767
|
https://github.com/huggingface/datasets/pull/4767
| 4,767
|
Add 2.4.0 version added to docstrings
|
closed
| 1
| 2022-07-29T07:01:56
| 2022-07-29T11:16:49
| 2022-07-29T11:03:58
|
albertvillanova
|
[] | null | true
|
1,321,787,428
|
https://api.github.com/repos/huggingface/datasets/issues/4765
|
https://github.com/huggingface/datasets/pull/4765
| 4,765
|
Fix version in map_nested docstring
|
closed
| 1
| 2022-07-29T05:44:32
| 2022-07-29T11:51:25
| 2022-07-29T11:38:36
|
albertvillanova
|
[] |
After the latest release, the `map_nested` docstring needs to be updated with the right version for versionchanged and versionadded.
| true
|
1,321,295,961
|
https://api.github.com/repos/huggingface/datasets/issues/4764
|
https://github.com/huggingface/datasets/pull/4764
| 4,764
|
Update CI badge
|
closed
| 1
| 2022-07-28T18:04:20
| 2022-07-29T11:36:37
| 2022-07-29T11:23:51
|
mariosasko
|
[] |
Replace the old CircleCI badge with a new one for GH Actions.
| true
|
1,321,295,876
|
https://api.github.com/repos/huggingface/datasets/issues/4763
|
https://github.com/huggingface/datasets/pull/4763
| 4,763
|
More rigorous shape inference in to_tf_dataset
|
closed
| 1
| 2022-07-28T18:04:15
| 2022-09-08T19:17:54
| 2022-09-08T19:15:41
|
Rocketknight1
|
[] |
`tf.data` needs to know the shape of tensors emitted from a `tf.data.Dataset`. Although `None` dimensions are possible, overusing them can cause problems - Keras uses the dataset tensor spec at compile-time, and so saying that a dimension is `None` when it's actually constant can hurt performance, or even cause training to fail for dimensions that are needed to determine the shape of weight tensors!
The compromise I used here was to sample several batches from the underlying dataset and apply the `collate_fn` to them, and then to see which dimensions were "empirically variable". There's an obvious problem here, though - if you sample 10 batches and they all have the same shape on a certain dimension, there's still a small chance that the 11th batch will be different, and Keras will throw an error if a dataset tries to emit a tensor whose shape doesn't match the spec.
I encountered this bug in practice once or twice for datasets that were mostly-but-not-totally constant on a given dimension, and I still don't have a perfect solution, but this PR should greatly reduce the risk. It samples many more batches, and also samples very small batches (size 2) - this increases the variability, making it more likely that a few outlier samples will be detected.
Ideally, of course, we'd determine the full output shape analytically, but that's surprisingly tricky when the `collate_fn` can be any arbitrary Python code!
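The "empirically variable" detection described above can be sketched in plain Python (a hypothetical helper, not the actual `datasets` implementation): collect the per-sample shapes seen across sampled batches and mark any dimension that disagrees as unknown.

```python
def infer_variable_dims(batches):
    """Given batches of per-sample shapes for one tensor, return the spec
    with empirically variable dimensions replaced by None."""
    shapes = [shape for batch in batches for shape in batch]
    ref = list(shapes[0])
    for shape in shapes[1:]:
        for i, dim in enumerate(shape):
            if ref[i] is not None and ref[i] != dim:
                ref[i] = None  # dimension varies across samples -> unknown size
    return tuple(ref)
```

Sampling many small batches makes it more likely that an outlier shape shows up and the dimension is correctly marked `None`.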
| true
|
1,321,261,733
|
https://api.github.com/repos/huggingface/datasets/issues/4762
|
https://github.com/huggingface/datasets/pull/4762
| 4,762
|
Improve features resolution in streaming
|
closed
| 2
| 2022-07-28T17:28:11
| 2022-09-09T17:17:39
| 2022-09-09T17:15:30
|
lhoestq
|
[] |
`IterableDataset._resolve_features` was returning the features sorted alphabetically by column name, which is not consistent with non-streaming. I changed this and used the order of columns from the data themselves. It was causing some inconsistencies in the dataset viewer as well.
I also fixed `interleave_datasets` that was not filling missing columns with None, because it was not using the columns from `IterableDataset._resolve_features`
cc @severo
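The two fixes can be illustrated with a small pure-Python sketch (hypothetical helper names; the real logic lives in `IterableDataset._resolve_features` and `interleave_datasets`):

```python
def resolve_columns(examples):
    """Collect column names in the order they first appear in the data,
    rather than sorted alphabetically."""
    columns = []
    for example in examples:
        for name in example:
            if name not in columns:
                columns.append(name)
    return columns

def fill_missing(example, columns):
    """Fill absent columns with None, as interleave_datasets should."""
    return {name: example.get(name) for name in columns}
```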
| true
|
1,321,068,411
|
https://api.github.com/repos/huggingface/datasets/issues/4761
|
https://github.com/huggingface/datasets/issues/4761
| 4,761
|
parallel searching in multi-gpu setting using faiss
|
open
| 26
| 2022-07-28T14:57:03
| 2023-07-21T02:07:10
| null |
Jiaxin-Wen
|
[] |
While I notice that `add_faiss_index` has supported assigning multiple GPUs, I am still confused about how it works.
Does the `search_batch` function automatically parallelize the input queries across the different GPUs?
https://github.com/huggingface/datasets/blob/d76599bdd4d186b2e7c4f468b05766016055a0a5/src/datasets/search.py#L360
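For reference, one way such a query batch *could* be distributed — a pure-Python sketch of even chunking across devices, not faiss's actual internals:

```python
def split_queries(queries, num_gpus):
    """Split a query batch into near-equal chunks, one per GPU."""
    chunk = (len(queries) + num_gpus - 1) // num_gpus  # ceiling division
    return [queries[i:i + chunk] for i in range(0, len(queries), chunk)]
```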
| false
|
1,320,878,223
|
https://api.github.com/repos/huggingface/datasets/issues/4760
|
https://github.com/huggingface/datasets/issues/4760
| 4,760
|
Issue with offline mode
|
closed
| 17
| 2022-07-28T12:45:14
| 2025-05-04T16:44:59
| 2024-01-23T10:58:22
|
SaulLu
|
[
"bug"
] |
## Describe the bug
I can't retrieve a cached dataset with offline mode enabled
## Steps to reproduce the bug
To reproduce my issue, first, you'll need to run a script that will cache the dataset
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "0"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
then, you can try to reload it in offline mode:
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"
import datasets
datasets.logging.set_verbosity_info()
ds_name = "SaulLu/toy_struc_dataset"
ds = datasets.load_dataset(ds_name)
print(ds)
```
## Expected results
I would have expected the 2nd snippet not to return any errors
## Actual results
The 2nd snippet returns:
```
Traceback (most recent call last):
File "/home/lucile_huggingface_co/sandbox/evaluate/test_cache_datasets.py", line 8, in <module>
ds = datasets.load_dataset(ds_name)
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1723, in load_dataset
builder_instance = load_dataset_builder(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1500, in load_dataset_builder
dataset_module = dataset_module_factory(
File "/home/lucile_huggingface_co/anaconda3/envs/evaluate-dev/lib/python3.8/site-packages/datasets/load.py", line 1241, in dataset_module_factory
raise ConnectionError(f"Couln't reach the Hugging Face Hub for dataset '{path}': {e1}") from None
ConnectionError: Couln't reach the Hugging Face Hub for dataset 'SaulLu/toy_struc_dataset': Offline mode is enabled.
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.17
- Python version: 3.8.13
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
Maybe I'm misunderstanding something in the use of the offline mode (see [doc](https://huggingface.co/docs/datasets/v2.4.0/en/loading#offline)), is that the case?
| false
|
1,320,783,300
|
https://api.github.com/repos/huggingface/datasets/issues/4759
|
https://github.com/huggingface/datasets/issues/4759
| 4,759
|
Dataset Viewer issue for Toygar/turkish-offensive-language-detection
|
closed
| 1
| 2022-07-28T11:21:43
| 2022-07-28T13:17:56
| 2022-07-28T13:17:48
|
tanyelai
|
[
"dataset-viewer"
] |
### Link
https://huggingface.co/datasets/Toygar/turkish-offensive-language-detection
### Description
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
Hi, I provided train.csv, test.csv and valid.csv files. However, the viewer says the dataset does not exist.
Do I need to do anything else?
### Owner
Yes
| false
|
1,320,602,532
|
https://api.github.com/repos/huggingface/datasets/issues/4757
|
https://github.com/huggingface/datasets/issues/4757
| 4,757
|
Document better when relative paths are transformed to URLs
|
closed
| 0
| 2022-07-28T08:46:27
| 2022-08-25T18:34:24
| 2022-08-25T18:34:24
|
albertvillanova
|
[
"documentation"
] |
As discussed with @ydshieh, when passing a relative path as `data_dir` to `load_dataset` of a dataset hosted on the Hub, the relative path is transformed to the corresponding URL of the Hub dataset.
Currently, we mention this in our docs here: [Create a dataset loading script > Download data files and organize splits](https://huggingface.co/docs/datasets/v2.4.0/en/dataset_script#download-data-files-and-organize-splits)
> If the data files live in the same folder or repository of the dataset script, you can just pass the relative paths to the files instead of URLs.
Maybe we should document better how relative paths are handled, not only when creating a dataset loading script, but also when passing to `load_dataset`:
- `data_dir`
- `data_files`
CC: @stevhliu
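A minimal sketch of the resolution rule being documented (a hypothetical helper; the URL template mirrors the Hub's `resolve` endpoints but is not the actual `datasets` code):

```python
HUB_TEMPLATE = "https://huggingface.co/datasets/{repo_id}/resolve/{revision}/{path}"

def resolve_data_path(repo_id, path, revision="main"):
    """Map a relative data_dir/data_files entry of a Hub-hosted dataset
    to the corresponding Hub URL; absolute URLs are left untouched."""
    if path.startswith(("http://", "https://")):
        return path
    return HUB_TEMPLATE.format(repo_id=repo_id, revision=revision, path=path)
```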
| false
|
1,319,687,044
|
https://api.github.com/repos/huggingface/datasets/issues/4755
|
https://github.com/huggingface/datasets/issues/4755
| 4,755
|
Datasets.map causes incorrect overflow_to_sample_mapping when used with tokenizers and small batch size
|
open
| 3
| 2022-07-27T14:54:11
| 2023-12-13T19:34:43
| null |
srobertjames
|
[
"bug"
] |
## Describe the bug
When using `tokenizer`, we can retrieve the field `overflow_to_sample_mapping`, since long samples will be overflown into multiple token sequences.
However, when tokenizing is done via `Dataset.map`, with `num_proc > 1`, the `overflow_to_sample_mapping` field is wrong. This seems to be because each tokenizer process only looks at its share of the samples, and maps to the index _within its share_, but then `Dataset.map` collates them together.
## Steps to reproduce the bug
1. Make a dataset of 3 strings.
2. Tokenize via Dataset.map with num_proc = 8
3. Inspect the `overflow_to_sample_mapping` field
## Expected results
`[0, 1, 2]`
## Actual results
`[0, 0, 0]`
Notes:
1. I have not yet extracted a minimal example, but the above works reliably
2. If the dataset is large, I've yet to determine if this bug still happens: (a) not at all, (b) always, or (c) only on the small leftover batch at the end.
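The expected fix amounts to re-offsetting each worker's local mapping by the number of samples handled by earlier workers — sketched here in pure Python (a hypothetical helper, not part of `datasets`):

```python
def merge_overflow_mappings(shard_mappings, shard_sizes):
    """Re-offset per-shard overflow_to_sample_mapping values so they point
    to global sample indices after Dataset.map collates the shards."""
    merged, offset = [], 0
    for mapping, size in zip(shard_mappings, shard_sizes):
        merged.extend(idx + offset for idx in mapping)
        offset += size
    return merged
```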
| false
|
1,319,681,541
|
https://api.github.com/repos/huggingface/datasets/issues/4754
|
https://github.com/huggingface/datasets/pull/4754
| 4,754
|
Remove "unkown" language tags
|
closed
| 1
| 2022-07-27T14:50:12
| 2022-07-27T15:03:00
| 2022-07-27T14:51:06
|
lhoestq
|
[] |
Following https://github.com/huggingface/datasets/pull/4753, there was still an "unknown" language tag in `wikipedia`, so the job at https://github.com/huggingface/datasets/runs/7542567336?check_suite_focus=true failed for wikipedia
| true
|
1,319,571,745
|
https://api.github.com/repos/huggingface/datasets/issues/4753
|
https://github.com/huggingface/datasets/pull/4753
| 4,753
|
Add `language_bcp47` tag
|
closed
| 1
| 2022-07-27T13:31:16
| 2022-07-27T14:50:03
| 2022-07-27T14:37:56
|
lhoestq
|
[] |
Following (internal) https://github.com/huggingface/moon-landing/pull/3509, we need to move the bcp47 tags to `language_bcp47` and keep the `language` tag for ISO 639 1-2-3 codes. In particular, I made sure that all the tags in `languages` are no longer than 3 characters. I moved the rest to `language_bcp47` and fixed some of them.
After this PR is merged I think we can simplify the language validation from the DatasetMetadata class (and keep it bare-bone just for the tagging app)
PS: the CI is failing because of missing content in dataset cards that are unrelated to this PR
| true
|
1,319,464,409
|
https://api.github.com/repos/huggingface/datasets/issues/4752
|
https://github.com/huggingface/datasets/issues/4752
| 4,752
|
DatasetInfo issue when testing multiple configs: mixed task_templates
|
open
| 3
| 2022-07-27T12:04:54
| 2022-08-08T18:20:50
| null |
BramVanroy
|
[
"bug"
] |
## Describe the bug
When running the `datasets-cli test` it would seem that some config properties in a DatasetInfo get mangled, leading to issues, e.g., about the ClassLabel.
## Steps to reproduce the bug
In summary, what I want to do is create three configs:
- unfiltered: no classlabel, no tasks. Gets data from unfiltered.json.gz (I'd want this without splits, just one chunk of data, but that does not seem possible?)
- filtered_sentiment: `review_sentiment` as ClassLabel, TextClassification task with `review_sentiment` as label. Gets train/test split from respective json.gz files
- filtered_rating: `review_rating0` as ClassLabel, TextClassification task with `review_rating0` as label. Gets train/test split from respective json.gz files
This might be a bit tedious to reproduce, so I am sorry, but these are the steps:
- Clone datasets -> `datasets/` and install it
- Clone `https://huggingface.co/datasets/BramVanroy/hebban-reviews` into `datasets/datasets` so that you have a new folder `datasets/datasets/hebban-reviews/`.
- Replace the HebbanReviews class with this new one:
```python
class HebbanReviews(datasets.GeneratorBasedBuilder):
"""The Hebban book reviews dataset."""
BUILDER_CONFIGS = [
HebbanReviewsConfig(
name="unfiltered",
description=_HEBBAN_REVIEWS_UNFILTERED_DESCRIPTION,
version=datasets.Version(_HEBBAN_VERSION)
),
HebbanReviewsConfig(
name="filtered_sentiment",
description=f"This config has the negative, neutral, and positive sentiment scores as ClassLabel in the 'review_sentiment' column.\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}",
version=datasets.Version(_HEBBAN_VERSION)
),
HebbanReviewsConfig(
name="filtered_rating",
description=f"This config has the 5-class ratings as ClassLabel in the 'review_rating0' column (which is a variant of 'review_rating' that starts counting from 0 instead of 1).\n{_HEBBAN_REVIEWS_FILTERED_DESCRIPTION}",
version=datasets.Version(_HEBBAN_VERSION)
)
]
DEFAULT_CONFIG_NAME = "filtered_sentiment"
_URLS = {
"train": "train.jsonl.gz",
"test": "test.jsonl.gz",
"unfiltered": "unfiltered.jsonl.gz",
}
def _info(self):
features = {
"review_title": datasets.Value("string"),
"review_text": datasets.Value("string"),
"review_text_without_quotes": datasets.Value("string"),
"review_n_quotes": datasets.Value("int32"),
"review_n_tokens": datasets.Value("int32"),
"review_rating": datasets.Value("int32"),
"review_rating0": datasets.Value("int32"),
"review_author_url": datasets.Value("string"),
"review_author_type": datasets.Value("string"),
"review_n_likes": datasets.Value("int32"),
"review_n_comments": datasets.Value("int32"),
"review_url": datasets.Value("string"),
"review_published_date": datasets.Value("string"),
"review_crawl_date": datasets.Value("string"),
"lid": datasets.Value("string"),
"lid_probability": datasets.Value("float32"),
"review_sentiment": datasets.features.ClassLabel(names=["negative", "neutral", "positive"]),
"review_sentiment_label": datasets.Value("string"),
"book_id": datasets.Value("int32"),
}
if self.config.name == "filtered_sentiment":
task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_sentiment")]
elif self.config.name == "filtered_rating":
# For CrossEntropy, our classes need to start at index 0 -- not 1
features["review_rating0"] = datasets.features.ClassLabel(names=["1", "2", "3", "4", "5"])
features["review_sentiment"] = datasets.Value("int32")
task_templates = [datasets.TextClassification(text_column="review_text_without_quotes", label_column="review_rating0")]
elif self.config.name == "unfiltered": # no ClassLabels in unfiltered
features["review_sentiment"] = datasets.Value("int32")
task_templates = None
else:
raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default),"
f" 'filtered_rating', or 'unfiltered'")
print("AT INFO", self.config.name, task_templates)
return datasets.DatasetInfo(
description=self.config.description,
features=datasets.Features(features),
homepage="https://huggingface.co/datasets/BramVanroy/hebban-reviews",
citation=_HEBBAN_REVIEWS_CITATION,
task_templates=task_templates,
license="cc-by-4.0"
)
def _split_generators(self, dl_manager):
if self.config.name.startswith("filtered"):
files = dl_manager.download_and_extract({"train": "train.jsonl.gz",
"test": "test.jsonl.gz"})
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"data_file": files["train"]
},
),
datasets.SplitGenerator(
name=datasets.Split.TEST,
gen_kwargs={
"data_file": files["test"]
},
),
]
elif self.config.name == "unfiltered":
files = dl_manager.download_and_extract({"train": "unfiltered.jsonl.gz"})
return [
datasets.SplitGenerator(
name=datasets.Split.TRAIN,
gen_kwargs={
"data_file": files["train"]
},
),
]
else:
raise ValueError(f"Unsupported config {self.config.name}. Expected one of 'filtered_sentiment' (default),"
f" 'filtered_rating', or 'unfiltered'")
def _generate_examples(self, data_file):
lines = Path(data_file).open(encoding="utf-8").readlines()
for line_idx, line in enumerate(lines):
row = json.loads(line)
yield line_idx, row
```
- finally, run `datasets-cli test ./datasets/hebban-reviews/ --save_infos --all_configs` from within the topmost `datasets` directory
## Expected results
Succeeding tests for three different configs.
## Actual results
I printed out the values that are given to `DatasetInfo` for config name and task_templates, as you can see. There, as expected, I get `unfiltered None`. I also modified datasets/info.py and added this line [at L.170](https://github.com/huggingface/datasets/blob/f5847a304aa1b38b3a3c54a8318b4df60f1299bc/src/datasets/info.py#L170):
```python
print("INTERNALLY AT INFO.PY", self.config_name, self.task_templates)
```
to my surprise, here I get `unfiltered [TextClassification(task='text-classification', text_column='review_text_without_quotes', label_column='review_sentiment')]`. So one way or another, here I suddenly see that `unfiltered` now does have a task_template -- even though that is not what is written in the data loading script, as the first print statement correctly shows.
I do not quite understand how, but it seems that the config name and task_templates get mixed.
This ultimately leads to the following error, but this trace may not be very useful in itself:
```
Traceback (most recent call last):
File "C:\Users\bramv\.virtualenvs\hebban-U6poXNQd\Scripts\datasets-cli-script.py", line 33, in <module>
sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
File "c:\dev\python\hebban\datasets\src\datasets\commands\datasets_cli.py", line 39, in main
service.run()
File "c:\dev\python\hebban\datasets\src\datasets\commands\test.py", line 144, in run
builder.as_dataset()
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 899, in as_dataset
datasets = map_nested(
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 393, in map_nested
mapped = [
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 394, in <listcomp>
_single_map_nested((function, obj, types, None, True, None))
File "c:\dev\python\hebban\datasets\src\datasets\utils\py_utils.py", line 330, in _single_map_nested
return function(data_struct)
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 930, in _build_single_dataset
ds = self._as_dataset(
File "c:\dev\python\hebban\datasets\src\datasets\builder.py", line 1006, in _as_dataset
return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File "c:\dev\python\hebban\datasets\src\datasets\arrow_dataset.py", line 661, in __init__
info = info.copy() if info is not None else DatasetInfo()
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 286, in copy
return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
File "<string>", line 20, in __init__
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 176, in __post_init__
self.task_templates = [
File "c:\dev\python\hebban\datasets\src\datasets\info.py", line 177, in <listcomp>
template.align_with_features(self.features) for template in (self.task_templates)
File "c:\dev\python\hebban\datasets\src\datasets\tasks\text_classification.py", line 22, in align_with_features
raise ValueError(f"Column {self.label_column} is not a ClassLabel.")
ValueError: Column review_sentiment is not a ClassLabel.
```
## Environment info
- `datasets` version: 2.4.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| false
|
1,319,440,903
|
https://api.github.com/repos/huggingface/datasets/issues/4751
|
https://github.com/huggingface/datasets/pull/4751
| 4,751
|
Added dataset information in clinic oos dataset card
|
closed
| 1
| 2022-07-27T11:44:28
| 2022-07-28T10:53:21
| 2022-07-28T10:40:37
|
arnav-ladkat
|
[] |
This PR aims to add relevant information, such as the description, language, and citation information, to the clinic oos dataset card.
| true
|
1,319,333,645
|
https://api.github.com/repos/huggingface/datasets/issues/4750
|
https://github.com/huggingface/datasets/issues/4750
| 4,750
|
Easily create loading script for benchmark comprising multiple huggingface datasets
|
closed
| 2
| 2022-07-27T10:13:38
| 2022-07-27T13:58:07
| 2022-07-27T13:58:07
|
JoelNiklaus
|
[] |
Hi,
I would like to create a loading script for a benchmark comprising multiple huggingface datasets.
The function _split_generators needs to return the files for the respective dataset. However, the files are not always in the same location for each dataset. I want to just make a wrapper dataset that provides a single interface to all the underlying datasets.
I thought about downloading the files with the load_dataset function and then providing the link to the cached file. But this seems a bit inelegant to me. What approach would you propose to do this?
Please let me know if you have any questions.
Cheers,
Joel
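One pragmatic pattern (sketched below with hypothetical names) is to keep a registry mapping each benchmark config to the underlying Hub dataset and split, and resolve it inside the builder — `_split_generators` can then call `load_dataset` on the resolved id instead of tracking per-dataset file locations:

```python
# Hypothetical registry: benchmark config name -> (hub dataset id, split)
SUBSETS = {
    "task_a": ("user/dataset_a", "train"),
    "task_b": ("user/dataset_b", "validation"),
}

def resolve_subset(config_name):
    """Return the underlying dataset id and split for a benchmark config."""
    if config_name not in SUBSETS:
        raise ValueError(f"Unknown config {config_name!r}")
    return SUBSETS[config_name]
```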
| false
|
1,318,874,913
|
https://api.github.com/repos/huggingface/datasets/issues/4748
|
https://github.com/huggingface/datasets/pull/4748
| 4,748
|
Add image classification processing guide
|
closed
| 1
| 2022-07-27T00:11:11
| 2022-07-27T17:28:21
| 2022-07-27T17:16:12
|
stevhliu
|
[
"documentation"
] |
This PR follows up on #4710 to separate the object detection and image classification guides. It expands a little more on the original guide to include a more complete example of loading and transforming a whole dataset.
| true
|
1,318,586,932
|
https://api.github.com/repos/huggingface/datasets/issues/4747
|
https://github.com/huggingface/datasets/pull/4747
| 4,747
|
Shard parquet in `download_and_prepare`
|
closed
| 2
| 2022-07-26T18:05:01
| 2022-09-15T13:43:55
| 2022-09-15T13:41:26
|
lhoestq
|
[] |
Following https://github.com/huggingface/datasets/pull/4724 (needs to be merged first)
It's good practice to shard parquet files to enable parallelism with spark/dask/etc.
I added the `max_shard_size` parameter to `download_and_prepare` (default to 500MB for parquet, and None for arrow).
```python
from datasets import *
output_dir = "./output_dir" # also supports "s3://..."
builder = load_dataset_builder("squad")
builder.download_and_prepare(output_dir, file_format="parquet", max_shard_size="5MB")
```
### Implementation details
The examples are written to a parquet file until `ParquetWriter._num_bytes > max_shard_size`. When this happens, a new writer is instantiated to start writing the next shard. At the end, all the shards are renamed to include the total number of shards in their names: `{builder.name}-{split}-{shard_id:05d}-of-{num_shards:05d}.parquet`
I also added the `MAX_SHARD_SIZE` config variable (default to 500MB)
TODO:
- [x] docstrings
- [x] docs
- [x] tests
cc @severo
| true
|
1,318,486,599
|
https://api.github.com/repos/huggingface/datasets/issues/4746
|
https://github.com/huggingface/datasets/issues/4746
| 4,746
|
Dataset Viewer issue for yanekyuk/wikikey
|
closed
| 2
| 2022-07-26T16:25:16
| 2022-09-08T08:15:22
| 2022-09-08T08:15:22
|
ai-ashok
|
[
"dataset-viewer"
] |
### Link
_No response_
### Description
_No response_
### Owner
_No response_
| false
|
1,318,016,655
|
https://api.github.com/repos/huggingface/datasets/issues/4745
|
https://github.com/huggingface/datasets/issues/4745
| 4,745
|
Allow `list_datasets` to include private datasets
|
closed
| 4
| 2022-07-26T10:16:08
| 2023-07-25T15:01:49
| 2023-07-25T15:01:49
|
ola13
|
[
"enhancement"
] |
I am working with a large collection of private datasets, it would be convenient for me to be able to list them.
I would envision extending the convention of using `use_auth_token` keyword argument to `list_datasets` function, then calling:
```
list_datasets(use_auth_token="my_token")
```
would return the list of all datasets I have permissions to view, including private ones. The only current alternative I see is to use the hub website to manually obtain the list of dataset names - this is in the context of BigScience where respective private spaces contain hundreds of datasets, so not very convenient to list manually.
| false
|
1,317,822,345
|
https://api.github.com/repos/huggingface/datasets/issues/4744
|
https://github.com/huggingface/datasets/issues/4744
| 4,744
|
Remove instructions to generate dummy data from our docs
|
closed
| 2
| 2022-07-26T07:32:58
| 2022-08-02T23:50:30
| 2022-08-02T23:50:30
|
albertvillanova
|
[
"documentation"
] |
In our docs, we instruct users to generate the dummy data: https://huggingface.co/docs/datasets/dataset_script#testing-data-and-checksum-metadata
However:
- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI
- for datasets on the Hub:
- they do not pass any CI test requiring dummy data
- there are no instructions on how they can test their dataset locally using the dummy data
- the generation of the dummy data assumes our GitHub directory structure:
- the dummy data will be generated under `./datasets/<dataset_name>/dummy` even if locally there is no `./datasets` directory (which is the usual case). See issue:
- #4742
CC: @stevhliu
| false
|
1,317,362,561
|
https://api.github.com/repos/huggingface/datasets/issues/4743
|
https://github.com/huggingface/datasets/pull/4743
| 4,743
|
Update map docs
|
closed
| 1
| 2022-07-25T20:59:35
| 2022-07-27T16:22:04
| 2022-07-27T16:10:04
|
stevhliu
|
[
"documentation"
] |
This PR updates the `map` docs for processing text to include `return_tensors="np"` to make it run faster (see #4676).
| true
|
1,317,260,663
|
https://api.github.com/repos/huggingface/datasets/issues/4742
|
https://github.com/huggingface/datasets/issues/4742
| 4,742
|
Dummy data nowhere to be found
|
closed
| 3
| 2022-07-25T19:18:42
| 2022-11-04T14:04:24
| 2022-11-04T14:04:10
|
BramVanroy
|
[
"bug"
] |
## Describe the bug
To finalize my dataset, I wanted to create dummy data as per the guide and I ran
```shell
datasets-cli dummy_data datasets/hebban-reviews --auto_generate
```
where hebban-reviews is [this repo](https://huggingface.co/datasets/BramVanroy/hebban-reviews). And even though the script runs and shows a message at the end that it succeeded, I cannot find the dummy data anywhere. Where is it?
## Expected results
To see the dummy data in the datasets' folder or in the folder where I ran the command.
## Actual results
I see the following message but I cannot find the dummy data anywhere.
```
Dummy data generation done and dummy data test succeeded for config 'filtered''.
Automatic dummy data generation succeeded for all configs of '.\datasets\hebban-reviews\'
```
## Environment info
- `datasets` version: 2.4.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.8.8
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| false
|
1,316,621,272
|
https://api.github.com/repos/huggingface/datasets/issues/4741
|
https://github.com/huggingface/datasets/pull/4741
| 4,741
|
Fix to dict conversion of `DatasetInfo`/`Features`
|
closed
| 1
| 2022-07-25T10:41:27
| 2022-07-25T12:50:36
| 2022-07-25T12:37:53
|
mariosasko
|
[] |
Fix #4681
| true
|
1,316,478,007
|
https://api.github.com/repos/huggingface/datasets/issues/4740
|
https://github.com/huggingface/datasets/pull/4740
| 4,740
|
Fix multiprocessing in map_nested
|
closed
| 3
| 2022-07-25T08:44:19
| 2022-07-28T10:53:23
| 2022-07-28T10:40:31
|
albertvillanova
|
[] |
As previously discussed:
Before, multiprocessing was not used in `map_nested` if `num_proc` was greater than or equal to `len(iterable)`.
- Multiprocessing was not used e.g. when passing `num_proc=20` but having 19 files to download
- As by default, `DownloadManager` sets `num_proc=16`, before multiprocessing was only used when `len(iterable)>16` by default
Now, if `num_proc` is greater than or equal to `len(iterable)`, `num_proc` is set to `len(iterable)` and multiprocessing is used.
- We pass the variable `parallel_min_length=16`, so that multiprocessing is only used if at least 16 files to be downloaded
- ~As by default, `DownloadManager` sets `num_proc=16`, now multiprocessing is used when `len(iterable)>1` by default~
See discussion below.
~After having had to fix some tests (87602ac), I am wondering:~
- ~do we want to have multiprocessing by default?~
- ~please note that `DownloadManager.download` sets `num_proc=16` by default~
- ~or would it be better to ask the user to set it explicitly if they want multiprocessing (and default to `num_proc=1`)?~
Fix #4636.
CC: @nateraw
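The resulting decision logic can be summarized in a small sketch (a hypothetical helper, not the exact `map_nested` code):

```python
def decide_parallelism(num_items, num_proc, parallel_min_length=16):
    """Return (use_multiprocessing, effective_num_proc) following the rules
    described above: clamp num_proc to the number of items, and only
    parallelize when there are at least parallel_min_length items."""
    if num_proc is None or num_proc <= 1 or num_items < parallel_min_length:
        return False, 1
    return True, min(num_proc, num_items)
```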
| true
|
1,316,400,915
|
https://api.github.com/repos/huggingface/datasets/issues/4739
|
https://github.com/huggingface/datasets/pull/4739
| 4,739
|
Deprecate metrics
|
closed
| 4
| 2022-07-25T07:35:55
| 2022-07-28T11:44:27
| 2022-07-28T11:32:16
|
albertvillanova
|
[] |
Deprecate metrics:
- deprecate public functions: `load_metric`, `list_metrics` and `inspect_metric`: docstring and warning
- test that deprecation warnings are issued
- deprecate metrics in all docs
- remove mentions to metrics in docs and README
- deprecate internal functions/classes
Maybe we should also stop testing metrics?
| true
|
1,315,222,166
|
https://api.github.com/repos/huggingface/datasets/issues/4738
|
https://github.com/huggingface/datasets/pull/4738
| 4,738
|
Use CI unit/integration tests
|
closed
| 2
| 2022-07-22T16:48:00
| 2022-07-26T20:19:22
| 2022-07-26T20:07:05
|
albertvillanova
|
[] |
This PR:
- Implements separate unit/integration tests
- A fail in integration tests does not cancel the rest of the jobs
- We should implement more robust integration tests: work in progress in a subsequent PR
- For the moment, test involving network requests are marked as integration: to be evolved
| true
|
1,315,011,004
|
https://api.github.com/repos/huggingface/datasets/issues/4737
|
https://github.com/huggingface/datasets/issues/4737
| 4,737
|
Download error on scene_parse_150
|
closed
| 2
| 2022-07-22T13:28:28
| 2022-09-01T15:37:11
| 2022-09-01T15:37:11
|
juliensimon
|
[
"bug"
] |
```
from datasets import load_dataset
dataset = load_dataset("scene_parse_150", "scene_parsing")
FileNotFoundError: Couldn't find file at http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
```
| false
|
1,314,931,996
|
https://api.github.com/repos/huggingface/datasets/issues/4736
|
https://github.com/huggingface/datasets/issues/4736
| 4,736
|
Dataset Viewer issue for deepklarity/huggingface-spaces-dataset
|
closed
| 1
| 2022-07-22T12:14:18
| 2022-07-22T13:46:38
| 2022-07-22T13:46:38
|
dk-crazydiv
|
[
"dataset-viewer"
] |
### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on an uploaded dataset, and the status has stayed the same for a couple of hours now. The dataset size is `<1MB` and the format is CSV, so I'm not sure whether it's supposed to take this much time or not.
```
Status code: 400
Exception: Status400Error
Message: The split is being processed. Retry later.
```
Is there any explicit step to be taken to get the viewer to work?
### Owner
Yes
| false
|
1,314,501,641
|
https://api.github.com/repos/huggingface/datasets/issues/4735
|
https://github.com/huggingface/datasets/pull/4735
| 4,735
|
Pin rouge_score test dependency
|
closed
| 1
| 2022-07-22T07:18:21
| 2022-07-22T07:58:14
| 2022-07-22T07:45:18
|
albertvillanova
|
[] |
Temporarily pin `rouge_score` (to avoid latest version 0.7.0) until the issue is fixed.
Fix #4734
| true
|
1,314,495,382
|
https://api.github.com/repos/huggingface/datasets/issues/4734
|
https://github.com/huggingface/datasets/issues/4734
| 4,734
|
Package rouge-score cannot be imported
|
closed
| 1
| 2022-07-22T07:15:05
| 2022-07-22T07:45:19
| 2022-07-22T07:45:18
|
albertvillanova
|
[
"bug"
] |
## Describe the bug
After today's release of `rouge_score` 0.0.7, it seems to be no longer importable. Our CI fails: https://github.com/huggingface/datasets/runs/7463218591?check_suite_focus=true
```
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_class_bigbench
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_builder_configs_bigbench
FAILED tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_bigbench
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_rouge
```
with errors:
```
> from rouge_score import rouge_scorer
E ModuleNotFoundError: No module named 'rouge_score'
```
```
E ImportError: To be able to use rouge, you need to install the following dependency: rouge_score.
E Please install it using 'pip install rouge_score' for instance'
```
| false
|
1,314,479,616
|
https://api.github.com/repos/huggingface/datasets/issues/4733
|
https://github.com/huggingface/datasets/issues/4733
| 4,733
|
rouge metric
|
closed
| 1
| 2022-07-22T07:06:51
| 2022-07-22T09:08:02
| 2022-07-22T09:05:35
|
asking28
|
[
"bug"
] |
## Describe the bug
A clear and concise description of what the bug is.
Loading the Rouge metric gives an error after the latest rouge-score==0.0.7 release.
Downgrading to rouge-score==0.0.4 works fine.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
A clear and concise description of the expected results.
from rouge_score import rouge_scorer, scoring
should run
## Actual results
Specify the actual results or traceback.
File "/root/.cache/huggingface/modules/datasets_modules/metrics/rouge/0ffdb60f436bdb8884d5e4d608d53dbe108e82dac4f494a66f80ef3f647c104f/rouge.py", line 21, in <module>
from rouge_score import rouge_scorer, scoring
ImportError: cannot import name 'rouge_scorer' from 'rouge_score' (unknown location)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version:3.9
- PyArrow version:
| false
|
1,314,371,566
|
https://api.github.com/repos/huggingface/datasets/issues/4732
|
https://github.com/huggingface/datasets/issues/4732
| 4,732
|
Document better that loading a dataset passing its name does not use the local script
|
closed
| 3
| 2022-07-22T06:07:31
| 2022-08-23T16:32:23
| 2022-08-23T16:32:23
|
albertvillanova
|
[
"documentation"
] |
As reported by @TrentBrick here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191858596, it could be more clear that loading a dataset by passing its name does not use the (modified) local script of it.
What he did:
- he installed `datasets` from source
- he modified locally `datasets/the_pile/the_pile.py` loading script
- he tried to load it but using `load_dataset("the_pile")` instead of `load_dataset("datasets/the_pile")`
- as explained here https://github.com/huggingface/datasets/issues/4725#issuecomment-1191040245:
- the former does not use the local script, but instead it downloads a copy of `the_pile.py` from our GitHub, caches it locally (inside `~/.cache/huggingface/modules`) and uses that.
He suggests adding a clearer explanation about this, maybe in [Installation > source](https://huggingface.co/docs/datasets/installation).
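A toy sketch of the resolution rule that such an explanation could illustrate (simplified, and not the actual `load_dataset` implementation):

```python
from pathlib import Path
import tempfile

# Simplified stand-in for how `load_dataset` treats its first argument:
# an existing local path uses your (possibly modified) script, while a bare
# name resolves to the canonical script downloaded from GitHub and cached.
def resolve_script(path_or_name: str) -> str:
    if Path(path_or_name).exists():
        return f"local script: {path_or_name}"
    return f"canonical script for '{path_or_name}' (downloaded and cached)"

print(resolve_script("the_pile"))  # bare name: canonical script, local edits ignored

with tempfile.TemporaryDirectory() as d:
    script = Path(d) / "the_pile.py"
    script.write_text("# modified local script")
    print(resolve_script(str(script)))  # explicit path: the local copy is used
```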
CC: @stevhliu
| false
|
1,313,773,348
|
https://api.github.com/repos/huggingface/datasets/issues/4731
|
https://github.com/huggingface/datasets/pull/4731
| 4,731
|
docs: ✏️ fix TranslationVariableLanguages example
|
closed
| 1
| 2022-07-21T20:35:41
| 2022-07-22T07:01:00
| 2022-07-22T06:48:42
|
severo
|
[] | null | true
|
1,313,421,263
|
https://api.github.com/repos/huggingface/datasets/issues/4730
|
https://github.com/huggingface/datasets/issues/4730
| 4,730
|
Loading imagenet-1k validation split takes much more RAM than expected
|
closed
| 1
| 2022-07-21T15:14:06
| 2022-07-21T16:41:04
| 2022-07-21T16:41:04
|
fxmarty
|
[
"bug"
] |
## Describe the bug
Loading the validation split of imagenet-1k into memory takes much more RAM than expected. Assuming ImageNet-1k is 150 GB, with 50,000 validation images and 1,281,167 train images, I would expect only about 6 GB loaded in RAM.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation")
print(dataset)
"""prints
Dataset({
features: ['image', 'label'],
num_rows: 50000
})
"""
pipe_inputs = dataset["image"]
# and wait :-)
```
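The memory blow-up comes from `dataset["image"]` materializing every decoded image at once, while iterating row by row keeps memory bounded. A stdlib-only stand-in for that difference (not the `datasets` API):

```python
# `decode` stands in for an expensive image decode; a list comprehension keeps
# every decoded item in RAM at once, a generator yields one item at a time.
def decode(i: int) -> str:
    return f"decoded-image-{i}"

num_rows = 5
whole_column = [decode(i) for i in range(num_rows)]  # all rows in memory at once
lazy_rows = (decode(i) for i in range(num_rows))     # generator: O(1) memory

assert list(lazy_rows) == whole_column
```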
## Expected results
Use only < 10 GB RAM when loading the images.
## Actual results

```
Using custom data configuration default
Reusing dataset imagenet-1k (/home/fxmarty/.cache/huggingface/datasets/imagenet-1k/default/1.0.0/a1e9bfc56c3a7350165007d1176b15e9128fcaf9ab972147840529aed3ae52bc)
Killed
```
## Environment info
- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.15.0-41-generic-x86_64-with-glibc2.35
- Python version: 3.9.12
- PyArrow version: 7.0.0
- Pandas version: 1.3.5
- datasets commit: 4e4222f1b6362c2788aec0dd2cd8cede6dd17b80
| false
|
1,313,374,015
|
https://api.github.com/repos/huggingface/datasets/issues/4729
|
https://github.com/huggingface/datasets/pull/4729
| 4,729
|
Refactor Hub tests
|
closed
| 1
| 2022-07-21T14:43:13
| 2022-07-22T15:09:49
| 2022-07-22T14:56:29
|
albertvillanova
|
[] |
This PR refactors `test_upstream_hub` by removing unittests and using the following pytest Hub fixtures:
- `ci_hub_config`
- `set_ci_hub_access_token`: to replace setUp/tearDown
- `temporary_repo` context manager: to replace `try... finally`
- `cleanup_repo`: to delete repo accidentally created if one of the tests fails
This is preliminary work to manage unit/integration tests separately.
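A minimal sketch of what a `temporary_repo` context manager can look like (names and bodies are assumptions, not the fixture's actual code; Hub calls are stubbed out):

```python
from contextlib import contextmanager

created_repos = []  # stand-in for repos existing on the Hub

@contextmanager
def temporary_repo(repo_id: str):
    created_repos.append(repo_id)      # stand-in for creating the repo on the Hub
    try:
        yield repo_id                  # the test body runs here
    finally:
        created_repos.remove(repo_id)  # cleanup runs even if the test fails

with temporary_repo("user/tmp-test-repo") as repo_id:
    assert repo_id in created_repos

assert created_repos == []  # repo is gone after the block, success or failure
```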
| true
|
1,312,897,454
|
https://api.github.com/repos/huggingface/datasets/issues/4728
|
https://github.com/huggingface/datasets/issues/4728
| 4,728
|
load_dataset gives "403" error when using Financial Phrasebank
|
closed
| 3
| 2022-07-21T08:43:32
| 2022-08-04T08:32:35
| 2022-08-04T08:32:35
|
rohitvincent
|
[] |
I tried both code snippets below to download the financial phrasebank dataset (https://huggingface.co/datasets/financial_phrasebank) with the sentences_allagree subset. However, both give a 403 error when executed from multiple machines, locally or in the cloud.
```
from datasets import load_dataset, DownloadMode
load_dataset(path='financial_phrasebank',name='sentences_allagree',download_mode=DownloadMode.FORCE_REDOWNLOAD)
```
```
from datasets import load_dataset, DownloadMode
load_dataset(path='financial_phrasebank',name='sentences_allagree')
```
**Error**
ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403)
| false
|
1,312,645,391
|
https://api.github.com/repos/huggingface/datasets/issues/4727
|
https://github.com/huggingface/datasets/issues/4727
| 4,727
|
Dataset Viewer issue for TheNoob3131/mosquito-data
|
closed
| 1
| 2022-07-21T05:24:48
| 2022-07-21T07:51:56
| 2022-07-21T07:45:01
|
thenerd31
|
[
"dataset-viewer"
] |
### Link
https://huggingface.co/datasets/TheNoob3131/mosquito-data/viewer/TheNoob3131--mosquito-data/test
### Description
Dataset preview not showing with large files. Says 'split cache is empty' even though there are train and test splits.
### Owner
_No response_
| false
|
1,312,082,175
|
https://api.github.com/repos/huggingface/datasets/issues/4726
|
https://github.com/huggingface/datasets/pull/4726
| 4,726
|
Fix broken link to the Hub
|
closed
| 1
| 2022-07-20T22:57:27
| 2022-07-21T14:33:18
| 2022-07-21T08:00:54
|
stevhliu
|
[] |
The Markdown link fails to render if it is on the same line as the `<span>`. This PR implements @mishig25's fix by using `<a href=" ">` instead.

| true
|
1,311,907,096
|
https://api.github.com/repos/huggingface/datasets/issues/4725
|
https://github.com/huggingface/datasets/issues/4725
| 4,725
|
the_pile datasets URL broken.
|
closed
| 5
| 2022-07-20T20:57:30
| 2022-07-22T06:09:46
| 2022-07-21T07:38:19
|
TrentBrick
|
[
"bug"
] |
https://github.com/huggingface/datasets/pull/3627 changed the Eleuther AI Pile dataset URL from https://the-eye.eu/ to https://mystic.the-eye.eu/ but the latter is now broken and the former works again.
Note that when I git clone the repo, use `pip install -e .`, and then edit the URL back, the codebase doesn't seem to use this edit; the mystic URL is apparently also cached somewhere else that I can't find.
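The "somewhere else" is the modules cache: dataset scripts are copied under `~/.cache/huggingface/modules` and that copy is what gets imported. A sketch to locate the cached copy (default path; assumes `HF_MODULES_CACHE` is not overridden):

```python
from pathlib import Path

# Default location of cached dataset scripts; deleting a cached copy forces
# re-resolution of the script on the next `load_dataset` call.
modules_cache = Path.home() / ".cache" / "huggingface" / "modules" / "datasets_modules"
pattern = "datasets/the_pile/*"
cached = sorted(modules_cache.glob(pattern)) if modules_cache.exists() else []
print(f"found {len(cached)} cached copies of the the_pile script")
```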
| false
|
1,311,127,404
|
https://api.github.com/repos/huggingface/datasets/issues/4724
|
https://github.com/huggingface/datasets/pull/4724
| 4,724
|
Download and prepare as Parquet for cloud storage
|
closed
| 8
| 2022-07-20T13:39:02
| 2022-09-05T17:27:25
| 2022-09-05T17:25:27
|
lhoestq
|
[] |
Download a dataset as Parquet in a cloud storage can be useful for streaming mode and to use with spark/dask/ray.
This PR adds support for `fsspec` URIs like `s3://...`, `gcs://...` etc. and adds the `file_format` argument to save as Parquet instead of Arrow:
```python
from datasets import *
cache_dir = "s3://..."
builder = load_dataset_builder("crime_and_punish", cache_dir=cache_dir)
builder.download_and_prepare(file_format="parquet")
```
EDIT: actually changed the API to
```python
from datasets import *
builder = load_dataset_builder("crime_and_punish")
builder.download_and_prepare("s3://...", file_format="parquet")
```
Credentials for cloud storage can be passed using the `storage_options` argument in `download_and_prepare`.
For consistency with the BeamBasedBuilder, I name the parquet files `{builder.name}-{split}-xxxxx-of-xxxxx.parquet`. I think this is fine since we'll need to implement parquet sharding after this PR, so that a dataset can be used efficiently with dask for example.
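A sketch of what passing credentials could look like (s3fs-style option names; the bucket and credentials are placeholders, and the `download_and_prepare` call is shown commented out since it needs network access):

```python
# s3fs-style credentials dict (option names depend on the target filesystem).
storage_options = {
    "key": "<aws-access-key-id>",
    "secret": "<aws-secret-access-key>",
}
output_dir = "s3://my-bucket/crime_and_punish"  # hypothetical bucket

# With network access and `datasets` installed this would be:
# from datasets import load_dataset_builder
# builder = load_dataset_builder("crime_and_punish")
# builder.download_and_prepare(output_dir, file_format="parquet",
#                              storage_options=storage_options)
print(f"would write parquet shards to {output_dir}")
```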
Note that images/audio files are not embedded yet in the parquet files; this will be added in a subsequent PR.
TODO:
- [x] docs
- [x] tests
| true
|
1,310,970,604
|
https://api.github.com/repos/huggingface/datasets/issues/4723
|
https://github.com/huggingface/datasets/pull/4723
| 4,723
|
Refactor conftest fixtures
|
closed
| 1
| 2022-07-20T12:15:22
| 2022-07-21T14:37:11
| 2022-07-21T14:24:18
|
albertvillanova
|
[] |
Previously, fixture modules `hub_fixtures` and `s3_fixtures`:
- were both in the root test directory
- were imported using `import *`
- as a side effect, the modules `os` and `pytest` were imported from `s3_fixtures` into `conftest`
This PR:
- puts both fixture modules in a dedicated directory `fixtures`
- renames both to: `fixtures.hub` and `fixtures.s3`
- imports them into `conftest` as plugins, using the `pytest_plugins`: this avoids the `import *`
- additionally creates a new fixture module `fixtures.files` with all file-related fixtures
| true
|
1,310,785,916
|
https://api.github.com/repos/huggingface/datasets/issues/4722
|
https://github.com/huggingface/datasets/pull/4722
| 4,722
|
Docs: Fix same-page haslinks
|
closed
| 1
| 2022-07-20T10:04:37
| 2022-07-20T17:02:33
| 2022-07-20T16:49:36
|
mishig25
|
[] |
`href="/docs/datasets/quickstart#audio"` implicitly resolves to `href="/docs/datasets/{$LATEST_STABLE_VERSION}/quickstart#audio"`. Therefore, the `#audio` hashlink at https://huggingface.co/docs/datasets/quickstart#audio does not work, since the new docs were not added to v2.3.2 (the latest stable version).
To preserve the version, it should be just `href="#audio"`, which implicitly resolves to the current page plus the `#audio` element.
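The two link forms side by side (paths as in the example above):

```html
<!-- Absolute doc path: implicitly resolves to the latest stable version -->
<a href="/docs/datasets/quickstart#audio">Audio</a>

<!-- Same-page hashlink: stays on the current page, preserving the version -->
<a href="#audio">Audio</a>
```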
| true
|