Dataset schema (one row per column; βŒ€ marks columns that contain null values):

| Column | Type | Values / range |
|---|---|---|
| id | int64 | 599M – 3.29B |
| url | string (length) | 58 – 61 |
| html_url | string (length) | 46 – 51 |
| number | int64 | 1 – 7.72k |
| title | string (length) | 1 – 290 |
| state | string (classes) | 2 values |
| comments | int64 | 0 – 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45, βŒ€ |
| user_login | string (length) | 3 – 26 |
| labels | list (length) | 0 – 4 |
| body | string (length) | 0 – 228k, βŒ€ |
| is_pull_request | bool | 2 classes |
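As a minimal sketch of how a dataset with this schema could be loaded and inspected with the `datasets` library (the repository id below is a placeholder, not the actual dataset name, and the filter is only an illustrative example):

```python
from datasets import load_dataset

# Placeholder repository id -- substitute the real dataset repo.
ds = load_dataset("some-org/github-issues", split="train")

# The inferred features should mirror the schema above
# (id, url, title, state, timestamps, labels, body, is_pull_request, ...).
print(ds.features)

# Illustrative example: keep closed issues that are not pull requests.
closed_issues = ds.filter(
    lambda row: row["state"] == "closed" and not row["is_pull_request"]
)
print(len(closed_issues))
```

The records that follow are individual rows from this dataset, one GitHub issue or pull request per record.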
1,292,797,878
https://api.github.com/repos/huggingface/datasets/issues/4620
https://github.com/huggingface/datasets/issues/4620
4,620
Data type is not recognized when using datetime.time
closed
2
2022-07-04T08:13:38
2022-07-07T13:57:11
2022-07-07T13:57:11
severo
[ "bug" ]
## Describe the bug Creating a dataset from a pandas dataframe with `datetime.time` format generates an error. ## Steps to reproduce the bug ```python import pandas as pd from datetime import time from datasets import Dataset df = pd.DataFrame({"feature_name": [time(1, 1, 1)]}) dataset = Dataset.from_pandas(df) ``` ## Expected results The dataset should be created. ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 823, in from_pandas return cls(table, info=info, split=split) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 679, in __init__ inferred_features = Features.from_arrow_schema(arrow_table.schema) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1551, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 1315, in generate_from_arrow_type return Value(dtype=_arrow_to_datasets_dtype(pa_type)) File "/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/features/features.py", line 83, in _arrow_to_datasets_dtype return f"time64[{arrow_type.unit}]" AttributeError: 'pyarrow.lib.DataType' object has no attribute 'unit' ``` ## Environment info - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.13.0-1031-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 7.0.0 - Pandas version: 1.4.2
false
1,292,107,275
https://api.github.com/repos/huggingface/datasets/issues/4619
https://github.com/huggingface/datasets/issues/4619
4,619
np arrays get turned into native lists
open
3
2022-07-02T17:54:57
2022-07-03T20:27:07
null
ZhaofengWu
[ "bug" ]
## Describe the bug When attaching an `np.array` field, it seems that it automatically gets turned into a list (see below). Why is this happening? Could it lose precision? Is there a way to make sure this doesn't happen? ## Steps to reproduce the bug ```python >>> import datasets, numpy as np >>> dataset = datasets.load_dataset("glue", "mrpc")["validation"] Reusing dataset glue (...) 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 1360.61it/s] >>> dataset2 = dataset.map(lambda x: {"tmp": np.array([0.5])}, batched=False) 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 408/408 [00:00<00:00, 10819.97ex/s] >>> dataset2[0]["tmp"] [0.5] >>> type(dataset2[0]["tmp"]) <class 'list'> ``` ## Expected results `dataset2[0]["tmp"]` should be an `np.ndarray`. ## Actual results It's a list. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: mac, though I'm pretty sure it happens on a linux machine too - Python version: 3.9.7 - PyArrow version: 6.0.1
false
1,292,078,225
https://api.github.com/repos/huggingface/datasets/issues/4618
https://github.com/huggingface/datasets/issues/4618
4,618
contribute data loading for object detection datasets with yolo data format
open
4
2022-07-02T15:21:59
2022-07-21T14:10:44
null
faizankshaikh
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** At the moment, HF datasets loads [image classification datasets](https://huggingface.co/docs/datasets/image_process) out-of-the-box. There could be a data loader for loading standard object detection datasets ([original discussion here](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/discussions/2)) **Describe the solution you'd like** I wrote a [custom script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) to load dataset which has YOLO data format. **Describe alternatives you've considered** The script can either be a standalone dataset builder, or a modified version of `ImageFolder` **Additional context** I would be happy to contribute to this, but I would do it at a very slow pace (maybe a month or two) as I have my exams approaching πŸ˜„
false
1,291,307,428
https://api.github.com/repos/huggingface/datasets/issues/4615
https://github.com/huggingface/datasets/pull/4615
4,615
Fix `embed_storage` on features inside lists/sequences
closed
1
2022-07-01T11:52:08
2022-07-08T12:13:10
2022-07-08T12:01:36
mariosasko
[]
Add a dedicated function for embed_storage to always preserve the embedded/casted arrays (and to have more control over `embed_storage` in general). Fix #4591 ~~(Waiting for #4608 to be merged to mark this PR as ready for review - required for fixing `xgetsize` in private repos)~~ Done!
true
1,291,218,020
https://api.github.com/repos/huggingface/datasets/issues/4614
https://github.com/huggingface/datasets/pull/4614
4,614
Ensure ConcatenationTable.cast uses target_schema metadata
closed
2
2022-07-01T10:22:08
2022-07-19T13:48:45
2022-07-19T13:36:24
dtuit
[]
Currently, `ConcatenationTable.cast` does not use target_schema metadata when casting subtables. This causes an issue when using cast_column and the underlying table is a ConcatenationTable. Code example of where the issue arises: ``` from datasets import Dataset, Image column1 = [0, 1] image_paths = ['/images/image1.jpg', '/images/image2.jpg'] ds = Dataset.from_dict({"column1": column1}) ds = ds.add_column("image", image_paths) ds.cast_column("image", Image()) # Fails here ``` Output ``` ... TypeError: Couldn't cast array of type string to {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ```
true
1,291,181,193
https://api.github.com/repos/huggingface/datasets/issues/4613
https://github.com/huggingface/datasets/pull/4613
4,613
Align/fix license metadata info
closed
3
2022-07-01T09:50:50
2022-07-01T12:53:57
2022-07-01T12:42:47
julien-c
[]
fix bad "other-*" licenses and add the corresponding "license_details" when relevant
true
1,290,984,660
https://api.github.com/repos/huggingface/datasets/issues/4612
https://github.com/huggingface/datasets/issues/4612
4,612
Release 2.3.0 broke custom iterable datasets
closed
3
2022-07-01T06:46:07
2022-07-05T15:08:21
2022-07-05T15:08:21
aapot
[ "bug" ]
## Describe the bug Trying to iterate examples from a custom iterable dataset fails due to a bug introduced in `torch_iterable_dataset.py` since the release of 2.3.0. ## Steps to reproduce the bug ```python next(iter(custom_iterable_dataset)) ``` ## Expected results `next(iter(custom_iterable_dataset))` should return examples from the dataset ## Actual results ``` /usr/local/lib/python3.7/dist-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py in _set_fsspec_for_multiprocess() 16 See https://github.com/fsspec/gcsfs/issues/379 17 """ ---> 18 fsspec.asyn.iothread[0] = None 19 fsspec.asyn.loop[0] = None 20 AttributeError: module 'fsspec' has no attribute 'asyn' ``` ## Environment info - `datasets` version: 2.3.0 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
false
1,290,940,874
https://api.github.com/repos/huggingface/datasets/issues/4611
https://github.com/huggingface/datasets/pull/4611
4,611
Preserve member order by MockDownloadManager.iter_archive
closed
1
2022-07-01T05:48:20
2022-07-01T16:59:11
2022-07-01T16:48:28
albertvillanova
[]
Currently, `MockDownloadManager.iter_archive` yields paths to archive members in an order given by `path.rglob("*")`, which might not be the same order as in the original archive. See issue in: - https://github.com/huggingface/datasets/pull/4579#issuecomment-1172135027 This PR fixes the order of the members yielded by `MockDownloadManager.iter_archive` so that it is the same as in the original archive.
true
1,290,603,827
https://api.github.com/repos/huggingface/datasets/issues/4610
https://github.com/huggingface/datasets/issues/4610
4,610
codeparrot/github-code failing to load
closed
8
2022-06-30T20:24:48
2022-07-05T14:24:13
2022-07-05T09:19:56
PyDataBlog
[ "bug" ]
## Describe the bug codeparrot/github-code fails to load with a `TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path'` ## Steps to reproduce the bug ```python from datasets import load_dataset ``` ## Expected results loaded dataset object ## Actual results ```python [3]: dataset = load_dataset("codeparrot/github-code") No config specified, defaulting to: github-code/all-all Downloading and preparing dataset github-code/all-all to /home/bebr/.cache/huggingface/datasets/codeparrot___github-code/all-all/0.0.0/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817... --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Input In [3], in <cell line: 1>() ----> 1 dataset = load_dataset("codeparrot/github-code") File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1678 # Download and prepare data -> 1679 builder_instance.download_and_prepare( 1680 download_config=download_config, 1681 download_mode=download_mode, 1682 ignore_verifications=ignore_verifications, 1683 try_from_hf_gcs=try_from_hf_gcs, 1684 use_auth_token=use_auth_token, 1685 ) 1687 # Build dataset for splits 1688 keep_in_memory = ( 1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1690 ) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:704, in DatasetBuilder.download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 702 logger.warning("HF google storage unreachable. 
Downloading and preparing it from source") 703 if not downloaded_from_gcs: --> 704 self._download_and_prepare( 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info 708 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:1221, in GeneratorBasedBuilder._download_and_prepare(self, dl_manager, verify_infos) 1220 def _download_and_prepare(self, dl_manager, verify_infos): -> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File ~/miniconda3/envs/fastapi-kube/lib/python3.10/site-packages/datasets/builder.py:771, in DatasetBuilder._download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 769 split_dict = SplitDict(dataset_name=self.name) 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 773 # Checksums verification 774 if verify_infos and dl_manager.record_checksums: File ~/.cache/huggingface/modules/datasets_modules/datasets/codeparrot--github-code/a55513bc0f81db773f9896c7aac225af0cff5b323bb9d2f68124f0a8cc3fb817/github-code.py:169, in GithubCode._split_generators(self, dl_manager) 162 def _split_generators(self, dl_manager): 164 hfh_dataset_info = HfApi(datasets.config.HF_ENDPOINT).dataset_info( 165 _REPO_NAME, 166 timeout=100.0, 167 ) --> 169 patterns = datasets.data_files.get_patterns_in_dataset_repository(hfh_dataset_info) 170 data_files = datasets.data_files.DataFilesDict.from_hf_repo( 171 patterns, 172 dataset_info=hfh_dataset_info, 173 ) 175 files = dl_manager.download_and_extract(data_files["train"]) TypeError: get_patterns_in_dataset_repository() missing 1 required positional argument: 'base_path' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.18.7-arch1-1-x86_64-with-glibc2.35 - Python version: 3.10.5 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,290,392,083
https://api.github.com/repos/huggingface/datasets/issues/4609
https://github.com/huggingface/datasets/issues/4609
4,609
librispeech dataset has to download the whole subset when specifying the split to use
closed
2
2022-06-30T16:38:24
2022-07-12T21:44:32
2022-07-12T21:44:32
sunhaozhepy
[ "bug" ]
## Describe the bug librispeech dataset has to download the whole subset when specifying the split to use ## Steps to reproduce the bug see below # Sample code to reproduce the bug ``` !pip install datasets from datasets import load_dataset raw_dataset = load_dataset("librispeech_asr", "clean", split="train.100") ``` ## Expected results The split "train.clean.100" is downloaded. ## Actual results All four splits in the "clean" subset are downloaded. ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
false
1,290,298,002
https://api.github.com/repos/huggingface/datasets/issues/4608
https://github.com/huggingface/datasets/pull/4608
4,608
Fix xisfile, xgetsize, xisdir, xlistdir in private repo
closed
2
2022-06-30T15:23:21
2022-07-06T12:45:59
2022-07-06T12:34:19
lhoestq
[]
`xisfile` is working in a private repository when passing a chained URL to a file inside an archive, e.g. `zip://a.txt::https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. However it's not working when passing a simple file `https://huggingface/datasets/username/dataset_name/resolve/main/data.zip`. This is because the authentication headers are not passed correctly in this case. This is causing dataset streaming to fail in private parquet repositories, as noted in https://github.com/huggingface/datasets/issues/4605 I fixed `xisfile` and the other functions that behave the same way: xgetsize, xisdir and xlistdir TODO: - [x] tests fix https://github.com/huggingface/datasets/issues/4605
true
1,290,171,941
https://api.github.com/repos/huggingface/datasets/issues/4607
https://github.com/huggingface/datasets/pull/4607
4,607
Align more metadata with other repo types (models,spaces)
closed
5
2022-06-30T13:52:12
2022-07-01T12:00:37
2022-07-01T11:49:14
julien-c
[]
see also associated PR on the `datasets-tagging` Space: https://huggingface.co/spaces/huggingface/datasets-tagging/discussions/2 (to merge after this one is merged)
true
1,290,083,534
https://api.github.com/repos/huggingface/datasets/issues/4606
https://github.com/huggingface/datasets/issues/4606
4,606
evaluation result changes after `datasets` version change
closed
1
2022-06-30T12:43:26
2023-07-25T15:05:26
2023-07-25T15:05:26
thnkinbtfly
[ "bug" ]
## Describe the bug evaluation result changes after `datasets` version change ## Steps to reproduce the bug 1. Train a model on WikiAnn 2. reload the ckpt -> test accuracy becomes same as eval accuracy 3. such behavior is gone after downgrading `datasets` https://colab.research.google.com/drive/1kYz7-aZRGdayaq-gDTt30tyEgsKlpYOw?usp=sharing ## Expected results evaluation result shouldn't change before/after `datasets` version changes ## Actual results evaluation result changes before/after `datasets` version changes ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: colab - Python version: 3.7.13 - PyArrow version: 6.0.1 Q. How could the evaluation result change before/after `datasets` version changes?
false
1,290,058,970
https://api.github.com/repos/huggingface/datasets/issues/4605
https://github.com/huggingface/datasets/issues/4605
4,605
Dataset Viewer issue for boris/gis_filtered
closed
5
2022-06-30T12:23:34
2022-07-06T12:34:19
2022-07-06T12:34:19
WaterKnight1998
[ "streaming" ]
### Link https://huggingface.co/datasets/boris/gis_filtered/viewer/boris--gis_filtered/train ### Description When I try to access this from the website I get this error: Status code: 400 Exception: ClientResponseError Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/boris/gis_filtered/resolve/80b805053ce61d4eb487b6b8d9095d775c2c466e/data/train/0000.parquet') If I try to load with code I also get the same issue: ```python dataset2_train=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"],split="train",streaming=True) dataset2_validation=load_dataset("boris/gis_filtered", use_auth_token=os.environ["HF_TOKEN"], split="validation",streaming=True) ``` ### Owner No
false
1,289,963,962
https://api.github.com/repos/huggingface/datasets/issues/4604
https://github.com/huggingface/datasets/pull/4604
4,604
Update CI Windows orb
closed
1
2022-06-30T11:00:31
2022-06-30T13:33:11
2022-06-30T13:22:26
albertvillanova
[]
This PR tries to fix recurrent random CI failures on Windows. After 2 runs, it seems to have fixed the issue. Fix #4603.
true
1,289,963,331
https://api.github.com/repos/huggingface/datasets/issues/4603
https://github.com/huggingface/datasets/issues/4603
4,603
CI fails recurrently and randomly on Windows
closed
0
2022-06-30T10:59:58
2022-06-30T13:22:25
2022-06-30T13:22:25
albertvillanova
[ "bug" ]
As reported by @lhoestq, The windows CI is currently flaky: some dependencies like `aiobotocore`, `multiprocess` and `seqeval` sometimes fail to install. In particular it seems that building the wheels fail. Here is an example of logs: ``` Building wheel for seqeval (setup.py): started Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6' No parent package detected, impossible to derive `name` running bdist_wheel running build running build_py package init file 'seqeval\__init__.py' not found (or not a regular file) package init file 'seqeval\metrics\__init__.py' not found (or not a regular file) C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, installing to build\bdist.win-amd64\wheel running install running install_lib warning: install_lib: 'build\lib' does not exist -- no Python modules to install running install_egg_info running egg_info creating UNKNOWN.egg-info writing UNKNOWN.egg-info\PKG-INFO writing dependency_links to UNKNOWN.egg-info\dependency_links.txt writing top-level names to UNKNOWN.egg-info\top_level.txt writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' reading manifest file 'UNKNOWN.egg-info\SOURCES.txt' writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info running install_scripts creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it adding 'UNKNOWN-0.0.0.dist-info/METADATA' adding 'UNKNOWN-0.0.0.dist-info/WHEEL' adding 'UNKNOWN-0.0.0.dist-info/top_level.txt' adding 'UNKNOWN-0.0.0.dist-info/RECORD' removing build\bdist.win-amd64\wheel Building wheel for seqeval (setup.py): finished with status 'done' Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1 Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN' ```
false
1,289,950,379
https://api.github.com/repos/huggingface/datasets/issues/4602
https://github.com/huggingface/datasets/pull/4602
4,602
Upgrade setuptools in windows CI
closed
1
2022-06-30T10:48:41
2023-09-24T10:05:10
2022-06-30T12:46:17
lhoestq
[]
The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install. In particular it seems that building the wheels fail. Here is an example of logs ``` Building wheel for seqeval (setup.py): started Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6' No parent package detected, impossible to derive `name` running bdist_wheel running build running build_py package init file 'seqeval\__init__.py' not found (or not a regular file) package init file 'seqeval\metrics\__init__.py' not found (or not a regular file) C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, installing to build\bdist.win-amd64\wheel running install running install_lib warning: install_lib: 'build\lib' does not exist -- no Python modules to install running install_egg_info running egg_info creating UNKNOWN.egg-info writing UNKNOWN.egg-info\PKG-INFO writing dependency_links to UNKNOWN.egg-info\dependency_links.txt writing top-level names to UNKNOWN.egg-info\top_level.txt writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' reading manifest file 'UNKNOWN.egg-info\SOURCES.txt' writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info running install_scripts creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it adding 'UNKNOWN-0.0.0.dist-info/METADATA' adding 'UNKNOWN-0.0.0.dist-info/WHEEL' adding 'UNKNOWN-0.0.0.dist-info/top_level.txt' adding 'UNKNOWN-0.0.0.dist-info/RECORD' removing build\bdist.win-amd64\wheel Building wheel for seqeval (setup.py): finished with status 'done' Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1 Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN' ``` hopefully this fixes the issue
true
1,289,924,715
https://api.github.com/repos/huggingface/datasets/issues/4601
https://github.com/huggingface/datasets/pull/4601
4,601
Upgrade pip in WIN CI
closed
2
2022-06-30T10:25:42
2023-09-24T10:04:25
2022-06-30T10:43:38
lhoestq
[]
The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install. In particular it seems that building the wheels fail. Here is an example of logs ``` Building wheel for seqeval (setup.py): started Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6' No parent package detected, impossible to derive `name` running bdist_wheel running build running build_py package init file 'seqeval\__init__.py' not found (or not a regular file) package init file 'seqeval\metrics\__init__.py' not found (or not a regular file) C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools. setuptools.SetuptoolsDeprecationWarning, installing to build\bdist.win-amd64\wheel running install running install_lib warning: install_lib: 'build\lib' does not exist -- no Python modules to install running install_egg_info running egg_info creating UNKNOWN.egg-info writing UNKNOWN.egg-info\PKG-INFO writing dependency_links to UNKNOWN.egg-info\dependency_links.txt writing top-level names to UNKNOWN.egg-info\top_level.txt writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' reading manifest file 'UNKNOWN.egg-info\SOURCES.txt' writing manifest file 'UNKNOWN.egg-info\SOURCES.txt' Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info running install_scripts creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it adding 'UNKNOWN-0.0.0.dist-info/METADATA' adding 'UNKNOWN-0.0.0.dist-info/WHEEL' adding 'UNKNOWN-0.0.0.dist-info/top_level.txt' adding 'UNKNOWN-0.0.0.dist-info/RECORD' removing build\bdist.win-amd64\wheel Building wheel for seqeval (setup.py): finished with status 'done' Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1 Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7 WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN' ``` I tried to update pip and re-run the CI several times and I couldn't re-experience this issue for now, so I think upgrading pip may solve the issue
true
1,289,177,042
https://api.github.com/repos/huggingface/datasets/issues/4600
https://github.com/huggingface/datasets/pull/4600
4,600
Remove multiple config section
closed
1
2022-06-29T19:09:21
2022-07-04T17:41:20
2022-07-04T17:29:41
stevhliu
[ "documentation" ]
This PR removes docs for a future feature and redirects to #4578 instead. See this [discussion](https://huggingface.slack.com/archives/C034N0A7H09/p1656107063801969) for more details :)
true
1,288,849,933
https://api.github.com/repos/huggingface/datasets/issues/4599
https://github.com/huggingface/datasets/pull/4599
4,599
Smooth-BLEU bug fixed
closed
1
2022-06-29T14:51:42
2022-09-23T07:42:40
2022-09-23T07:42:40
Aktsvigun
[ "transfer-to-evaluate" ]
Hi, the current implementation of smooth-BLEU contains a bug: it smoothes unigrams as well. Consequently, when both the reference and translation consist of totally different tokens, it anyway returns a non-zero value (please see the attached image). This however contradicts the source paper suggesting the smooth-BLEU _(Chin-Yew Lin, Franz Josef Och. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. COLING 2004.)_ : > Add one count to the n-gram hit and total ngram count for n > 1. Therefore, for candidate translations with less than n words, they can still get a positive smoothed BLEU score from shorter n-gram matches; however if nothing matches then they will get zero scores. This pull request aims at fixing this bug. I made a pull request in the target repository `tensorflow/nmt`, which implements this script, yet the last commit there is dating 19.02.2019 and I doubt whether this will be fixed promptly. Yet, this bug is critical, for instance for summarization datasets with short summaries (e.g. AESLC), since smoothing needs to be applied there. Therefore, the easiest solution that I found is to fork the repo and download this script directly from the forked fixed repo. Kind, Akim Tsvigun <img width="516" alt="Π‘Π½ΠΈΠΌΠΎΠΊ экрана 2022-06-29 Π² 17 49 27" src="https://user-images.githubusercontent.com/36672861/176466935-ac579e6d-6a93-4111-ab41-9b33056e7d47.png">
true
1,288,774,514
https://api.github.com/repos/huggingface/datasets/issues/4598
https://github.com/huggingface/datasets/pull/4598
4,598
Host financial_phrasebank data on the Hub
closed
1
2022-06-29T13:59:31
2022-07-01T09:41:14
2022-07-01T09:29:36
albertvillanova
[]
Fix #4597.
true
1,288,672,007
https://api.github.com/repos/huggingface/datasets/issues/4597
https://github.com/huggingface/datasets/issues/4597
4,597
Streaming issue for financial_phrasebank
closed
3
2022-06-29T12:45:43
2022-07-01T09:29:36
2022-07-01T09:29:36
lewtun
[ "hosted-on-google-drive" ]
### Link https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree/train ### Description As reported by a community member using [AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/5#62bc217436d0e5d316a768f0), there seems to be a problem streaming this dataset: ``` Server error Status code: 400 Exception: Exception Message: Give up after 5 attempts with ConnectionError ``` ### Owner No
false
1,288,381,735
https://api.github.com/repos/huggingface/datasets/issues/4596
https://github.com/huggingface/datasets/issues/4596
4,596
Dataset Viewer issue for universal_dependencies
closed
2
2022-06-29T08:50:29
2022-09-07T11:29:28
2022-09-07T11:29:27
Jordy-VL
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/universal_dependencies ### Description invalid json response body at https://datasets-server.huggingface.co/splits?dataset=universal_dependencies reason: Unexpected token I in JSON at position 0 ### Owner _No response_
false
1,288,275,976
https://api.github.com/repos/huggingface/datasets/issues/4595
https://github.com/huggingface/datasets/issues/4595
4,595
Dataset Viewer issue with False positive PII redaction
closed
2
2022-06-29T07:15:57
2022-06-29T08:29:41
2022-06-29T08:27:49
cakiki
[]
### Link https://huggingface.co/datasets/cakiki/rosetta-code ### Description Hello, I just noticed an entry being redacted that shouldn't have been: `RootMeanSquare@Range[10]` is being displayed as `[email protected][10]` ### Owner _No response_
false
1,288,070,023
https://api.github.com/repos/huggingface/datasets/issues/4594
https://github.com/huggingface/datasets/issues/4594
4,594
load_from_disk suggests incorrect fix when used to load DatasetDict
closed
0
2022-06-29T01:40:01
2022-06-29T04:03:44
2022-06-29T04:03:44
dvsth
[ "bug" ]
Edit: Please feel free to remove this issue. The problem was not the error message but the fact that the DatasetDict.load_from_disk does not support loading nested splits, i.e. if one of the splits is itself a DatasetDict. If nesting splits is an antipattern, perhaps the load_from_disk function can throw a warning indicating that?
false
1,288,067,699
https://api.github.com/repos/huggingface/datasets/issues/4593
https://github.com/huggingface/datasets/pull/4593
4,593
Fix error message when using load_from_disk to load DatasetDict
closed
0
2022-06-29T01:34:27
2022-06-29T04:01:59
2022-06-29T04:01:39
dvsth
[]
Issue #4594 Issue: When `datasets.load_from_disk` is wrongly used to load a `DatasetDict`, the error message suggests using `datasets.load_from_disk`, which is the same function that generated the error. Fix: The appropriate function which should be suggested instead is `datasets.dataset_dict.load_from_disk`. Changes: Change the suggestion to say "Please use `datasets.dataset_dict.load_from_disk` instead."
true
1,288,029,377
https://api.github.com/repos/huggingface/datasets/issues/4592
https://github.com/huggingface/datasets/issues/4592
4,592
Issue with jalFaizy/detect_chess_pieces when running datasets-cli test
closed
3
2022-06-29T00:15:54
2022-06-29T10:30:03
2022-06-29T07:49:27
faizankshaikh
[]
### Link https://huggingface.co/datasets/jalFaizy/detect_chess_pieces ### Description I am trying to write a appropriate data loader for [a custom dataset](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces) using [this script](https://huggingface.co/datasets/jalFaizy/detect_chess_pieces/blob/main/detect_chess_pieces.py) When I run the command `$ datasets-cli test "D:\workspace\HF\detect_chess_pieces" --save_infos --all_configs` It gives the following error ``` Using custom data configuration default Traceback (most recent call last): File "c:\users\faiza\anaconda3\lib\runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "c:\users\faiza\anaconda3\lib\runpy.py", line 87, in _run_code exec(code, run_globals) File "C:\Users\faiza\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 7, in <module> File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\datasets_cli.py", line 39, in main service.run() File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 132, in run for j, builder in enumerate(get_builders()): File "c:\users\faiza\anaconda3\lib\site-packages\datasets\commands\test.py", line 125, in get_builders yield builder_cls( File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 1148, in __init__ super().__init__(*args, **kwargs) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 306, in __init__ info = self.get_exported_dataset_info() File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 405, in get_exported_dataset_info return self.get_all_exported_dataset_infos().get(self.config.name, DatasetInfo()) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\builder.py", line 390, in get_all_exported_dataset_infos return DatasetInfosDict.from_directory(cls.get_imported_module_dir()) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 309, in from_directory dataset_infos_dict = { File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 310, in <dictcomp> config_name: DatasetInfo.from_dict(dataset_info_dict) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 272, in from_dict return cls(**{k: v for k, v in dataset_info_dict.items() if k in field_names}) File "<string>", line 20, in __init__ File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 160, in __post_init__ templates = [ File "c:\users\faiza\anaconda3\lib\site-packages\datasets\info.py", line 161, in <listcomp> template if isinstance(template, TaskTemplate) else task_template_from_dict(template) File "c:\users\faiza\anaconda3\lib\site-packages\datasets\tasks\__init__.py", line 43, in task_template_from_dict return template.from_dict(task_template_dict) AttributeError: 'NoneType' object has no attribute 'from_dict' ``` My assumption is that there is some kind of issue in how the "task_templates" are read, because even if I keep them as None, or not include the argument at all, the same error occurs ### Owner Yes
false
1,288,021,332
https://api.github.com/repos/huggingface/datasets/issues/4591
https://github.com/huggingface/datasets/issues/4591
4,591
Can't push Images to hub with manual Dataset
closed
1
2022-06-29T00:01:23
2022-07-08T12:01:36
2022-07-08T12:01:35
cceyda
[ "bug" ]
## Describe the bug If I create a dataset including an 'Image' feature manually, when pushing to hub decoded images are not pushed, instead it looks for image where image local path is/used to be. This doesn't (at least didn't used to) happen with imagefolder. I want to build dataset manually because it is complicated. This happens even though the dataset is looking like decoded images: ![image](https://user-images.githubusercontent.com/15624271/176322689-2cc819cf-9d5c-4a8f-9f3d-83ae8ec06f20.png) and I use `embed_external_files=True` while `push_to_hub` (same with false) ## Steps to reproduce the bug ```python from PIL import Image from datasets import Image as ImageFeature from datasets import Features,Dataset #manually create dataset feats=Features( { "images": [ImageFeature()], #same even if explicitly ImageFeature(decode=True) "input_image": ImageFeature(), } ) test_data={"images":[[Image.open("test.jpg"),Image.open("test.jpg"),Image.open("test.jpg")]], "input_image":[Image.open("test.jpg")]} test_dataset=Dataset.from_dict(test_data,features=feats) print(test_dataset) test_dataset.push_to_hub("ceyda/image_test_public",private=False,token="",embed_external_files=True) # clear cache rm -r ~/.cache/huggingface # remove "test.jpg" # remove to see that it is looking for image on the local path test_dataset=load_dataset("ceyda/image_test_public",use_auth_token="") print(test_dataset) print(test_dataset['train'][0]) ``` ## Expected results should be able to push image bytes if dataset has `Image(decode=True)` ## Actual results errors because it is trying to decode file from the non existing local path. ``` ----> print(test_dataset['train'][0]) File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2154, in Dataset.__getitem__(self, key) 2152 def __getitem__(self, key): # noqa: F811 2153 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2154 return self._getitem( 2155 key, 2156 ) File ~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py:2139, in Dataset._getitem(self, key, decoded, **kwargs) 2137 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2138 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2139 formatted_output = format_table( 2140 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2141 ) 2142 return formatted_output File ~/.local/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: ... -> 3068 fp = builtins.open(filename, "rb") 3069 exclusive_fp = True 3071 try: FileNotFoundError: [Errno 2] No such file or directory: 'test.jpg' ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.0-1074-azure-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,287,941,058
https://api.github.com/repos/huggingface/datasets/issues/4590
https://github.com/huggingface/datasets/pull/4590
4,590
Generalize meta_path json file creation in load.py [#4540]
closed
4
2022-06-28T21:48:06
2022-07-08T14:55:13
2022-07-07T13:17:45
VijayKalmath
[]
# What does this PR do? ## Summary *In function `_copy_script_and_other_resources_in_importable_dir`, using string split when generating `meta_path` throws error in the edge case raised in #4540.* ## Additions - ## Changes - Changed meta_path to use `os.path.splitext` instead of using `str.split` to generalize code. ## Deletions - ## Issues Addressed : Fixes #4540
true
1,287,600,029
https://api.github.com/repos/huggingface/datasets/issues/4589
https://github.com/huggingface/datasets/issues/4589
4,589
Permission denied: '/home/.cache' when load_dataset with local script
closed
0
2022-06-28T16:26:03
2022-06-29T06:26:28
2022-06-29T06:25:08
jiangh0
[ "bug" ]
null
false
1,287,368,751
https://api.github.com/repos/huggingface/datasets/issues/4588
https://github.com/huggingface/datasets/pull/4588
4,588
Host head_qa data on the Hub and fix NonMatchingChecksumError
closed
3
2022-06-28T13:39:28
2022-07-05T16:01:15
2022-07-05T15:49:52
albertvillanova
[]
This PR: - Hosts head_qa data on the Hub instead of Google Drive - Fixes NonMatchingChecksumError Fix https://huggingface.co/datasets/head_qa/discussions/1
true
1,287,291,494
https://api.github.com/repos/huggingface/datasets/issues/4587
https://github.com/huggingface/datasets/pull/4587
4,587
Validate new_fingerprint passed by user
closed
1
2022-06-28T12:46:21
2022-06-28T14:11:57
2022-06-28T14:00:44
lhoestq
[]
Users can pass the dataset fingerprint they want in `map` and other dataset transforms. However the fingerprint is used to name cache files so we need to make sure it doesn't contain bad characters as mentioned in https://github.com/huggingface/datasets/issues/1718, and that it's not too long
true
1,287,105,636
https://api.github.com/repos/huggingface/datasets/issues/4586
https://github.com/huggingface/datasets/pull/4586
4,586
Host pn_summary data on the Hub instead of Google Drive
closed
1
2022-06-28T10:05:05
2022-06-28T14:52:56
2022-06-28T14:42:03
albertvillanova
[]
Fix #4581.
true
1,287,064,929
https://api.github.com/repos/huggingface/datasets/issues/4585
https://github.com/huggingface/datasets/pull/4585
4,585
Host multi_news data on the Hub instead of Google Drive
closed
1
2022-06-28T09:32:06
2022-06-28T14:19:35
2022-06-28T14:08:48
albertvillanova
[]
Host data files of multi_news dataset on the Hub. They were on Google Drive. Fix #4580.
true
1,286,911,993
https://api.github.com/repos/huggingface/datasets/issues/4584
https://github.com/huggingface/datasets/pull/4584
4,584
Add binary classification task IDs
closed
4
2022-06-28T07:30:39
2023-09-24T10:04:04
2023-01-26T09:27:52
lewtun
[]
As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification. This PR adds binary classification to the task IDs to enable this. Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597 cc @abhishekkrthakur @SBrandeis
true
1,286,790,871
https://api.github.com/repos/huggingface/datasets/issues/4583
https://github.com/huggingface/datasets/pull/4583
4,583
<code> implementation of FLAC support using torchaudio
closed
0
2022-06-28T05:24:21
2022-06-28T05:47:02
2022-06-28T05:47:02
rafael-ariascalles
[]
I have added FLAC audio support with torchaudio, given that Librosa and SoundFile can give problems. Also, FLAC is being used as the audio format by https://mlcommons.org/en/peoples-speech/
true
1,286,517,060
https://api.github.com/repos/huggingface/datasets/issues/4582
https://github.com/huggingface/datasets/pull/4582
4,582
add_column should preserve _indexes
open
1
2022-06-27T22:35:47
2022-07-06T15:19:54
null
cceyda
[]
https://github.com/huggingface/datasets/issues/3769#issuecomment-1167146126 doing `.add_column("x",x_data)` also removed any `_indexes` on the dataset; we decided this shouldn't be the case. This happened because `add_column` created a new `Dataset(...)` and it wasn't possible to pass indexes on init. With this PR, indexes can now be passed on init through `IndexableMixin`. - [x] Added test
true
1,286,362,907
https://api.github.com/repos/huggingface/datasets/issues/4581
https://github.com/huggingface/datasets/issues/4581
4,581
Dataset Viewer issue for pn_summary
closed
3
2022-06-27T20:56:12
2022-06-28T14:42:03
2022-06-28T14:42:03
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/pn_summary/viewer/1.0.0/validation ### Description Getting an index error on the `validation` and `test` splits: ``` Server error Status code: 400 Exception: IndexError Message: list index out of range ``` ### Owner No
false
1,286,312,912
https://api.github.com/repos/huggingface/datasets/issues/4580
https://github.com/huggingface/datasets/issues/4580
4,580
Dataset Viewer issue for multi_news
closed
2
2022-06-27T20:25:25
2022-06-28T14:08:48
2022-06-28T14:08:48
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/multi_news ### Description Not sure what the index error is referring to here: ``` Status code: 400 Exception: IndexError Message: list index out of range ``` ### Owner No
false
1,286,106,285
https://api.github.com/repos/huggingface/datasets/issues/4579
https://github.com/huggingface/datasets/pull/4579
4,579
Support streaming cfq dataset
closed
6
2022-06-27T17:11:23
2022-07-04T19:35:01
2022-07-04T19:23:57
albertvillanova
[]
Support streaming cfq dataset.
true
1,286,086,400
https://api.github.com/repos/huggingface/datasets/issues/4578
https://github.com/huggingface/datasets/issues/4578
4,578
[Multi Configs] Use directories to differentiate between subsets/configurations
open
3
2022-06-27T16:55:11
2023-06-14T15:43:05
null
lhoestq
[ "enhancement" ]
Currently to define several subsets/configurations of your dataset, you need to use a dataset script. However it would be nice to have a no-code way to to this. For example we could specify different configurations of a dataset (for example, if a dataset contains different languages) with one directory per configuration. These structures are not supported right now, but would be nice to have: ``` my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ en/ β”‚ β”œβ”€β”€ train.csv β”‚ └── test.csv └── fr/ β”œβ”€β”€ train.csv └── test.csv ``` Or with one directory per split: ``` my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ en/ β”‚ β”œβ”€β”€ train/ β”‚ β”‚ β”œβ”€β”€ shard_0.csv β”‚ β”‚ └── shard_1.csv β”‚ └── test/ β”‚ β”œβ”€β”€ shard_0.csv β”‚ └── shard_1.csv └── fr/ β”œβ”€β”€ train/ β”‚ β”œβ”€β”€ shard_0.csv β”‚ └── shard_1.csv └── test/ β”œβ”€β”€ shard_0.csv └── shard_1.csv ``` cc @stevhliu @albertvillanova This can be specified in the README as YAML with ``` configs: - config_name: en data_dir: en - config_name: fr data_dir: fr ```
false
1,285,703,775
https://api.github.com/repos/huggingface/datasets/issues/4577
https://github.com/huggingface/datasets/pull/4577
4,577
Add authentication tip to `load_dataset`
closed
1
2022-06-27T12:05:34
2022-07-04T13:13:15
2022-07-04T13:01:30
mariosasko
[]
Add an authentication tip similar to the one in transformers' `PreTrainedModel.from_pretrained` to `load_dataset`/`load_dataset_builder`.
true
1,285,698,576
https://api.github.com/repos/huggingface/datasets/issues/4576
https://github.com/huggingface/datasets/pull/4576
4,576
Include `metadata.jsonl` in resolved data files
closed
5
2022-06-27T12:01:29
2022-07-01T12:44:55
2022-06-30T10:15:32
mariosasko
[]
Include `metadata.jsonl` in resolved data files. Fix #4548 @lhoestq ~~https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 adds support for metadata files placed at the root, and https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 accounts for nested metadata files also, but this results in more complex code. Let me know which one of these two approaches you prefer.~~ Maybe https://github.com/huggingface/datasets/commit/d94336d30eef17fc9abc67f67fa1c139661f4e75 is good enough for now (for the sake of simplicity). https://github.com/huggingface/datasets/commit/4d20618ea7a19bc143ddc5fdff9d79e671fcbb95 breaks the imagefolder tests due to duplicates in the resolved metadata files. One way to fix this would be to resolve the metadata pattern only on parent directories, but this adds even more logic to `_get_data_files_patterns`, so not sure if this is what we should do.
true
1,285,446,700
https://api.github.com/repos/huggingface/datasets/issues/4575
https://github.com/huggingface/datasets/issues/4575
4,575
Problem about wmt17 zh-en dataset
closed
5
2022-06-27T08:35:42
2022-08-23T10:01:02
2022-08-23T10:00:21
winterfell2021
[ "bug" ]
It seems that in subset casia2015, some samples are like `{'c[hn]':'xxx', 'en': 'aa'}`. So when using `data = load_dataset('wmt17', "zh-en")` to load the wmt17 zh-en dataset, which will raise the exception: ``` Traceback (most recent call last): File "train.py", line 78, in <module> data = load_dataset(args.dataset, "zh-en") File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 1684, in load_dataset use_auth_token=use_auth_token, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 705, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1221, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 1215, in _prepare_split num_examples, num_bytes = writer.finalize() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 533, in finalize self.write_examples_on_file() File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 410, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 503, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 230, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 198, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1846, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1675, in wrapper return func(array, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/table.py", line 1756, in array_cast raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") TypeError: Couldn't cast array of type struct<c[hn]: string, en: string, zh: string> to struct<en: string, zh: string> ``` So the solution of this problem is to change the original array manually: ``` if 'c[hn]' in str(array.type): py_array = array.to_pylist() data_list = [] for vo in py_array: tmp = { 'en': vo['en'], } if 'zh' not in vo: tmp['zh'] = vo['c[hn]'] else: tmp['zh'] = vo['zh'] data_list.append(tmp) array = pa.array(data_list, type=pa.struct([ pa.field('en', pa.string()), pa.field('zh', pa.string()), ])) ``` Therefore, maybe a correct version of original casia2015 file need to be updated
false
1,285,380,616
https://api.github.com/repos/huggingface/datasets/issues/4574
https://github.com/huggingface/datasets/pull/4574
4,574
Support streaming mlsum dataset
closed
7
2022-06-27T07:37:03
2022-07-21T13:37:30
2022-07-21T12:40:00
albertvillanova
[]
Support streaming mlsum dataset. This PR: - pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1` - https://github.com/fsspec/filesystem_spec/pull/830 - unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1` > s3fs 2021.8.1 requires fsspec==2021.08.1 - see discussion: https://github.com/huggingface/datasets/pull/2858/files#r700027326 - updates the following requirements to be compatible with the previous ones and one with each other: - `aiobotocore==1.4.2` to `aiobotocore>=2.0.1` (required by s3fs>=2021.11.1) - `boto3==1.17.106` to `boto3>=1.19.8` (to be compatible with aiobotocore>=2.0.1) - `botocore==1.20.106` to `botocore>=1.22.8` (to be compatible with aiobotocore and boto3) Fix #4572.
true
1,285,023,629
https://api.github.com/repos/huggingface/datasets/issues/4573
https://github.com/huggingface/datasets/pull/4573
4,573
Fix evaluation metadata for ncbi_disease
closed
2
2022-06-26T20:29:32
2023-09-24T09:35:07
2022-09-23T09:38:02
lewtun
[ "dataset contribution" ]
This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream.
true
1,285,022,499
https://api.github.com/repos/huggingface/datasets/issues/4572
https://github.com/huggingface/datasets/issues/4572
4,572
Dataset Viewer issue for mlsum
closed
1
2022-06-26T20:24:17
2022-07-21T12:40:01
2022-07-21T12:40:01
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/mlsum/viewer/de/train ### Description There's seems to be a problem with the download / streaming of this dataset: ``` Server error Status code: 400 Exception: BadZipFile Message: File is not a zip file ``` ### Owner No
false
1,284,883,289
https://api.github.com/repos/huggingface/datasets/issues/4571
https://github.com/huggingface/datasets/issues/4571
4,571
move under the facebook org?
open
3
2022-06-26T11:19:09
2023-09-25T12:05:18
null
lewtun
[]
### Link https://huggingface.co/datasets/gsarti/flores_101 ### Description It seems like streaming isn't supported for this dataset: ``` Server Error Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://dl.fbaipublicfiles.com/flores101/dataset/flores101_dataset.tar.gz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` ### Owner No
false
1,284,846,168
https://api.github.com/repos/huggingface/datasets/issues/4570
https://github.com/huggingface/datasets/issues/4570
4,570
Dataset sharding non-contiguous?
closed
5
2022-06-26T08:34:05
2022-06-30T11:00:47
2022-06-26T14:36:20
cakiki
[ "bug" ]
## Describe the bug I'm not sure if this is a bug; more likely normal behavior but i wanted to double check. Is it normal that `datasets.shard` does not produce chunks that, when concatenated produce the original ordering of the sharded dataset? This might be related to this pull request (https://github.com/huggingface/datasets/pull/4466) but I have to admit I did not properly look into the changes made. ## Steps to reproduce the bug ```python max_shard_size = convert_file_size_to_int('300MB') dataset_nbytes = dataset.data.nbytes num_shards = int(dataset_nbytes / max_shard_size) + 1 num_shards = max(num_shards, 1) print(f"{num_shards=}") for shard_index in range(num_shards): shard = dataset.shard(num_shards=num_shards, index=shard_index) shard.to_parquet(f"tokenized/tokenized-{shard_index:03d}.parquet") os.listdir('tokenized/') ``` ## Expected results I expected the shards to match the order of the data of the original dataset; i.e. `dataset[10]` being the same as `shard_1[10]` for example ## Actual results Only the first element is the same; i.e. `dataset[0]` is the same as `shard_1[0]` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31 - Python version: 3.10.4 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,284,833,694
https://api.github.com/repos/huggingface/datasets/issues/4569
https://github.com/huggingface/datasets/issues/4569
4,569
Dataset Viewer issue for sst2
closed
2
2022-06-26T07:32:54
2022-06-27T06:37:48
2022-06-27T06:37:48
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/sst2 ### Description Not sure what is causing this, however it seems that `load_dataset("sst2")` also hangs (even though it downloads the files without problem): ``` Status code: 400 Exception: Exception Message: Give up after 5 attempts with ConnectionError ``` ### Owner No
false
1,284,655,624
https://api.github.com/repos/huggingface/datasets/issues/4568
https://github.com/huggingface/datasets/issues/4568
4,568
XNLI cache reload is very slow
closed
3
2022-06-25T16:43:56
2022-07-04T14:29:40
2022-07-04T14:29:40
Muennighoff
[ "bug" ]
### Reproduce Using `2.3.3.dev0` `from datasets import load_dataset` `load_dataset("xnli", "en")` Turn off Internet `load_dataset("xnli", "en")` I cancelled the second `load_dataset` eventually cuz it took super long. It would be great to have something to specify e.g. `only_load_from_cache` and avoid the library trying to download when there is no Internet. If I leave it running it works but takes way longer than when there is Internet. I would expect loading from cache to take the same amount of time regardless of whether there is Internet. ``` --------------------------------------------------------------------------- gaierror Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) /opt/conda/lib/python3.7/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 71 ---> 72 for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM): 73 af, socktype, proto, canonname, sa = res /opt/conda/lib/python3.7/socket.py in getaddrinfo(host, port, family, type, proto, flags) 751 addrlist = [] --> 752 for res in _socket.getaddrinfo(host, port, family, type, proto, flags): 753 af, socktype, proto, canonname, sa = res gaierror: [Errno -3] Temporary failure in name resolution During handling of the above exception, another exception occurred: KeyboardInterrupt Traceback (most recent call last) /tmp/ipykernel_33/3594208039.py in <module> ----> 1 load_dataset("xnli", "en") /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1673 revision=revision, 1674 use_auth_token=use_auth_token, -> 1675 **config_kwargs, 1676 ) 1677 /opt/conda/lib/python3.7/site-packages/datasets/load.py in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1494 download_mode=download_mode, 1495 data_dir=data_dir, -> 1496 data_files=data_files, 1497 ) 1498 /opt/conda/lib/python3.7/site-packages/datasets/load.py in dataset_module_factory(path, revision, download_config, download_mode, force_local_path, dynamic_modules_path, data_dir, data_files, **download_kwargs) 1182 download_config=download_config, 1183 download_mode=download_mode, -> 1184 dynamic_modules_path=dynamic_modules_path, 1185 ).get_module() 1186 elif path.count("/") == 1: # community dataset on the Hub /opt/conda/lib/python3.7/site-packages/datasets/load.py in __init__(self, name, revision, download_config, download_mode, dynamic_modules_path) 506 self.dynamic_modules_path = dynamic_modules_path 507 assert self.name.count("/") == 0 --> 508 increase_load_count(name, resource_type="dataset") 509 510 def download_loading_script(self, revision: Optional[str]) -> str: /opt/conda/lib/python3.7/site-packages/datasets/load.py in increase_load_count(name, resource_type) 166 if not config.HF_DATASETS_OFFLINE and config.HF_UPDATE_DOWNLOAD_COUNTS: 167 try: --> 168 head_hf_s3(name, filename=name + ".py", dataset=(resource_type == "dataset")) 169 except Exception: 170 pass /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in head_hf_s3(identifier, filename, use_cdn, dataset, max_retries) 93 return http_head( 94 
hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset), ---> 95 max_retries=max_retries, 96 ) 97 /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_head(url, proxies, headers, cookies, allow_redirects, timeout, max_retries) 445 allow_redirects=allow_redirects, 446 timeout=timeout, --> 447 max_retries=max_retries, 448 ) 449 return response /opt/conda/lib/python3.7/site-packages/datasets/utils/file_utils.py in _request_with_retry(method, url, max_retries, base_wait_time, max_wait_time, timeout, **params) 366 tries += 1 367 try: --> 368 response = requests.request(method=method.upper(), url=url, timeout=timeout, **params) 369 success = True 370 except (requests.exceptions.ConnectTimeout, requests.exceptions.ConnectionError) as err: /opt/conda/lib/python3.7/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 /opt/conda/lib/python3.7/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 527 } 528 send_kwargs.update(settings) --> 529 resp = self.send(prep, **send_kwargs) 530 531 return resp /opt/conda/lib/python3.7/site-packages/requests/sessions.py in send(self, request, **kwargs) 643 644 # Send the request --> 645 r = adapter.send(request, **kwargs) 646 647 # Total elapsed time of the request (approximately) /opt/conda/lib/python3.7/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 decode_content=False, 449 retries=self.max_retries, --> 450 timeout=timeout 451 ) 452 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 708 body=body, 709 headers=headers, --> 710 chunked=chunked, 711 ) 712 /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 384 # Trigger any extra validation we need to do. 385 try: --> 386 self._validate_conn(conn) 387 except (SocketTimeout, BaseSSLError) as e: 388 # Py2 raises this as a BaseSSLError, Py3 raises it as socket timeout. /opt/conda/lib/python3.7/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 1038 # Force connect early to allow us to validate the connection. 1039 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` -> 1040 conn.connect() 1041 1042 if not conn.is_verified: /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in connect(self) 356 def connect(self): 357 # Add certificate verification --> 358 self.sock = conn = self._new_conn() 359 hostname = self.host 360 tls_in_tls = False /opt/conda/lib/python3.7/site-packages/urllib3/connection.py in _new_conn(self) 173 try: 174 conn = connection.create_connection( --> 175 (self._dns_host, self.port), self.timeout, **extra_kw 176 ) 177 KeyboardInterrupt: ```
false
1,284,528,474
https://api.github.com/repos/huggingface/datasets/issues/4567
https://github.com/huggingface/datasets/pull/4567
4,567
Add evaluation data for amazon_reviews_multi
closed
2
2022-06-25T09:40:52
2023-09-24T09:35:22
2022-09-23T09:37:23
lewtun
[ "dataset contribution" ]
null
true
1,284,397,594
https://api.github.com/repos/huggingface/datasets/issues/4566
https://github.com/huggingface/datasets/issues/4566
4,566
Document link #load_dataset_enhancing_performance points to nowhere
closed
2
2022-06-25T01:18:19
2023-01-24T16:33:40
2023-01-24T16:33:40
subercui
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. ![image](https://user-images.githubusercontent.com/11674033/175752806-5b066b92-9d28-4771-9112-5c8606f07741.png) The [load_dataset_enhancing_performance](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#load_dataset_enhancing_performance) link [here](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.load_from_disk.keep_in_memory) points to nowhere, I guess it should point to https://huggingface.co/docs/datasets/v2.3.2/en/cache#improve-performance?
false
1,284,141,666
https://api.github.com/repos/huggingface/datasets/issues/4565
https://github.com/huggingface/datasets/issues/4565
4,565
Add UFSC OCPap dataset
closed
1
2022-06-24T20:07:54
2022-07-06T19:03:02
2022-07-06T19:03:02
johnnv1
[ "dataset request" ]
## Adding a Dataset - **Name:** UFSC OCPap: Papanicolaou Stained Oral Cytology Dataset (v4) - **Description:** The UFSC OCPap dataset comprises 9,797 labeled images of 1200x1600 pixels acquired from 5 slides of cancer diagnosed and 3 healthy of oral brush samples, from distinct patients. - **Paper:** https://dx.doi.org/10.2139/ssrn.4119212 - **Data:** https://data.mendeley.com/datasets/dr7ydy9xbk/1 - **Motivation:** real data of pap stained oral cytology samples Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,283,932,333
https://api.github.com/repos/huggingface/datasets/issues/4564
https://github.com/huggingface/datasets/pull/4564
4,564
Support streaming bookcorpus dataset
closed
1
2022-06-24T16:13:39
2022-07-06T09:34:48
2022-07-06T09:23:04
albertvillanova
[]
Support streaming bookcorpus dataset.
true
1,283,914,383
https://api.github.com/repos/huggingface/datasets/issues/4563
https://github.com/huggingface/datasets/pull/4563
4,563
Support streaming allocine dataset
closed
1
2022-06-24T15:55:03
2022-06-24T16:54:57
2022-06-24T16:44:41
albertvillanova
[]
Support streaming allocine dataset. Fix #4562.
true
1,283,779,557
https://api.github.com/repos/huggingface/datasets/issues/4562
https://github.com/huggingface/datasets/issues/4562
4,562
Dataset Viewer issue for allocine
closed
5
2022-06-24T13:50:38
2022-06-27T06:39:32
2022-06-24T16:44:41
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/allocine ### Description Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed: ``` Status code: 400 Exception: AttributeError Message: 'TarContainedFile' object has no attribute 'readable' ``` ### Owner No
false
1,283,624,242
https://api.github.com/repos/huggingface/datasets/issues/4561
https://github.com/huggingface/datasets/pull/4561
4,561
Add evaluation data to acronym_identification
closed
1
2022-06-24T11:17:33
2022-06-27T09:37:55
2022-06-27T08:49:22
lewtun
[]
null
true
1,283,558,873
https://api.github.com/repos/huggingface/datasets/issues/4560
https://github.com/huggingface/datasets/pull/4560
4,560
Add evaluation metadata to imagenet-1k
closed
2
2022-06-24T10:12:41
2023-09-24T09:35:32
2022-09-23T09:37:03
lewtun
[ "dataset contribution" ]
null
true
1,283,544,937
https://api.github.com/repos/huggingface/datasets/issues/4559
https://github.com/huggingface/datasets/pull/4559
4,559
Add action names in schema_guided_dstc8 dataset card
closed
1
2022-06-24T10:00:01
2022-06-24T10:54:28
2022-06-24T10:43:47
lhoestq
[]
As asked in https://huggingface.co/datasets/schema_guided_dstc8/discussions/1, I added the action names to the dataset card
true
1,283,479,650
https://api.github.com/repos/huggingface/datasets/issues/4558
https://github.com/huggingface/datasets/pull/4558
4,558
Add evaluation metadata to wmt14
closed
2
2022-06-24T09:08:54
2023-09-24T09:35:39
2022-09-23T09:36:50
lewtun
[ "dataset contribution" ]
null
true
1,283,473,889
https://api.github.com/repos/huggingface/datasets/issues/4557
https://github.com/huggingface/datasets/pull/4557
4,557
Add evaluation metadata to wmt16
closed
3
2022-06-24T09:04:23
2023-09-24T09:35:49
2022-09-23T09:36:32
lewtun
[ "dataset contribution" ]
Just to confirm: we should add this metadata via GitHub and not Hub PRs for canonical datasets, right?
true
1,283,462,881
https://api.github.com/repos/huggingface/datasets/issues/4556
https://github.com/huggingface/datasets/issues/4556
4,556
Dataset Viewer issue for conll2003
closed
1
2022-06-24T08:55:18
2022-06-24T09:50:39
2022-06-24T09:50:39
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/conll2003/viewer/conll2003/test ### Description Seems like a cache problem with this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/conll2003/__init__.py' ``` ### Owner No
false
1,283,451,651
https://api.github.com/repos/huggingface/datasets/issues/4555
https://github.com/huggingface/datasets/issues/4555
4,555
Dataset Viewer issue for xtreme
closed
1
2022-06-24T08:46:08
2022-06-24T09:50:45
2022-06-24T09:50:45
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/xtreme/viewer/PAN-X.de/test ### Description There seems to be a problem with the cache of this config / split: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/xtreme/349258adc25bb45e47de193222f95e68a44f7a7ab53c4283b3f007208a11bf7e/xtreme.py' ``` ### Owner No
false
1,283,369,453
https://api.github.com/repos/huggingface/datasets/issues/4554
https://github.com/huggingface/datasets/pull/4554
4,554
Fix WMT dataset loading issue and docs update (Re-opened)
closed
1
2022-06-24T07:26:16
2022-07-08T15:39:20
2022-07-08T15:27:44
khushmeeet
[]
This PR is a fix for #4354. Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and READMEs are updated for the corresponding datasets. Let me know if any additional changes are required. Thanks
true
1,282,779,560
https://api.github.com/repos/huggingface/datasets/issues/4553
https://github.com/huggingface/datasets/pull/4553
4,553
Stop dropping columns in to_tf_dataset() before we load batches
closed
4
2022-06-23T18:21:05
2022-07-04T19:00:13
2022-07-04T18:49:01
Rocketknight1
[]
`to_tf_dataset()` dropped unnecessary columns before loading batches from the dataset, but this is causing problems when using a transform, because the dropped columns might be needed to compute the transform. Since there's no real way to check which columns the transform might need, we skip dropping columns and instead drop keys from the batch after we load it. cc @amyeroberts and https://github.com/huggingface/notebooks/pull/202
true
1,282,615,646
https://api.github.com/repos/huggingface/datasets/issues/4552
https://github.com/huggingface/datasets/pull/4552
4,552
Tell users to upload on the hub directly
closed
2
2022-06-23T15:47:52
2022-06-26T15:49:46
2022-06-26T15:39:11
lhoestq
[]
As noted in https://github.com/huggingface/datasets/pull/4534, it is still not clear that it is recommended to add datasets on the Hugging Face Hub directly instead of GitHub, so I updated some docs. Moreover since users won't be able to get reviews from us on the Hub, I added a paragraph to tell users that they can open a discussion and tag `datasets` maintainers for reviews. Finally I removed the _previous good reasons_ to add a dataset on GitHub to only keep this one: > In some rare cases it makes more sense to open a PR on GitHub. For example when you are not the author of the dataset and there is no clear organization / namespace that you can put the dataset under. Does it sound good to you @albertvillanova @julien-c ?
true
1,282,534,807
https://api.github.com/repos/huggingface/datasets/issues/4551
https://github.com/huggingface/datasets/pull/4551
4,551
Perform hidden file check on relative data file path
closed
5
2022-06-23T14:49:11
2022-06-30T14:49:20
2022-06-30T14:38:18
mariosasko
[]
Fix #4549
true
1,282,374,441
https://api.github.com/repos/huggingface/datasets/issues/4550
https://github.com/huggingface/datasets/issues/4550
4,550
imdb source error
closed
1
2022-06-23T13:02:52
2022-06-23T13:47:05
2022-06-23T13:47:04
Muhtasham
[ "bug" ]
## Describe the bug imdb dataset not loading ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("imdb") ``` ## Expected results ## Actual results ```bash 06/23/2022 14:45:18 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source 06/23/2022 14:46:34 - INFO - datasets.utils.file_utils - HEAD request to http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz timed out, retrying... [1.0] ..... ConnectionError: Couldn't reach http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz (ConnectTimeout(MaxRetryError("HTTPConnectionPool(host='ai.stanford.edu', port=80): Max retries exceeded with url: /~amaas/data/sentiment/aclImdb_v1.tar.gz (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x7f2d750cf690>, 'Connection to ai.stanford.edu timed out. (connect timeout=100)'))"))) ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
false
1,282,312,975
https://api.github.com/repos/huggingface/datasets/issues/4549
https://github.com/huggingface/datasets/issues/4549
4,549
FileNotFoundError when passing a data_file inside a directory starting with double underscores
closed
2
2022-06-23T12:19:24
2022-06-30T14:38:18
2022-06-30T14:38:18
lhoestq
[ "bug" ]
Bug experienced in the `accelerate` CI: https://github.com/huggingface/accelerate/runs/7016055148?check_suite_focus=true This is related to https://github.com/huggingface/datasets/pull/4505 and the changes from https://github.com/huggingface/datasets/pull/4412
false
1,282,218,096
https://api.github.com/repos/huggingface/datasets/issues/4548
https://github.com/huggingface/datasets/issues/4548
4,548
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix
closed
1
2022-06-23T10:58:57
2022-06-30T10:15:32
2022-06-30T10:15:32
polinaeterna
[]
If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and is therefore ignored. This happens when a directory is structured as follows: ``` train/ file_1.jpg file_2.jpg test/ file_3.jpg file_4.jpg metadata.jsonl ``` or as follows: ``` train_file_1.jpg train_file_2.jpg test_file_3.jpg test_file_4.jpg metadata.jsonl ``` The same happens for HF repos, because it's ignored by the patterns [here](https://github.com/huggingface/datasets/blob/master/src/datasets/data_files.py#L29) @lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in imagefolder/audiofolder code? In `data_files.py` would be more general, but I don't know if there are any other cases when that might be needed.
false
1,282,160,517
https://api.github.com/repos/huggingface/datasets/issues/4547
https://github.com/huggingface/datasets/pull/4547
4,547
[CI] Fix some warnings
closed
4
2022-06-23T10:10:49
2022-06-28T14:10:57
2022-06-28T13:59:54
lhoestq
[]
There are some warnings in the CI that are annoying, so I tried to remove most of them.
true
1,282,093,288
https://api.github.com/repos/huggingface/datasets/issues/4546
https://github.com/huggingface/datasets/pull/4546
4,546
[CI] fixing seqeval install in ci by pinning setuptools-scm
closed
1
2022-06-23T09:24:37
2022-06-23T10:24:16
2022-06-23T10:13:44
lhoestq
[]
The latest setuptools-scm version supported on 3.6 is 6.4.2. However, for some reason circleci has version 7, which doesn't work. I fixed this by pinning the version of setuptools-scm in the circleci job. Fix https://github.com/huggingface/datasets/issues/4544
true
1,280,899,028
https://api.github.com/repos/huggingface/datasets/issues/4545
https://github.com/huggingface/datasets/pull/4545
4,545
Make DuplicateKeysError more user friendly [For Issue #2556]
closed
2
2022-06-22T21:01:34
2022-06-28T09:37:06
2022-06-28T09:26:04
VijayKalmath
[]
# What does this PR do? ## Summary *The DuplicateKeysError does not provide any information regarding the examples which have the same key.* *This information is very helpful for debugging the dataset generator script.* ## Additions - ## Changes - Changed `DuplicateKeysError Class` in `src/datasets/keyhash.py` to add the current index and duplicate_key_indices to the error message. - Changed `check_duplicate_keys` function in `src/datasets/arrow_writer.py` to find indices of examples with duplicate hashes if duplicate keys are found. ## Deletions - ## To do : - [x] Find way to find and print path `<Path to Dataset>` in Error message ## Issues Addressed : Fixes #2556
true
1,280,500,340
https://api.github.com/repos/huggingface/datasets/issues/4544
https://github.com/huggingface/datasets/issues/4544
4,544
[CI] seqeval installation fails sometimes on python 3.6
closed
0
2022-06-22T16:35:23
2022-06-23T10:13:44
2022-06-23T10:13:44
lhoestq
[]
The CI sometimes fails to install seqeval, which cause the `seqeval` metric tests to fail. The installation fails because of this error: ``` Collecting seqeval Downloading seqeval-1.2.2.tar.gz (43 kB) |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 10 kB 42.1 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 20 kB 53.3 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–Œ | 30 kB 67.2 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ | 40 kB 76.1 MB/s eta 0:00:01 |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 43 kB 10.0 MB/s Preparing metadata (setup.py) ... - error ERROR: Command errored out with exit status 1: command: /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"'; __file__='"'"'/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-pf54_vqy cwd: /tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/ Complete output (22 lines): Traceback (most recent call last): File "<string>", line 1, in <module> File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/setup.py", line 56, in <module> 'Programming Language :: Python :: Implementation :: PyPy' File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/__init__.py", line 143, in setup return distutils.core.setup(**attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/core.py", line 108, in setup _setup_distribution = dist = klass(attrs) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 442, in __init__ k: v for k, v in attrs.items() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/distutils/dist.py", line 281, in __init__ self.finalize_options() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/setuptools/dist.py", line 601, in finalize_options ep.load()(self, ep.name, value) File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2346, in load return self.resolve() File "/home/circleci/.pyenv/versions/3.6.15/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2352, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "/tmp/pip-install-1l96tbyj/seqeval_b31086f711d84743abe6905d2aa9dade/.eggs/setuptools_scm-7.0.2-py3.6.egg/setuptools_scm/__init__.py", line 5 from __future__ import annotations ^ SyntaxError: future feature annotations is not defined ---------------------------------------- WARNING: Discarding https://files.pythonhosted.org/packages/9d/2d/233c79d5b4e5ab1dbf111242299153f3caddddbb691219f363ad55ce783d/seqeval-1.2.2.tar.gz#sha256=f28e97c3ab96d6fcd32b648f6438ff2e09cfba87f05939da9b3970713ec56e6f (from https://pypi.org/simple/seqeval/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output. 
``` for example in https://app.circleci.com/pipelines/github/huggingface/datasets/12665/workflows/93878eb9-a923-4b35-b2e7-c5e9b22f10ad/jobs/75300 Here is a diff of the pip install logs until the error is reached: https://www.diffchecker.com/VkQDLeQT This could be caused by the latest updates of setuptools-scm
false
1,280,379,781
https://api.github.com/repos/huggingface/datasets/issues/4543
https://github.com/huggingface/datasets/pull/4543
4,543
[CI] Fix upstream hub test url
closed
2
2022-06-22T15:34:27
2022-06-22T16:37:40
2022-06-22T16:27:37
lhoestq
[]
Some tests were still using moon-staging instead of hub-ci. I also updated the token to use one dedicated to `datasets`.
true
1,280,269,445
https://api.github.com/repos/huggingface/datasets/issues/4542
https://github.com/huggingface/datasets/issues/4542
4,542
[to_tf_dataset] Use Feather for better compatibility with TensorFlow ?
open
48
2022-06-22T14:42:00
2022-10-11T08:45:45
null
lhoestq
[ "generic discussion" ]
To have better performance in TensorFlow, it is important to provide lists of data files in supported formats. For example sharded TFRecords datasets are extremely performant. This is because tf.data can better leverage parallelism in this case, and load one file at a time in memory. It seems that using `tensorflow_io` we could have something similar for `to_tf_dataset` if we provide sharded Feather files: https://www.tensorflow.org/io/api_docs/python/tfio/arrow/ArrowFeatherDataset Feather is a format almost equivalent to the Arrow IPC Stream format we're using in `datasets`: Feather V2 is equivalent to Arrow IPC File format, which is an extension of the stream format (it has an extra footer). Therefore we could store datasets as Feather instead of Arrow IPC Stream format without breaking the whole library. Here are a few points to explore - [ ] check the performance of ArrowFeatherDataset in tf.data - [ ] check what would change if we were to switch to Feather if needed, in particular check that those are fine: memory mapping, typing, writing, reading to python objects, etc. We would also need to implement sharding when loading a dataset (this will be done anyway for #546) cc @Rocketknight1 @gante feel free to comment in case I missed anything ! I'll share some files and scripts, so that we can benchmark performance of Feather files with tf.data
false
1,280,161,436
https://api.github.com/repos/huggingface/datasets/issues/4541
https://github.com/huggingface/datasets/pull/4541
4,541
Fix timestamp conversion from Pandas to Python datetime in streaming mode
closed
2
2022-06-22T13:40:01
2022-06-22T16:39:27
2022-06-22T16:29:09
lhoestq
[]
Arrow accepts both pd.Timestamp and datetime.datetime objects to create timestamp arrays. However, a timestamp array is always converted to datetime.datetime objects. This created an inconsistency between streaming and non-streaming, e.g. the `ett` dataset outputs datetime.datetime objects in non-streaming but pd.Timestamp in streaming. I fixed this by always converting pd.Timestamp to datetime.datetime during the example encoding step. I fixed the same issue for pd.Timedelta as well. Finally, I added an extra step of conversion for Series and DataFrame to take this into account in case such data are passed as Series or DataFrame. Fix https://github.com/huggingface/datasets/issues/4533 Related to https://github.com/huggingface/datasets-server/issues/397
true
1,280,142,942
https://api.github.com/repos/huggingface/datasets/issues/4540
https://github.com/huggingface/datasets/issues/4540
4,540
Avoid splitting by `.py` for the file.
closed
4
2022-06-22T13:26:55
2022-07-07T13:17:44
2022-07-07T13:17:44
espoirMur
[ "good first issue" ]
https://github.com/huggingface/datasets/blob/90b3a98065556fc66380cafd780af9b1814b9426/src/datasets/load.py#L272 Hello, Thank you for this library. I was using it and I hit one edge case: my home folder name ends with `.py` (it is `/home/espoir.py`), so anytime I run the code to load a local module, this line fails because after splitting it tries to save the code to my home directory. Steps to reproduce: - Have a home folder whose name ends with `.py` - Load a module from a local folder: `qa_dataset = load_dataset("src/data/build_qa_dataset.py")` - it fails. A possible workaround would be to use pathlib at the mentioned line: ` meta_path = Path(importable_local_file).parent.joinpath("metadata.json")` - this can alleviate the issue. Let me know what your thoughts are on this, and I can try to fix it with a PR.
false
1,279,779,829
https://api.github.com/repos/huggingface/datasets/issues/4539
https://github.com/huggingface/datasets/pull/4539
4,539
Replace deprecated logging.warn with logging.warning
closed
0
2022-06-22T08:32:29
2022-06-22T13:43:23
2022-06-22T12:51:51
hugovk
[]
Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)). * https://docs.python.org/3/library/logging.html#logging.Logger.warning * https://github.com/python/cpython/issues/57444
true
1,279,409,786
https://api.github.com/repos/huggingface/datasets/issues/4538
https://github.com/huggingface/datasets/issues/4538
4,538
Dataset Viewer issue for Pile of Law
closed
5
2022-06-22T02:48:40
2022-06-27T07:30:23
2022-06-26T22:26:22
Breakend
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/pile-of-law/pile-of-law ### Description Hi, I would like to turn off the dataset viewer for our dataset without enabling access requests. To comply with upstream dataset creator requests/licenses, we would like to make sure that the data is not indexed by search engines and so would like to turn off dataset previews. But we do not want to collect user emails because it would violate single blind review, allowing us to deduce potential reviewers' identities. Is there a way that we can turn off the dataset viewer without collecting identity information? Thanks so much! ### Owner Yes
false
1,279,144,310
https://api.github.com/repos/huggingface/datasets/issues/4537
https://github.com/huggingface/datasets/pull/4537
4,537
Fix WMT dataset loading issue and docs update
closed
2
2022-06-21T21:48:02
2022-06-24T07:05:43
2022-06-24T07:05:10
khushmeeet
[]
This PR is a fix for #4354. Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and READMEs are updated for the corresponding datasets. As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is that `tensorflow-text` is not supported on M1s and there is no supporting repo by Apple or Google. So, if I were required to perform local testing, I would not be able to do that. Let me know if any additional changes are required. Thanks
true
1,278,734,727
https://api.github.com/repos/huggingface/datasets/issues/4536
https://github.com/huggingface/datasets/pull/4536
4,536
Properly raise FileNotFound even if the dataset is private
closed
1
2022-06-21T17:05:50
2022-06-28T10:46:51
2022-06-28T10:36:10
lhoestq
[]
`tests/test_load.py::test_load_streaming_private_dataset` was failing because the hub now returns 401 when getting the HfApi.dataset_info of a dataset without authentication. `load_dataset` was raising ConnectionError, while it should be FileNotFoundError since it first checks for local files before checking the Hub. Moreover, when use_auth_token is not set (default is False), we should not pass `token=None` to HfApi.dataset_info, or it will use the local token by default - instead it should use no token. It's currently not possible to ask for no token to be used, so as a workaround I simply set token="no-token"
true
1,278,365,039
https://api.github.com/repos/huggingface/datasets/issues/4535
https://github.com/huggingface/datasets/pull/4535
4,535
Add `batch_size` parameter when calling `add_faiss_index` and `add_faiss_index_from_external_arrays`
closed
5
2022-06-21T12:18:49
2022-06-27T16:25:09
2022-06-27T16:14:36
alvarobartt
[]
Currently, even though the `batch_size` used when adding vectors to the FAISS index can be tweaked in `FaissIndex.add_vectors()`, the function `ArrowDataset.add_faiss_index` has neither a `batch_size` parameter to be propagated to the nested `FaissIndex.add_vectors` function nor `*args, **kwargs`, so this PR adds the `batch_size` parameter to both `ArrowDataset.add_faiss_index` and `ArrowDataset.add_faiss_index_from_external_arrays`. This is useful so as to tweak the `batch_size` according to the VM specifications.
true
1,277,897,197
https://api.github.com/repos/huggingface/datasets/issues/4534
https://github.com/huggingface/datasets/pull/4534
4,534
Add `tldr_news` dataset
closed
2
2022-06-21T05:02:43
2022-06-23T14:33:54
2022-06-21T14:21:11
JulesBelveze
[]
This PR aims at adding support for a news dataset: `tldr news`. This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter.
true
1,277,211,490
https://api.github.com/repos/huggingface/datasets/issues/4533
https://github.com/huggingface/datasets/issues/4533
4,533
Timestamp not returned as datetime objects in streaming mode
closed
0
2022-06-20T17:28:47
2022-06-22T16:29:09
2022-06-22T16:29:09
lhoestq
[ "streaming" ]
As reported in (internal) https://github.com/huggingface/datasets-server/issues/397 ```python >>> from datasets import load_dataset >>> dataset = load_dataset("ett", name="h2", split="test", streaming=True) >>> d = next(iter(dataset)) >>> d['start'] Timestamp('2016-07-01 00:00:00') ``` while loading in non-streaming mode it returns `datetime.datetime(2016, 7, 1, 0, 0)`
false
1,277,167,129
https://api.github.com/repos/huggingface/datasets/issues/4532
https://github.com/huggingface/datasets/pull/4532
4,532
Add Video feature
closed
3
2022-06-20T16:36:41
2022-11-10T16:59:51
2022-11-10T16:59:51
nateraw
[]
The following adds a `Video` feature for encoding/decoding videos on the fly from in memory bytes. It uses my own `encoded-video` library which is basically `pytorchvideo`'s encoded video but with all the `torch` specific stuff stripped out. Because of that, and because the tool I used under the hood is not very mature, I leave this as a draft idea that we can use to build off of.
true
1,277,054,172
https://api.github.com/repos/huggingface/datasets/issues/4531
https://github.com/huggingface/datasets/issues/4531
4,531
Dataset Viewer issue for CSV datasets
closed
2
2022-06-20T14:56:24
2022-06-21T08:28:46
2022-06-21T08:28:27
merveenoyan
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/scikit-learn/breast-cancer-wisconsin ### Description I'm populating CSV datasets [here](https://huggingface.co/scikit-learn) but the viewer is not enabled and it looks for a dataset loading script; the datasets aren't in the queue either. You can replicate the problem by simply uploading any CSV dataset. ### Owner Yes
false
1,276,884,962
https://api.github.com/repos/huggingface/datasets/issues/4530
https://github.com/huggingface/datasets/pull/4530
4,530
Add AudioFolder packaged loader
closed
10
2022-06-20T12:54:02
2022-08-22T14:36:49
2022-08-22T14:20:40
polinaeterna
[ "enhancement" ]
will close #3964 AudioFolder is almost identical to ImageFolder except for inferring labels is not the default behavior (`drop_labels` is set to True in config), the option of inferring them is preserved though. The weird thing is happening with the `test_data_files_with_metadata_and_archives` when `streaming` is `True`. Here is the log from the CI: ``` ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/datasets/features/audio.py:237: in _decode_non_mp3_path_like array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/util/decorators.py:88: in inner_f return f(*args, **kwargs) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:176: in load raise (exc) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/librosa/core/audio.py:155: in load context = sf.SoundFile(path) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:629: in __init__ self._file = self._open(file, mode_int, closefd) ../.pyenv/versions/3.6.15/lib/python3.6/site-packages/soundfile.py:1184: in _open "Error opening {0!r}: ".format(self.name)) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ err = 72 prefix = "Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: " def _error_check(err, prefix=""): """Pretty-print a numerical error code if there is an error.""" if err != 0: err_str = _snd.sf_error_number(err) > raise RuntimeError(prefix + _ffi.string(err_str).decode('utf-8', 'replace')) E RuntimeError: Error opening <zipfile.ZipExtFile name='audio_file.wav' mode='r' compress_type=deflate>: Error in WAV file. No 'data' chunk marker. ``` I hadn't been able to reproduce this locally until I created the same test environment (I mean with `pip install .[tests]`) with python3.6. The same env but with python3.8 passes the test! I didn't manage to figure out what's wrong, I also tried simply to replace the test wav file and still got the same error. Versions of `soundfile`, `librosa` and `libsndfile` are identical. Might it be something with zip compression? Sounds weird but I don't have any other ideas... TODO: - [x] align with #4622 - [x] documentation - [x] tests for AutoFolder?
true
1,276,729,303
https://api.github.com/repos/huggingface/datasets/issues/4529
https://github.com/huggingface/datasets/issues/4529
4,529
Ecoset
closed
3
2022-06-20T10:39:34
2023-10-26T09:12:32
2023-10-04T18:19:52
DiGyt
[ "dataset request" ]
## Adding a Dataset - **Name:** *Ecoset* - **Description:** *https://www.kietzmannlab.org/ecoset/* - **Paper:** *https://doi.org/10.1073/pnas.2011417118* - **Data:** *https://codeocean.com/capsule/9570390/tree/v1* - **Motivation:** **Ecoset** was created as a clean and ecologically valid alternative to **Imagenet**. It is a large image recognition dataset, similar to Imagenet in size and structure. However, the authors of ecoset claim several improvements over Imagenet, like: - more ecologically valid classes (e.g. not over-focussed on distinguishing different dog breeds) - less NSFW content - 'pre-packed image recognition models' that come with the dataset and can be used for validation of other models. I am working for one of the authors of the paper with the aim of bringing Ecoset to huggingface datasets. Therefore I can work on this issue personally, but could use some help from devs and experienced users if the dataset is of interest to them. I phrased some of my questions on [discuss.huggingface](https://discuss.huggingface.co/t/handling-large-image-datasets/19373).
false
1,276,679,155
https://api.github.com/repos/huggingface/datasets/issues/4528
https://github.com/huggingface/datasets/issues/4528
4,528
Memory leak when iterating a Dataset
closed
5
2022-06-20T10:03:14
2022-09-12T08:51:39
2022-09-12T08:51:39
NouamaneTazi
[ "bug" ]
## Describe the bug It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop) ## Steps to reproduce the bug ```python import gc import logging import time import pyarrow from datasets import load_dataset from tqdm import trange import os, psutil logging.basicConfig(level=logging.INFO) logger = logging.getLogger(__name__) process = psutil.Process(os.getpid()) print(process.memory_info().rss) # output: 633507840 bytes corpus = load_dataset("BeIR/msmarco", 'corpus', keep_in_memory=False, streaming=False)['corpus'] # or "BeIR/trec-covid" for a smaller dataset print(process.memory_info().rss) # output: 698601472 bytes logger.info("Applying method to all examples in all splits") for i in trange(0, len(corpus), 1000): batch = corpus[i:i+1000] data = pyarrow.total_allocated_bytes() if data > 0: logger.info(f"{i}/{len(corpus)}: {data}") print(process.memory_info().rss) # output: 3788247040 bytes del batch gc.collect() print(process.memory_info().rss) # output: 3788247040 bytes logger.info("Done...") time.sleep(100) ``` ## Expected results Limited memory usage, and memory to be freed after processing ## Actual results Memory leak ![test](https://user-images.githubusercontent.com/29777165/174578276-f2c37e6c-b5d8-4985-b4d8-8413eb2b3241.png) You can see how the memory allocation keeps increasing until it reaches a steady state when we hit the `time.sleep(100)`, which showcases that even the garbage collector couldn't free the allocated memory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,276,583,536
https://api.github.com/repos/huggingface/datasets/issues/4527
https://github.com/huggingface/datasets/issues/4527
4,527
Dataset Viewer issue for vadis/sv-ident
closed
1
2022-06-20T08:47:42
2022-06-21T16:42:46
2022-06-21T16:42:45
albertvillanova
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/vadis/sv-ident ### Description The dataset preview does not work: ``` Server Error Status code: 400 Exception: Status400Error Message: The dataset does not exist. ``` However, the dataset is streamable and works locally: ```python In [1]: from datasets import load_dataset; ds = load_dataset("sv-ident.py", split="train", streaming=True); item = next(iter(ds)); item Using custom data configuration default Out[1]: {'sentence': 'Our point, however, is that so long as downward (favorable) comparisons overwhelm the potential for unfavorable comparisons, system justification should be a likely outcome amongst the disadvantaged.', 'is_variable': 1, 'variable': ['exploredata-ZA5400_VarV66', 'exploredata-ZA5400_VarV53'], 'research_data': ['ZA5400'], 'doc_id': '73106', 'uuid': 'b9fbb80f-3492-4b42-b9d5-0254cc33ac10', 'lang': 'en'} ``` CC: @e-tornike ### Owner No
false
1,276,580,185
https://api.github.com/repos/huggingface/datasets/issues/4526
https://github.com/huggingface/datasets/issues/4526
4,526
split cache used when processing different split
open
2
2022-06-20T08:44:58
2022-06-28T14:04:58
null
gpucce
[ "bug" ]
## Describe the bug ``` ds1 = load_dataset('squad', split='validation') ds2 = load_dataset('squad', split='train') ds1 = ds1.map(some_function) ds2 = ds2.map(some_function) assert ds1 == ds2 ``` This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through ``` class myDataModule: def train_dataloader(self): ds = load_dataset('squad', split='train') ds = ds.map(some_function) return [ds] def val_dataloader(self): ds = load_dataset('squad', split="validation") ds = ds.map(some_function) return [ds] ``` I don't know if it depends on `pytorch_lightning` or `datasets`, but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue. If this is not enough to replicate, I will try and provide an MWE; I don't have time now so I thought I would open the issue first!
false
1,276,491,386
https://api.github.com/repos/huggingface/datasets/issues/4525
https://github.com/huggingface/datasets/issues/4525
4,525
Out of memory error on workers while running Beam+Dataflow
closed
10
2022-06-20T07:28:12
2024-10-09T16:09:50
2024-10-09T16:09:50
albertvillanova
[ "bug" ]
## Describe the bug While running the preprocessing of the natural_question dataset (see PR #4368), there is an issue for the "default" config (train+dev files). Previously we ran the preprocessing for the "dev" config (only dev files) with success. Train data files are larger than dev ones and apparently workers run out of memory while processing them. Any help/hint is welcome! Error message: ``` Data channel closed, unable to receive additional data from SDK sdk-0-0 ``` Info from the Diagnostics tab: ``` Out of memory: Killed process 1882 (python) total-vm:6041764kB, anon-rss:3290928kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:9520kB oom_score_adj:900 The worker VM had to shut down one or more processes due to lack of memory. ``` ## Additional information ### Stack trace ``` Traceback (most recent call last): File "/home/albert_huggingface_co/natural_questions/venv/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/commands/run_beam.py", line 127, in run builder.download_and_prepare( File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/datasets/builder.py", line 1389, in _download_and_prepare pipeline_results.wait_until_finish() File "/home/albert_huggingface_co/natural_questions/venv/lib/python3.9/site-packages/apache_beam/runners/dataflow/dataflow_runner.py", line 1667, in wait_until_finish raise DataflowRuntimeException( apache_beam.runners.dataflow.dataflow_runner.DataflowRuntimeException: Dataflow pipeline failed. State: FAILED, Error: Data channel closed, unable to receive additional data from SDK sdk-0-0 ``` ### Logs ``` Error message from worker: Data channel closed, unable to receive additional data from SDK sdk-0-0 Workflow failed. Causes: S30:train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/Read+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/GroupByKey/GroupByWindow+train/ReadAllFromText/ReadAllFiles/Reshard/ReshufflePerKey/FlatMap(restore_timestamps)+train/ReadAllFromText/ReadAllFiles/Reshard/RemoveRandomKeys+train/ReadAllFromText/ReadAllFiles/ReadRange+train/Map(_parse_example)+train/Encode+train/Count N. Examples+train/Get values/Values+train/Save to parquet/Write/WriteImpl/WindowInto(WindowIntoFn)+train/Save to parquet/Write/WriteImpl/WriteBundles+train/Save to parquet/Write/WriteImpl/Pair+train/Save to parquet/Write/WriteImpl/GroupByKey/Write failed., The job failed because a work item has failed 4 times. Look in previous log entries for the cause of each one of the 4 failures. For more information, see https://cloud.google.com/dataflow/docs/guides/common-errors. The work item was attempted on these workers: beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: Data channel closed, unable to receive additional data from SDK sdk-0-0, beamapp-alberthuggingface-06170554-5p23-harness-t4v9 Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-bwsj Root cause: The worker lost contact with the service., beamapp-alberthuggingface-06170554-5p23-harness-5052 Root cause: The worker lost contact with the service. ```
false
1,275,909,186
https://api.github.com/repos/huggingface/datasets/issues/4524
https://github.com/huggingface/datasets/issues/4524
4,524
Downloading via Apache Pipeline, client cancelled (org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException)
open
2
2022-06-18T23:36:45
2022-06-21T00:38:20
null
ddegenaro
[ "bug" ]
## Describe the bug When downloading some `wikipedia` languages (in particular, I'm having a hard time with Spanish, Cebuano, and Russian) via FlinkRunner, I encounter the exception in the title. I have been playing with package versions a lot, because unfortunately, the different dependencies required by these packages seem to be incompatible in terms of versions (dill and requests, for instance). It should be noted that the following code runs for several hours without issue, executing the `load_dataset()` function, before the exception occurs. ## Steps to reproduce the bug ```python # bash commands !pip install datasets !pip install apache-beam[interactive] !pip install mwparserfromhell !pip install dill==0.3.5.1 !pip install requests==2.23.0 # imports import os from datasets import load_dataset import apache_beam as beam import mwparserfromhell from google.colab import drive import dill import requests # mount drive drive_dir = os.path.join(os.getcwd(), 'drive') drive.mount(drive_dir) # confirming the versions of these two packages are the ones that are suggested by the outputs from the bash commands print(dill.__version__) print(requests.__version__) lang = 'es' # or 'ru' or 'ceb' - these are the ones causing the issue lang_dir = os.path.join(drive_dir, 'path/to/my/folder', lang) if not os.path.exists(lang_dir): x = None x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink', split='train') x.save_to_disk(lang_dir) ``` ## Expected results Although some warnings are generally produced by this code (run in Colab Notebook), most languages I've tried have been successfully downloaded. It should simply go through without issue, but for these languages, I am continually encountering this error. ## Actual results Traceback below: ``` Exception in thread run_worker_3-1: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 234, in run for work_request in self._control_stub.Control(get_responses()): File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next() File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "Socket closed" debug_error_string = "{"created":"@1655593643.871830638","description":"Error received from peer ipv4:127.0.0.1:44441","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Socket closed","grpc_status":14}" > Traceback (most recent call last): File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", 
line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute response = task() File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda> lambda: self.create_worker().do_instruction(request), request) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction getattr(request, request_type), request.instruction_id) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle bundle_processor.process_bundle(instruction_id)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle element.data) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded self.output(decoded_value) File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw 
continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ERROR:apache_beam.runners.worker.sdk_worker:Error processing instruction 26. Original traceback is Traceback (most recent call last): File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 267, in _execute response = task() File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 340, in <lambda> lambda: self.create_worker().do_instruction(request), request) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 581, in do_instruction getattr(request, request_type), request.instruction_id) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 618, in process_bundle bundle_processor.process_bundle(instruction_id)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 996, in process_bundle element.data) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 221, in process_encoded self.output(decoded_value) File "apache_beam/runners/worker/operations.py", line 346, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 348, in apache_beam.runners.worker.operations.Operation.output File "apache_beam/runners/worker/operations.py", line 215, in apache_beam.runners.worker.operations.SingletonConsumerSet.receive File "apache_beam/runners/worker/operations.py", line 707, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/worker/operations.py", line 708, in apache_beam.runners.worker.operations.DoOperation.process File "apache_beam/runners/common.py", line 1200, in apache_beam.runners.common.DoFnRunner.process File 
"apache_beam/runners/common.py", line 1281, in apache_beam.runners.common.DoFnRunner._reraise_augmented File "apache_beam/runners/common.py", line 1198, in apache_beam.runners.common.DoFnRunner.process File "apache_beam/runners/common.py", line 718, in apache_beam.runners.common.PerWindowInvoker.invoke_process File "apache_beam/runners/common.py", line 782, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/bundle_processor.py", line 426, in __getitem__ self._cache[target_window] = self._side_input_data.view_fn(raw_view) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 391, in <lambda> lambda iterable: from_runtime_iterable(iterable, view_options)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/pvalue.py", line 512, in _from_runtime_iterable head = list(itertools.islice(it, 2)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1228, in _lazy_iterator self._underlying.get_raw(state_key, continuation_token)) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1019, in get_raw continuation_token=continuation_token))) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/sdk_worker.py", line 1060, in _blocking_request raise RuntimeError(response.error) RuntimeError: Unknown process bundle instruction id '26' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles'] ERROR:root:org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled ERROR:apache_beam.runners.worker.data_plane:Failed to read inputs in the data plane. Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs for elements in elements_iterator: File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next() File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.CANCELLED details = "Multiplexer hanging up" debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}" > Exception in thread read_grpc_client_inputs: Traceback (most recent call last): File "/usr/lib/python3.7/threading.py", line 926, in _bootstrap_inner self.run() File "/usr/lib/python3.7/threading.py", line 870, in run self._target(*self._args, **self._kwargs) File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 651, in <lambda> target=lambda: self._read_inputs(elements_iterator), File "/usr/local/lib/python3.7/dist-packages/apache_beam/runners/worker/data_plane.py", line 634, in _read_inputs for elements in elements_iterator: File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 426, in __next__ return self._next() File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in _next raise self grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with: status = StatusCode.CANCELLED details = "Multiplexer hanging up" debug_error_string = "{"created":"@1655593654.436885887","description":"Error received from peer 
ipv4:127.0.0.1:43263","file":"src/core/lib/surface/call.cc","file_line":952,"grpc_message":"Multiplexer hanging up","grpc_status":1}" > --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) [/tmp/ipykernel_219/3869142325.py](https://localhost:8080/#) in <module> 18 x = None 19 x = load_dataset('wikipedia', '20220301.' + lang, beam_runner='Flink', ---> 20 split='train') 21 x.save_to_disk(lang_dir) 3 frames [/usr/local/lib/python3.7/dist-packages/apache_beam/runners/portability/portable_runner.py](https://localhost:8080/#) in wait_until_finish(self, duration) 604 605 if self._runtime_exception: --> 606 raise self._runtime_exception 607 608 return self._state RuntimeError: Pipeline BeamApp-root-0618220708-b3b59a0e_d8efcf67-9119-4f76-b013-70de7b29b54d failed in state FAILED: org.apache.beam.vendor.grpc.v1p43p2.io.grpc.StatusRuntimeException: CANCELLED: client cancelled ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
false
1,275,002,639
https://api.github.com/repos/huggingface/datasets/issues/4523
https://github.com/huggingface/datasets/pull/4523
4,523
Update download url and improve card of `cats_vs_dogs` dataset
closed
1
2022-06-17T12:59:44
2022-06-21T14:23:26
2022-06-21T14:13:08
mariosasko
[]
Improve the download URL (reported here: https://huggingface.co/datasets/cats_vs_dogs/discussions/1), remove the `image_file_path` column (not used in Transformers, so it should be safe) and add more info to the card.
true
1,274,929,328
https://api.github.com/repos/huggingface/datasets/issues/4522
https://github.com/huggingface/datasets/issues/4522
4,522
Try to reduce the number of datasets that require manual download
open
0
2022-06-17T11:42:03
2022-06-17T11:52:48
null
severo
[]
> Currently, 41 canonical datasets require manual download. I checked their scripts and I'm pretty sure this number can be reduced to ≈ 30 by not relying on bash scripts to download data, hosting data directly on the Hub when the license permits, etc. Then, we will mostly be left with datasets with restricted access, which we can ignore. (from https://github.com/huggingface/datasets-server/issues/12#issuecomment-1026920432)
false
1,274,919,437
https://api.github.com/repos/huggingface/datasets/issues/4521
https://github.com/huggingface/datasets/issues/4521
4,521
Datasets method `.map` not hashing
closed
3
2022-06-17T11:31:10
2022-08-04T12:08:16
2022-06-28T13:23:05
sanchit-gandhi
[ "bug" ]
## Describe the bug Datasets method `.map` not hashing, even with an empty no-op function ## Steps to reproduce the bug ```python from datasets import load_dataset # download 9MB dummy dataset ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean") def prepare_dataset(batch): return(batch) ds = ds.map( prepare_dataset, num_proc=1, desc="preprocess train dataset", ) ``` ## Expected results Hashed and cached dataset preprocessing ## Actual results Does not hash properly: ``` Parameter 'function'=<function prepare_dataset at 0x7fccb68e9280> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.31 - Python version: 3.9.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 cc @lhoestq
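A quick aside on the hashing report above: the fingerprinting step that fails inside `.map` can be reproduced directly with `datasets.fingerprint.Hasher.hash` (the same utility shown in the dataclass report further down). The sketch below is not part of the original issue and only assumes that public `Hasher` API; it simply surfaces whether a transform can be hashed before handing it to `.map`.

```python
# Minimal sketch (not from the report): probe the fingerprinting step on its own.
# If this raises, `.map` falls back to a random fingerprint and caching is skipped.
from datasets.fingerprint import Hasher

def prepare_dataset(batch):
    return batch

try:
    print(Hasher.hash(prepare_dataset))  # prints a deterministic hash on success
except Exception as err:
    print(f"transform could not be hashed: {err}")
```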
false
1,274,879,180
https://api.github.com/repos/huggingface/datasets/issues/4520
https://github.com/huggingface/datasets/issues/4520
4,520
Failure to hash `dataclasses` - results in functions that cannot be hashed or cached in `.map`
closed
2
2022-06-17T10:47:17
2022-06-28T14:47:17
2022-06-28T14:04:29
sanchit-gandhi
[ "bug" ]
Dataclasses cannot be hashed. As a result, they cannot be hashed or cached if used in the `.map` method. Dataclasses are used extensively in Transformers examples scripts: (c.f. [CTC example](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py)). Since dataclasses cannot be hashed, one has to define separate variables prior to passing dataclass attributes to the `.map` method: ```python phoneme_language = data_args.phoneme_language ``` in the example https://github.com/huggingface/transformers/blob/3c7e56fbb11f401de2528c1dcf0e282febc031cd/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L603-L630 ## Steps to reproduce the bug ```python from dataclasses import dataclass, field from datasets.fingerprint import Hasher @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ phoneme_language: str = field( default=None, metadata={"help": "The name of the phoneme language to use."} ) data_args = DataTrainingArguments(phoneme_language ="foo") Hasher.hash(data_args) phoneme_language = data_args.phoneme_language Hasher.hash(phoneme_language) ``` ## Expected results A hash. ## Actual results <details> <summary> Traceback </summary> ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) Input In [1], in <cell line: 16>() 10 phoneme_language: str = field( 11 default=None, metadata={"help": "The name of the phoneme language to use."} 12 ) 14 data_args = DataTrainingArguments(phoneme_language ="foo") ---> 16 Hasher.hash(data_args) 18 phoneme_language = data_args. phoneme_language 20 Hasher.hash(phoneme_language) File ~/datasets/src/datasets/fingerprint.py:237, in Hasher.hash(cls, value) 235 return cls.dispatch[type(value)](cls, value) 236 else: --> 237 return cls.hash_default(value) File ~/datasets/src/datasets/fingerprint.py:230, in Hasher.hash_default(cls, value) 228 @classmethod 229 def hash_default(cls, value: Any) -> str: --> 230 return cls.hash_bytes(dumps(value)) File ~/datasets/src/datasets/utils/py_utils.py:564, in dumps(obj) 562 file = StringIO() 563 with _no_cache_fields(obj): --> 564 dump(obj, file) 565 return file.getvalue() File ~/datasets/src/datasets/utils/py_utils.py:539, in dump(obj, file) 537 def dump(obj, file): 538 """pickle an object to a file""" --> 539 Pickler(file, recurse=True).dump(obj) 540 return File ~/hf/lib/python3.8/site-packages/dill/_dill.py:620, in Pickler.dump(self, obj) 618 raise PicklingError(msg) 619 else: --> 620 StockPickler.dump(self, obj) 621 return File /usr/lib/python3.8/pickle.py:487, in _Pickler.dump(self, obj) 485 if self.proto >= 4: 486 self.framer.start_framing() --> 487 self.save(obj) 488 self.write(STOP) 489 self.framer.end_framing() File /usr/lib/python3.8/pickle.py:603, in _Pickler.save(self, obj, save_persistent_id) 599 raise PicklingError("Tuple returned by %s must have " 600 "two to six elements" % reduce) 602 # Save the reduce() output and finally memoize the object --> 603 self.save_reduce(obj=obj, *rv) File /usr/lib/python3.8/pickle.py:687, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 684 raise PicklingError( 685 "args[0] from __newobj__ args has the wrong class") 686 args = args[1:] --> 687 save(cls) 688 save(args) 689 write(NEWOBJ) File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: 
--> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1838, in save_type(pickler, obj, postproc_list) 1836 postproc_list = [] 1837 postproc_list.append((setattr, (obj, '__qualname__', obj_name))) -> 1838 _save_with_postproc(pickler, (_create_type, ( 1839 type(obj), obj.__name__, obj.__bases__, _dict 1840 )), obj=obj, postproc_list=postproc_list) 1841 log.info("# %s" % _t) 1842 else: File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1140, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1137 pickler._postproc[id(obj)] = postproc_list 1139 # TODO: Use state_setter in Python 3.8 to allow for faster cPickle implementations -> 1140 pickler.save_reduce(*reduction, obj=obj) 1142 if is_pickler_dill: 1143 # pickler.x -= 1 1144 # print(pickler.x*' ', 'pop', obj, id(obj)) 1145 postproc = pickler._postproc.pop(id(obj)) File /usr/lib/python3.8/pickle.py:692, in _Pickler.save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj) 690 else: 691 save(func) --> 692 save(args) 693 write(REDUCE) 695 if obj is not None: 696 # If the object is already in the memo, this means it is 697 # recursive. In this case, throw away everything we put on the 698 # stack, and fetch the object back from the memo. File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File /usr/lib/python3.8/pickle.py:901, in _Pickler.save_tuple(self, obj) 899 write(MARK) 900 for element in obj: --> 901 save(element) 903 if id(obj) in memo: 904 # Subtle. d was not in memo when we entered save_tuple(), so 905 # the process of saving the tuple's elements must have saved (...) 909 # could have been done in the "for element" loop instead, but 910 # recursive tuples are a rare thing. 
911 get = self.get(memo[id(obj)][0]) File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1251, in save_module_dict(pickler, obj) 1248 if is_dill(pickler, child=False) and pickler._session: 1249 # we only care about session the first pass thru 1250 pickler._first_pass = False -> 1251 StockPickler.save_dict(pickler, obj) 1252 log.info("# D2") 1253 return File /usr/lib/python3.8/pickle.py:971, in _Pickler.save_dict(self, obj) 968 self.write(MARK + DICT) 970 self.memoize(obj) --> 971 self._batch_setitems(obj.items()) File /usr/lib/python3.8/pickle.py:997, in _Pickler._batch_setitems(self, items) 995 for k, v in tmp: 996 save(k) --> 997 save(v) 998 write(SETITEMS) 999 elif n: File /usr/lib/python3.8/pickle.py:560, in _Pickler.save(self, obj, save_persistent_id) 558 f = self.dispatch.get(t) 559 if f is not None: --> 560 f(self, obj) # Call unbound method with explicit self 561 return 563 # Check private dispatch table if any, or else 564 # copyreg.dispatch_table File ~/datasets/src/datasets/utils/py_utils.py:862, in save_function(pickler, obj) 859 if state_dict: 860 state = state, state_dict --> 862 dill._dill._save_with_postproc( 863 pickler, 864 ( 865 dill._dill._create_function, 866 (obj.__code__, globs, obj.__name__, obj.__defaults__, closure), 867 state, 868 ), 869 obj=obj, 870 postproc_list=postproc_list, 871 ) 872 else: 873 closure = obj.func_closure File ~/hf/lib/python3.8/site-packages/dill/_dill.py:1153, in _save_with_postproc(pickler, reduction, is_pickler_dill, obj, postproc_list) 1151 dest, source = reduction[1] 1152 if source: -> 1153 pickler.write(pickler.get(pickler.memo[id(dest)][0])) 1154 pickler._batch_setitems(iter(source.items())) 1155 else: 1156 # Updating with an empty dictionary. Same as doing nothing. KeyError: 140434581781568 ``` </details> ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.3.dev0 - Platform: Linux-5.11.0-1028-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.2 cc @lhoestq
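To make the workaround described in the report above concrete, here is a hedged sketch of the variable-extraction pattern: the dataclass attribute is copied into a plain string before the mapped function is defined, so the function closes over a picklable value instead of the dataclass instance. The dataset name is borrowed from the `.map` report above purely for illustration, and `prepare_dataset` is a placeholder.

```python
# Illustrative sketch of the workaround, not code from the Transformers example:
# bind the dataclass attribute to a plain variable before defining the transform.
from dataclasses import dataclass, field
from datasets import load_dataset

@dataclass
class DataTrainingArguments:
    phoneme_language: str = field(
        default=None, metadata={"help": "The name of the phoneme language to use."}
    )

data_args = DataTrainingArguments(phoneme_language="foo")

# The plain string can be hashed; the dataclass instance cannot.
phoneme_language = data_args.phoneme_language

def prepare_dataset(batch):
    batch["phoneme_language"] = phoneme_language  # closes over the string only
    return batch

# Placeholder corpus (the same dummy dataset used in the `.map` report above).
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
ds = ds.map(prepare_dataset)  # fingerprinting succeeds, so the result is cached
```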
false
1,274,110,623
https://api.github.com/repos/huggingface/datasets/issues/4519
https://github.com/huggingface/datasets/pull/4519
4,519
Create new sections for audio and vision in guides
closed
2
2022-06-16T21:38:24
2022-07-07T15:36:37
2022-07-07T15:24:58
stevhliu
[ "documentation" ]
This PR creates separate sections in the guides for audio, vision, text, and general usage so it is easier for users to find loading, processing, or sharing guides specific to the dataset type they're working with. It'll also allow us to scale the docs to additional dataset types - like time series, tabular, etc. - while keeping our docs information architecture. Some other changes include: - ~Experimented with decorating text with some CSS to highlight guides specific to each modality. Hopefully, it'll be easier for users to find and realize that these different docs exist!~ Will experiment with this in a different PR. - Added deprecation warning for Metrics and redirect to Evaluate. - Updated `set_format` section to recommend using the new `to_tf_dataset` function if you need to convert to a TensorFlow dataset. - Reorganized `toctree` to nest general usage, audio, vision, and text sections under the how-to guides. - A quick review and edit to the Load and Process docs for clarity.
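Since the description above recommends `to_tf_dataset` over `set_format` for building TensorFlow pipelines, a small hedged sketch follows. The dataset, column selection, and collator are placeholders, and the keyword arguments are only indicative because the `to_tf_dataset` signature has evolved across `datasets` releases.

```python
# Hedged sketch of the `to_tf_dataset` recommendation; arguments are illustrative.
from datasets import load_dataset
from transformers import DefaultDataCollator  # assumes Transformers is installed

ds = load_dataset("glue", "mrpc", split="validation")  # placeholder dataset
tf_ds = ds.to_tf_dataset(
    columns=["idx", "label"],  # placeholder numeric columns
    batch_size=8,
    shuffle=False,
    collate_fn=DefaultDataCollator(return_tensors="tf"),
)
for batch in tf_ds.take(1):
    print({k: v.shape for k, v in batch.items()})
```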
true