| Column | Type | Details |
| --- | --- | --- |
| url | stringlengths | 61–61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 75–75 |
| comments_url | stringlengths | 70–70 |
| events_url | stringlengths | 68–68 |
| html_url | stringlengths | 49–51 |
| id | int64 | 1.61B–1.64B |
| node_id | stringlengths | 18–19 |
| number | int64 | 5.6k–5.66k |
| title | stringlengths | 12–113 |
| user | dict | |
| labels | list | |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | |
| milestone | null | |
| comments | int64 | 0–15 |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | stringclasses | 3 values |
| active_lock_reason | null | |
| body | stringlengths | 10–19.9k |
| reactions | dict | |
| timeline_url | stringlengths | 70–70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 2 values |
| draft | bool | 2 classes |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/5660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5660/comments
https://api.github.com/repos/huggingface/datasets/issues/5660/events
https://github.com/huggingface/datasets/issues/5660
1,635,543,646
I_kwDODunzps5hfGpe
5,660
integration with imbalanced-learn
{ "login": "tansaku", "id": 30216, "node_id": "MDQ6VXNlcjMwMjE2", "avatar_url": "https://avatars.githubusercontent.com/u/30216?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tansaku", "html_url": "https://github.com/tansaku", "followers_url": "https://api.github.com/users/tansaku/followers", "following_url": "https://api.github.com/users/tansaku/following{/other_user}", "gists_url": "https://api.github.com/users/tansaku/gists{/gist_id}", "starred_url": "https://api.github.com/users/tansaku/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tansaku/subscriptions", "organizations_url": "https://api.github.com/users/tansaku/orgs", "repos_url": "https://api.github.com/users/tansaku/repos", "events_url": "https://api.github.com/users/tansaku/events{/privacy}", "received_events_url": "https://api.github.com/users/tansaku/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2023-03-22T11:05:17"
"2023-03-22T11:05:17"
null
NONE
null
### Feature request

Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets?

### Motivation

I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I've looked online, asked gpt-4, but so far not making much progress.

### Your contribution

If I can get this working myself I can submit a PR with example code to go in the docs
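A minimal interop sketch (editorial addition, not an existing `datasets` API): round-trip through pandas, since imbalanced-learn's resamplers operate on array-likes/DataFrames. Column names here are made up for the example.

```python
# Hypothetical example: balance a labeled Dataset with imbalanced-learn via pandas.
from datasets import Dataset
from imblearn.over_sampling import RandomOverSampler

ds = Dataset.from_dict({"feature": [0.1, 0.2, 0.3, 0.4, 0.5], "label": [0, 0, 0, 0, 1]})
df = ds.to_pandas()
X, y = df.drop(columns="label"), df["label"]

# Oversample the minority class until the classes are balanced.
X_res, y_res = RandomOverSampler(random_state=0).fit_resample(X, y)
balanced = Dataset.from_pandas(X_res.assign(label=y_res.to_numpy()), preserve_index=False)
print(balanced["label"])  # four 0s and four 1s
```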
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5660/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5660/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5659/comments
https://api.github.com/repos/huggingface/datasets/issues/5659/events
https://github.com/huggingface/datasets/issues/5659
1,635,447,540
I_kwDODunzps5hevL0
5,659
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
{ "login": "sanchit-gandhi", "id": 93869735, "node_id": "U_kgDOBZhWpw", "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sanchit-gandhi", "html_url": "https://github.com/sanchit-gandhi", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
"2023-03-22T10:07:33"
"2023-03-22T13:52:11"
null
CONTRIBUTOR
null
### Describe the bug

I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4.

The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file type.

The installation guide suggests that `libsndfile` is bundled in when `soundfile` is pip installed:

https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/docs/source/installation.md?plain=1#L70-L71

However, just pip installing `soundfile==0.12.1` throws an error that `libsndfile` is missing:

```
pip install soundfile==0.12.1
```

Then:

```python
>>> import soundfile
>>> soundfile.__libsndfile_version__
```

<details>
<summary> Traceback (most recent call last): </summary>

```
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 161, in <module>
    import _soundfile_data  # ImportError if this doesn't exist
ModuleNotFoundError: No module named '_soundfile_data'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 170, in <module>
    raise OSError('sndfile library not found using ctypes.util.find_library')
OSError: sndfile library not found using ctypes.util.find_library

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/sanchitgandhi/hf/lib/python3.8/site-packages/soundfile.py", line 192, in <module>
    _snd = _ffi.dlopen(_explicit_libname)
OSError: cannot load library 'libsndfile.so': libsndfile.so: cannot open shared object file: No such file or directory
```

</details>

Thus, I've followed the official instructions for installing the `soundfile` package from https://github.com/bastibe/python-soundfile#installation, which states that `libsndfile` needs to be installed separately as:

```
pip install --upgrade soundfile
sudo apt install libsndfile1
```

We can now import `soundfile`:

```python
>>> import soundfile
>>> soundfile.__version__
'0.12.1'
>>> soundfile.__libsndfile_version__
'1.0.28'
```

We see that we have `soundfile==0.12.1`, which matches the `datasets[audio]` package constraints:

https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/setup.py#L144-L147

But we have `libsndfile==1.0.28`, which is too low for decoding mp3 files:

https://github.com/huggingface/datasets/blob/e1af108015e43f9df8734a1faeeaeb9eafce3971/src/datasets/config.py#L136-L138

Updating/upgrading `libsndfile` doesn't change this:

```
sudo apt-get update
sudo apt-get upgrade
```

Is there any other suggestion for how to get a compatible `libsndfile` version? Currently, the version bundled with Ubuntu `apt-get` is too low for decoding mp3 files.

Maybe we could add this under `setup.py` such that we install the correct `libsndfile` version when we do `pip install datasets[audio]`? IMO this would help circumvent such version issues.

### Steps to reproduce the bug

Environment described above. Loading mp3 files:

```python
from datasets import load_dataset

common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
print(next(iter(common_voice_es)))
```

```python
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[4], line 2
      1 common_voice_es = load_dataset("common_voice", "es", split="validation", streaming=True)
----> 2 print(next(iter(common_voice_es)))

File ~/datasets/src/datasets/iterable_dataset.py:941, in IterableDataset.__iter__(self)
    937 for key, example in ex_iterable:
    938     if self.features:
    939         # `IterableDataset` automatically fills missing columns with None.
    940         # This is done with `_apply_feature_types_on_example`.
--> 941         yield _apply_feature_types_on_example(
    942             example, self.features, token_per_repo_id=self._token_per_repo_id
    943         )
    944     else:
    945         yield example

File ~/datasets/src/datasets/iterable_dataset.py:700, in _apply_feature_types_on_example(example, features, token_per_repo_id)
    698 encoded_example = features.encode_example(example)
    699 # Decode example for Audio feature, e.g.
--> 700 decoded_example = features.decode_example(encoded_example, token_per_repo_id=token_per_repo_id)
    701 return decoded_example

File ~/datasets/src/datasets/features/features.py:1864, in Features.decode_example(self, example, token_per_repo_id)
   1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
   1851     """Decode example with custom feature decoding.
   1852
   1853     Args:
   (...)
   1861         `dict[str, Any]`
   1862     """
-> 1864     return {
   1865         column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
   1866         if self._column_requires_decoding[column_name]
   1867         else value
   1868         for column_name, (feature, value) in zip_dict(
   1869             {key: value for key, value in self.items() if key in example}, example
   1870         )
   1871     }

File ~/datasets/src/datasets/features/features.py:1865, in <dictcomp>(.0)
   1850 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None):
   1851     """Decode example with custom feature decoding.
   1852
   1853     Args:
   (...)
   1861         `dict[str, Any]`
   1862     """
   1864     return {
-> 1865         column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id)
   1866         if self._column_requires_decoding[column_name]
   1867         else value
   1868         for column_name, (feature, value) in zip_dict(
   1869             {key: value for key, value in self.items() if key in example}, example
   1870         )
   1871     }

File ~/datasets/src/datasets/features/features.py:1308, in decode_nested_example(schema, obj, token_per_repo_id)
   1305 elif isinstance(schema, (Audio, Image)):
   1306     # we pass the token to read and decode files from private repositories in streaming mode
   1307     if obj is not None and schema.decode:
-> 1308         return schema.decode_example(obj, token_per_repo_id=token_per_repo_id)
   1309     return obj

File ~/datasets/src/datasets/features/audio.py:167, in Audio.decode_example(self, value, token_per_repo_id)
    162     raise RuntimeError(
    163         "Decoding 'opus' files requires system library 'libsndfile'>=1.0.31, "
    164         'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
    165     )
    166 elif not config.IS_MP3_SUPPORTED and audio_format == "mp3":
--> 167     raise RuntimeError(
    168         "Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, "
    169         'You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`. '
    170     )
    172 if file is None:
    173     token_per_repo_id = token_per_repo_id or {}

RuntimeError: Decoding 'mp3' files requires system library 'libsndfile'>=1.1.0, You can try to update `soundfile` python library: `pip install "soundfile>=0.12.1"`.
```

### Expected behavior

Load mp3 files!

### Environment info

- `datasets` version: 2.10.2.dev0
- Platform: Linux-5.13.0-1023-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Soundfile version: 0.12.1
- Libsndfile version: 1.0.28
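A small diagnostic sketch (editorial addition, assuming only the `soundfile` attributes already shown above): check the bundled `libsndfile` version before trying to decode mp3s.

```python
# Check whether the installed libsndfile is new enough for mp3 decoding.
import soundfile

required = (1, 1, 0)  # `datasets` requires libsndfile>=1.1.0 for mp3 support
installed = tuple(int(part) for part in soundfile.__libsndfile_version__.split(".")[:3])
if installed < required:
    print(f"libsndfile {soundfile.__libsndfile_version__} is too old to decode mp3 files")
```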
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5659/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5659/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5658/comments
https://api.github.com/repos/huggingface/datasets/issues/5658/events
https://github.com/huggingface/datasets/pull/5658
1,634,867,204
PR_kwDODunzps5MmJe0
5,658
docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict
{ "login": "connor-henderson", "id": 78612354, "node_id": "MDQ6VXNlcjc4NjEyMzU0", "avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/connor-henderson", "html_url": "https://github.com/connor-henderson", "followers_url": "https://api.github.com/users/connor-henderson/followers", "following_url": "https://api.github.com/users/connor-henderson/following{/other_user}", "gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}", "starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions", "organizations_url": "https://api.github.com/users/connor-henderson/orgs", "repos_url": "https://api.github.com/users/connor-henderson/repos", "events_url": "https://api.github.com/users/connor-henderson/events{/privacy}", "received_events_url": "https://api.github.com/users/connor-henderson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-03-22T00:12:18"
"2023-03-22T00:14:05"
null
NONE
null
Closes #5653 @mariosasko
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5658/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5658/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5658", "html_url": "https://github.com/huggingface/datasets/pull/5658", "diff_url": "https://github.com/huggingface/datasets/pull/5658.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5658.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5656/comments
https://api.github.com/repos/huggingface/datasets/issues/5656/events
https://github.com/huggingface/datasets/pull/5656
1,634,156,563
PR_kwDODunzps5Mjxoo
5,656
Fix `fsspec.open` when using an HTTP proxy
{ "login": "bryant1410", "id": 3905501, "node_id": "MDQ6VXNlcjM5MDU1MDE=", "avatar_url": "https://avatars.githubusercontent.com/u/3905501?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bryant1410", "html_url": "https://github.com/bryant1410", "followers_url": "https://api.github.com/users/bryant1410/followers", "following_url": "https://api.github.com/users/bryant1410/following{/other_user}", "gists_url": "https://api.github.com/users/bryant1410/gists{/gist_id}", "starred_url": "https://api.github.com/users/bryant1410/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bryant1410/subscriptions", "organizations_url": "https://api.github.com/users/bryant1410/orgs", "repos_url": "https://api.github.com/users/bryant1410/repos", "events_url": "https://api.github.com/users/bryant1410/events{/privacy}", "received_events_url": "https://api.github.com/users/bryant1410/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-03-21T15:23:29"
"2023-03-22T13:55:30"
null
CONTRIBUTOR
null
Most HTTP(S) downloads from this library support proxies automatically by reading the `HTTP_PROXY` environment variable (et al.) because `requests` is widely used. However, some parts of the code use `fsspec`, which in turn uses `aiohttp` for HTTP(S) requests (as opposed to `requests`), and `aiohttp` doesn't read the proxy env variables by default. This PR enables reading them automatically. See the [aiohttp docs on using proxies](https://docs.aiohttp.org/en/stable/client_advanced.html?highlight=trust_env#proxy-support). For context, [the `requests` library](https://requests.readthedocs.io/en/latest/user/advanced/?highlight=http_proxy#proxies) and [the standard library's `urllib.request.urlopen`](https://docs.python.org/3/library/urllib.request.html#urllib.request.urlopen) both support this automatically by default. Many common programs do the same, including cURL, APT, and Wget.
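A sketch of the opt-in this PR makes automatic (illustrative URL; `client_kwargs` is fsspec's documented hook for passing options to `aiohttp.ClientSession`):

```python
# Manually forwarding trust_env so aiohttp reads HTTP_PROXY/HTTPS_PROXY.
import fsspec

with fsspec.open(
    "https://example.com/data.json",    # placeholder URL
    client_kwargs={"trust_env": True},  # passed through to aiohttp.ClientSession
) as f:
    print(f.read(64))
```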
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5656/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5656/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5656", "html_url": "https://github.com/huggingface/datasets/pull/5656", "diff_url": "https://github.com/huggingface/datasets/pull/5656.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5656.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5655/comments
https://api.github.com/repos/huggingface/datasets/issues/5655/events
https://github.com/huggingface/datasets/pull/5655
1,634,030,017
PR_kwDODunzps5MjWYy
5,655
Improve features decoding in to_iterable_dataset
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
3
"2023-03-21T14:18:09"
"2023-03-22T16:31:24"
null
MEMBER
null
Following the discussion at https://github.com/huggingface/datasets/pull/5589: right now `to_iterable_dataset` on image/audio datasets hurts iterable dataset performance a lot (e.g. 4x slower, because it encodes and then decodes images/audio unnecessarily). I fixed it by providing a generator that yields undecoded examples.
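A usage sketch (editorial addition, with a throwaway image so the snippet actually runs): decoding happens lazily while iterating.

```python
from datasets import Dataset, Image
import PIL.Image

PIL.Image.new("RGB", (4, 4)).save("tiny.png")  # throwaway example file
ds = Dataset.from_dict({"image": ["tiny.png"]}).cast_column("image", Image())

for example in ds.to_iterable_dataset():
    print(type(example["image"]))  # decoded PIL image, produced on the fly
```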
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5655/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5655/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5655", "html_url": "https://github.com/huggingface/datasets/pull/5655", "diff_url": "https://github.com/huggingface/datasets/pull/5655.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5655.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
https://api.github.com/repos/huggingface/datasets/issues/5654/events
https://github.com/huggingface/datasets/issues/5654
1,633,523,705
I_kwDODunzps5hXZf5
5,654
Offset overflow when executing Dataset.map
{ "login": "jan-pair", "id": 118280608, "node_id": "U_kgDOBwzRoA", "avatar_url": "https://avatars.githubusercontent.com/u/118280608?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jan-pair", "html_url": "https://github.com/jan-pair", "followers_url": "https://api.github.com/users/jan-pair/followers", "following_url": "https://api.github.com/users/jan-pair/following{/other_user}", "gists_url": "https://api.github.com/users/jan-pair/gists{/gist_id}", "starred_url": "https://api.github.com/users/jan-pair/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jan-pair/subscriptions", "organizations_url": "https://api.github.com/users/jan-pair/orgs", "repos_url": "https://api.github.com/users/jan-pair/repos", "events_url": "https://api.github.com/users/jan-pair/events{/privacy}", "received_events_url": "https://api.github.com/users/jan-pair/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-03-21T09:33:27"
"2023-03-21T10:32:07"
null
NONE
null
### Describe the bug

Hi, I'm trying to use the `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function executes all iterations and then returns the following error:

```bash
Traceback (most recent call last):
  File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3353, in _map_single
    writer.finalize()  # close_stream=bool(buf_writer is None))  # We only close if we are writing in a file
  File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 582, in finalize
    self.write_examples_on_file()
  File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 446, in write_examples_on_file
    self.write_batch(batch_examples=batch_examples)
  File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 555, in write_batch
    self.write_table(pa_table, writer_batch_size)
  File "/home/ubuntu/miniconda3/envs/enhancement/lib/python3.8/site-packages/datasets/arrow_writer.py", line 567, in write_table
    pa_table = pa_table.combine_chunks()
  File "pyarrow/table.pxi", line 3315, in pyarrow.lib.Table.combine_chunks
  File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
```

Here is the minimal code (`/home/datasets/DIV2K_train_HR` is just a folder of images that can be replaced by any appropriate one):

### Steps to reproduce the bug

```python
from glob import glob

import torch
from datasets import Dataset, Image
from torchvision.transforms import PILToTensor, RandomCrop

file_paths = glob("/home/datasets/DIV2K_train_HR/*")
to_tensor = PILToTensor()
crop_transf = RandomCrop(size=256)


def prepare_data(example):
    tensor = to_tensor(example["image"].convert("RGB"))
    return {"hr": torch.stack([crop_transf(tensor) for _ in range(25)])}


train_data = Dataset.from_dict({"image": file_paths}).cast_column("image", Image())
train_data = train_data.map(
    prepare_data,
    cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
    desc="Caching multiple random crops of image",
    remove_columns="image",
)
print(train_data[0].keys(), train_data[0]["hr"].shape)
```

### Expected behavior

The cached file is stored at `"/home/datasets/DIV2K_train_HR_crops.tmp"`, and the output is `dict_keys(['hr']) torch.Size([25, 3, 256, 256])`.

### Environment info

- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- PyArrow version: 11.0.0
- Pandas version: 1.5.3
- Pytorch version: 2.0.0+cu117
- torchvision version: 0.15.1+cu117
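A hedged workaround sketch (editorial, not a confirmed fix; it reuses the repro's names): pyarrow overflows when a written chunk accumulates more than 2 GB of binary data, so lowering `map`'s `writer_batch_size` (default 1000) keeps each flushed batch small.

```python
# Same map call as above, but flushing smaller batches to the Arrow writer.
train_data = train_data.map(
    prepare_data,
    writer_batch_size=100,  # illustrative value; tune to your row size
    cache_file_name="/home/datasets/DIV2K_train_HR_crops.tmp",
    desc="Caching multiple random crops of image",
    remove_columns="image",
)
```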
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5654/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5653/comments
https://api.github.com/repos/huggingface/datasets/issues/5653/events
https://github.com/huggingface/datasets/issues/5653
1,633,254,159
I_kwDODunzps5hWXsP
5,653
Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented
{ "login": "RmZeta2718", "id": 42400165, "node_id": "MDQ6VXNlcjQyNDAwMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RmZeta2718", "html_url": "https://github.com/RmZeta2718", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" }, { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
open
false
null
[]
null
1
"2023-03-21T05:25:35"
"2023-03-21T13:19:57"
null
NONE
null
### Describe the bug

[`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but this is not documented.

### Steps to reproduce the bug

Nothing to reproduce.

### Expected behavior

The [documentation of `num_shards`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_shards) explicitly says that it depends on `max_shard_size`; it should also mention `num_proc`.

### Environment info

datasets main documentation
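A quick sketch of the undocumented behavior (editorial; illustrative path, and the shard count is an assumption based on `num_shards` defaulting to `num_proc` in the current implementation):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(1000))})
ds.save_to_disk("tmp_dataset", num_proc=8)
# expected on disk: data-00000-of-00008.arrow ... data-00007-of-00008.arrow
```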
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5653/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5653/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5652/comments
https://api.github.com/repos/huggingface/datasets/issues/5652/events
https://github.com/huggingface/datasets/pull/5652
1,632,546,073
PR_kwDODunzps5MeVUR
5,652
Copy features
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
6
"2023-03-20T17:17:23"
"2023-03-22T15:00:00"
null
MEMBER
null
Some users (even internally at HF) are doing

```python
dset_features = dset.features
dset_features.pop(col_to_remove)
dset = dset.map(..., features=dset_features)
```

Right now this causes issues because it modifies the features dict in place before the map. In this PR I modified `dset.features` to return a copy of the features, so that users can modify it if they want.
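For reference, a sketch of the pre-PR-safe version of the same pattern (reusing the snippet's placeholder names; `Features` exposes a `copy()` method):

```python
dset_features = dset.features.copy()  # copy explicitly before mutating
dset_features.pop(col_to_remove)
dset = dset.map(..., features=dset_features)
```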
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5652/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5652/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5652", "html_url": "https://github.com/huggingface/datasets/pull/5652", "diff_url": "https://github.com/huggingface/datasets/pull/5652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5652.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5651/comments
https://api.github.com/repos/huggingface/datasets/issues/5651/events
https://github.com/huggingface/datasets/issues/5651
1,631,967,509
I_kwDODunzps5hRdkV
5,651
expanduser in save_to_disk
{ "login": "RmZeta2718", "id": 42400165, "node_id": "MDQ6VXNlcjQyNDAwMTY1", "avatar_url": "https://avatars.githubusercontent.com/u/42400165?v=4", "gravatar_id": "", "url": "https://api.github.com/users/RmZeta2718", "html_url": "https://github.com/RmZeta2718", "followers_url": "https://api.github.com/users/RmZeta2718/followers", "following_url": "https://api.github.com/users/RmZeta2718/following{/other_user}", "gists_url": "https://api.github.com/users/RmZeta2718/gists{/gist_id}", "starred_url": "https://api.github.com/users/RmZeta2718/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RmZeta2718/subscriptions", "organizations_url": "https://api.github.com/users/RmZeta2718/orgs", "repos_url": "https://api.github.com/users/RmZeta2718/repos", "events_url": "https://api.github.com/users/RmZeta2718/events{/privacy}", "received_events_url": "https://api.github.com/users/RmZeta2718/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
0
"2023-03-20T12:02:18"
"2023-03-20T12:03:59"
null
NONE
null
### Describe the bug

`save_to_disk()` does not expand `~`:

1. `dataset = load_dataset("any dataset")`
2. `dataset.save_to_disk("~/data")`
3. A folder named "~" is created in the current folder.
4. `FileNotFoundError` is raised, because the expanded path does not exist (`/home/<user>/data`).

Related issue: https://github.com/huggingface/transformers/issues/10628

### Steps to reproduce the bug

As described above.

### Expected behavior

Expand the user path correctly.

### Environment info

- datasets 2.10.1
- python 3.10
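A workaround sketch until `~` is expanded internally (editorial, reusing the `dataset` from the steps above):

```python
import os

dataset.save_to_disk(os.path.expanduser("~/data"))  # resolves to /home/<user>/data
```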
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5651/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5651/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5650/comments
https://api.github.com/repos/huggingface/datasets/issues/5650/events
https://github.com/huggingface/datasets/issues/5650
1,630,336,919
I_kwDODunzps5hLPeX
5,650
load_dataset can't work correct with my image data
{ "login": "WiNE-iNEFF", "id": 41611046, "node_id": "MDQ6VXNlcjQxNjExMDQ2", "avatar_url": "https://avatars.githubusercontent.com/u/41611046?v=4", "gravatar_id": "", "url": "https://api.github.com/users/WiNE-iNEFF", "html_url": "https://github.com/WiNE-iNEFF", "followers_url": "https://api.github.com/users/WiNE-iNEFF/followers", "following_url": "https://api.github.com/users/WiNE-iNEFF/following{/other_user}", "gists_url": "https://api.github.com/users/WiNE-iNEFF/gists{/gist_id}", "starred_url": "https://api.github.com/users/WiNE-iNEFF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/WiNE-iNEFF/subscriptions", "organizations_url": "https://api.github.com/users/WiNE-iNEFF/orgs", "repos_url": "https://api.github.com/users/WiNE-iNEFF/repos", "events_url": "https://api.github.com/users/WiNE-iNEFF/events{/privacy}", "received_events_url": "https://api.github.com/users/WiNE-iNEFF/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
10
"2023-03-18T13:59:13"
"2023-03-22T12:41:48"
null
NONE
null
I have about 20000 images in my folder, divided into 4 subfolders named after the classes. When I use `load_dataset("my_folder_name", split="train")`, this function creates a dataset that contains only 4 images; the remaining ~19000 images were not added, and I don't understand why. I've tried converting the images and the like, but absolutely nothing worked.
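A hedged guess at the fix (editorial, assuming the folder layout described above): pass the `imagefolder` builder name and point `data_dir` at the folder, rather than passing the folder name as the dataset name.

```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="my_folder_name", split="train")
print(dataset)  # one example per image, labels inferred from the 4 class subfolders
```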
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5650/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5650/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
https://api.github.com/repos/huggingface/datasets/issues/5649/events
https://github.com/huggingface/datasets/issues/5649
1,630,173,460
I_kwDODunzps5hKnkU
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
{ "login": "lsb", "id": 45281, "node_id": "MDQ6VXNlcjQ1Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/45281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lsb", "html_url": "https://github.com/lsb", "followers_url": "https://api.github.com/users/lsb/followers", "following_url": "https://api.github.com/users/lsb/following{/other_user}", "gists_url": "https://api.github.com/users/lsb/gists{/gist_id}", "starred_url": "https://api.github.com/users/lsb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lsb/subscriptions", "organizations_url": "https://api.github.com/users/lsb/orgs", "repos_url": "https://api.github.com/users/lsb/repos", "events_url": "https://api.github.com/users/lsb/events{/privacy}", "received_events_url": "https://api.github.com/users/lsb/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-03-18T05:25:17"
"2023-03-20T13:16:12"
null
NONE
null
### Describe the bug

It seems like the "index" column is designed to be unique? The values are only unique per batch; the SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a SQL export.

### Steps to reproduce the bug

```python
from datasets import Dataset
import sqlite3

db = sqlite3.connect(":memory:")
nice_numbers = Dataset.from_dict({"nice_number": range(101, 106)})
nice_numbers.to_sql("nice1", db, batch_size=1)
nice_numbers.to_sql("nice2", db, batch_size=2)

print(db.execute("select * from nice1").fetchall())
# [(0, 101), (0, 102), (0, 103), (0, 104), (0, 105)]
print(db.execute("select * from nice2").fetchall())
# [(0, 101), (1, 102), (0, 103), (1, 104), (0, 105)]
```

### Expected behavior

I expected the "index" column to be unique.

### Environment info

```
% datasets-cli env

Copy-and-paste the text below in your GitHub issue.

- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.9.6
- PyArrow version: 7.0.0
- Pandas version: 1.5.2

zsh: segmentation fault  datasets-cli env
```
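A hedged workaround sketch (editorial, assuming extra kwargs are forwarded to pandas' `DataFrame.to_sql`, as the `to_sql` docs suggest): drop the misleading per-batch index entirely.

```python
nice_numbers.to_sql("nice3", db, batch_size=1, index=False)
print(db.execute("select * from nice3").fetchall())
# [(101,), (102,), (103,), (104,), (105,)]
```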
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5649/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5648/comments
https://api.github.com/repos/huggingface/datasets/issues/5648/events
https://github.com/huggingface/datasets/issues/5648
1,629,253,719
I_kwDODunzps5hHHBX
5,648
flatten_indices doesn't work with pandas format
{ "login": "alialamiidrissi", "id": 14365168, "node_id": "MDQ6VXNlcjE0MzY1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alialamiidrissi", "html_url": "https://github.com/alialamiidrissi", "followers_url": "https://api.github.com/users/alialamiidrissi/followers", "following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}", "gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}", "starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions", "organizations_url": "https://api.github.com/users/alialamiidrissi/orgs", "repos_url": "https://api.github.com/users/alialamiidrissi/repos", "events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}", "received_events_url": "https://api.github.com/users/alialamiidrissi/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
1
"2023-03-17T12:44:25"
"2023-03-21T13:12:03"
null
NONE
null
### Describe the bug

Hi, I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that `flatten_indices` uses `map` internally, which doesn't accept dataframes as the transformation function output.

### Steps to reproduce the bug

```python
import datasets
import numpy as np
import pandas as pd

tabular_data = pd.DataFrame(np.random.randn(10, 10))
tabular_data = datasets.arrow_dataset.Dataset.from_pandas(tabular_data)
tabular_data.with_format("pandas").select([0, 1, 2, 3]).flatten_indices()
```

### Expected behavior

No error thrown.

### Environment info

- `datasets` version: 2.10.1
- Python version: 3.9.5
- PyArrow version: 11.0.0
- Pandas version: 1.4.1
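A hedged workaround sketch (editorial): drop the pandas format for the flatten step, then restore it.

```python
flat = (
    tabular_data.with_format(None)  # back to python formatting for the internal map
    .select([0, 1, 2, 3])
    .flatten_indices()
    .with_format("pandas")  # restore the pandas format afterwards
)
```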
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5648/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5648/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5647/comments
https://api.github.com/repos/huggingface/datasets/issues/5647/events
https://github.com/huggingface/datasets/issues/5647
1,628,225,544
I_kwDODunzps5hDMAI
5,647
Make all print statements optional
{ "login": "gagan3012", "id": 49101362, "node_id": "MDQ6VXNlcjQ5MTAxMzYy", "avatar_url": "https://avatars.githubusercontent.com/u/49101362?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gagan3012", "html_url": "https://github.com/gagan3012", "followers_url": "https://api.github.com/users/gagan3012/followers", "following_url": "https://api.github.com/users/gagan3012/following{/other_user}", "gists_url": "https://api.github.com/users/gagan3012/gists{/gist_id}", "starred_url": "https://api.github.com/users/gagan3012/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gagan3012/subscriptions", "organizations_url": "https://api.github.com/users/gagan3012/orgs", "repos_url": "https://api.github.com/users/gagan3012/repos", "events_url": "https://api.github.com/users/gagan3012/events{/privacy}", "received_events_url": "https://api.github.com/users/gagan3012/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
0
"2023-03-16T20:30:07"
"2023-03-16T20:30:07"
null
NONE
null
### Feature request

Make all print statements optional to speed up development.

### Motivation

I'm loading multiple tiny datasets and all the print statements make the loading slower.

### Your contribution

I can help contribute.
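For reference, the knobs that already exist today (documented `datasets` APIs, though they may not cover every message this request refers to):

```python
import datasets

datasets.logging.set_verbosity_error()  # silence info/warning log messages
datasets.disable_progress_bar()         # silence tqdm progress bars
```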
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5647/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5647/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5646/comments
https://api.github.com/repos/huggingface/datasets/issues/5646/events
https://github.com/huggingface/datasets/pull/5646
1,627,838,762
PR_kwDODunzps5MOqjj
5,646
Allow self as key in `Features`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-03-16T16:17:03"
"2023-03-16T17:21:58"
"2023-03-16T17:14:50"
CONTRIBUTOR
null
Fix #5641
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5646/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5646/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5646", "html_url": "https://github.com/huggingface/datasets/pull/5646", "diff_url": "https://github.com/huggingface/datasets/pull/5646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5646.patch", "merged_at": "2023-03-16T17:14:50" }
true
https://api.github.com/repos/huggingface/datasets/issues/5645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5645/comments
https://api.github.com/repos/huggingface/datasets/issues/5645/events
https://github.com/huggingface/datasets/issues/5645
1,627,108,278
I_kwDODunzps5g-7O2
5,645
Datasets map and select(range()) is giving dill error
{ "login": "Tanya-11", "id": 90728105, "node_id": "MDQ6VXNlcjkwNzI4MTA1", "avatar_url": "https://avatars.githubusercontent.com/u/90728105?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Tanya-11", "html_url": "https://github.com/Tanya-11", "followers_url": "https://api.github.com/users/Tanya-11/followers", "following_url": "https://api.github.com/users/Tanya-11/following{/other_user}", "gists_url": "https://api.github.com/users/Tanya-11/gists{/gist_id}", "starred_url": "https://api.github.com/users/Tanya-11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Tanya-11/subscriptions", "organizations_url": "https://api.github.com/users/Tanya-11/orgs", "repos_url": "https://api.github.com/users/Tanya-11/repos", "events_url": "https://api.github.com/users/Tanya-11/events{/privacy}", "received_events_url": "https://api.github.com/users/Tanya-11/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-03-16T10:01:28"
"2023-03-17T04:24:51"
"2023-03-17T04:24:51"
NONE
null
### Describe the bug

I'm using the Hugging Face Datasets library to load the dataset in Google Colab. When I do

```python
data = train_dataset.select(range(10))
```

or

```python
train_datasets = train_dataset.map(
    process_data_to_model_inputs,
    batched=True,
    batch_size=batch_size,
    remove_columns=["article", "abstract"],
)
```

I get the following error: `module 'dill._dill' has no attribute 'log'`

I've tried downgrading the dill version from latest to 0.2.8, but no luck.

Stack trace:

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in _no_cache_fields(obj)
    367 try:
--> 368     import transformers as tr
    369

ModuleNotFoundError: No module named 'transformers'

During handling of the above exception, another exception occurred:

AttributeError                            Traceback (most recent call last)
17 frames
<ipython-input-13-dd14813880a6> in <module>
----> 1 test = train_dataset.select(range(10))

/usr/local/lib/python3.9/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
    155 }
    156 # apply actual function
--> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
    158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
    159 # re-apply format to the output

/usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
    155 if kwargs.get(fingerprint_name) is None:
    156     kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
--> 157     kwargs[fingerprint_name] = update_fingerprint(
    158         self._fingerprint, transform, kwargs_for_fingerprint
    159     )

/usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
    103 for key in sorted(transform_args):
    104     hasher.update(key)
--> 105     hasher.update(transform_args[key])
    106 return hasher.hexdigest()
    107

/usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in update(self, value)
     55 def update(self, value):
     56     self.m.update(f"=={type(value)}==".encode("utf8"))
---> 57     self.m.update(self.hash(value).encode("utf-8"))
     58
     59 def hexdigest(self):

/usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash(cls, value)
     51     return cls.dispatch[type(value)](cls, value)
     52 else:
---> 53     return cls.hash_default(value)
     54
     55 def update(self, value):

/usr/local/lib/python3.9/dist-packages/datasets/fingerprint.py in hash_default(cls, value)
     44 @classmethod
     45 def hash_default(cls, value):
---> 46     return cls.hash_bytes(dumps(value))
     47
     48 @classmethod

/usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dumps(obj)
    387 file = StringIO()
    388 with _no_cache_fields(obj):
--> 389     dump(obj, file)
    390 return file.getvalue()
    391

/usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in dump(obj, file)
    359 def dump(obj, file):
    360     """pickle an object to a file"""
--> 361     Pickler(file, recurse=True).dump(obj)
    362     return
    363

/usr/local/lib/python3.9/dist-packages/dill/_dill.py in dump(self, obj)
    392     return
    393
--> 394 def load_session(filename='/tmp/session.pkl', main=None):
    395     """update the __main__ module with the state from the session file"""
    396     if main is None: main = _main_module

/usr/lib/python3.9/pickle.py in dump(self, obj)
    485 if self.proto >= 4:
    486     self.framer.start_framing()
--> 487     self.save(obj)
    488     self.write(STOP)
    489     self.framer.end_framing()

/usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
    386     pickler._byref = False   # disable pickling by name reference
    387     pickler._recurse = False # disable pickling recursion for globals
--> 388     pickler._session = True  # is best indicator of when pickling a session
    389     pickler.dump(main)
    390 finally:

/usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
    558 f = self.dispatch.get(t)
    559 if f is not None:
--> 560     f(self, obj)  # Call unbound method with explicit self
    561     return
    562

/usr/local/lib/python3.9/dist-packages/dill/_dill.py in save_singleton(pickler, obj)

/usr/lib/python3.9/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, state_setter, obj)
    689     write(NEWOBJ)
    690 else:
--> 691     save(func)
    692     save(args)
    693     write(REDUCE)

/usr/local/lib/python3.9/dist-packages/dill/_dill.py in save(self, obj, save_persistent_id)
    386     pickler._byref = False   # disable pickling by name reference
    387     pickler._recurse = False # disable pickling recursion for globals
--> 388     pickler._session = True  # is best indicator of when pickling a session
    389     pickler.dump(main)
    390 finally:

/usr/lib/python3.9/pickle.py in save(self, obj, save_persistent_id)
    558 f = self.dispatch.get(t)
    559 if f is not None:
--> 560     f(self, obj)  # Call unbound method with explicit self
    561     return
    562

/usr/local/lib/python3.9/dist-packages/datasets/utils/py_utils.py in save_function(pickler, obj)
    583     dill._dill.log.info("# F1")
    584 else:
--> 585     dill._dill.log.info("F2: %s" % obj)
    586     name = getattr(obj, "__qualname__", getattr(obj, "__name__", None))
    587     dill._dill.StockPickler.save_global(pickler, obj, name=name)

AttributeError: module 'dill._dill' has no attribute 'log'
```

### Steps to reproduce the bug

After loading the dataset (e.g. https://huggingface.co/datasets/scientific_papers) in Google Colab, do either

```python
data = train_dataset.select(range(10))
```

or

```python
train_datasets = train_dataset.map(
    process_data_to_model_inputs,
    batched=True,
    batch_size=batch_size,
    remove_columns=["article", "abstract"],
)
```

### Expected behavior

The `map` and `select` functions should work.

### Environment info

- dataset: https://huggingface.co/datasets/scientific_papers
- dill = 0.3.6
- python = 3.9.16
- transformer = 4.2.0
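An editorial note on the likely cause (hedged, not confirmed in this thread): the `save_function` frame above shows an older `datasets` release reaching into a dill internal (`dill._dill.log`) that dill removed in 0.3.5+. Two plausible fixes:

```
pip install "dill<0.3.5"   # keep the internals the installed datasets expects
pip install -U datasets    # or upgrade: newer releases no longer touch dill._dill.log
```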
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5645/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5645/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5644/comments
https://api.github.com/repos/huggingface/datasets/issues/5644/events
https://github.com/huggingface/datasets/pull/5644
1,626,204,046
PR_kwDODunzps5MJHUi
5,644
Allow direct cast from binary to Audio/Image
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-03-15T20:02:54"
"2023-03-16T14:20:44"
"2023-03-16T14:12:55"
CONTRIBUTOR
null
To address https://github.com/huggingface/datasets/discussions/5593.
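For context, here is a minimal sketch of what this change enables, assuming a column of raw encoded bytes (the local file name is hypothetical):

```python
from datasets import Dataset, Image

# Raw encoded image bytes, e.g. the contents of a PNG file.
with open("cat.png", "rb") as f:  # hypothetical file, for illustration only
    png_bytes = f.read()

ds = Dataset.from_dict({"image": [png_bytes]})
# With this PR, a binary column can be cast directly to an Image feature,
# so accessing ds[0]["image"] decodes the bytes instead of returning them raw.
ds = ds.cast_column("image", Image())
```

The same pattern should apply to `Audio` with encoded audio bytes.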
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5644/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5644/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5644", "html_url": "https://github.com/huggingface/datasets/pull/5644", "diff_url": "https://github.com/huggingface/datasets/pull/5644.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5644.patch", "merged_at": "2023-03-16T14:12:55" }
true
https://api.github.com/repos/huggingface/datasets/issues/5643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5643/comments
https://api.github.com/repos/huggingface/datasets/issues/5643/events
https://github.com/huggingface/datasets/pull/5643
1,626,160,220
PR_kwDODunzps5MI9zO
5,643
Support PyArrow arrays as column values in `from_dict`
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-03-15T19:32:40"
"2023-03-16T17:23:06"
"2023-03-16T17:15:40"
CONTRIBUTOR
null
For consistency with `pa.Table.from_pydict`, which supports both Python lists and PyArrow arrays as column values. "Fixes" https://discuss.huggingface.co/t/pyarrow-lib-floatarray-did-not-recognize-python-value-type-when-inferring-an-arrow-data-type/33417
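A short usage sketch of the behavior this adds (the column name and values are illustrative):

```python
import pyarrow as pa
from datasets import Dataset

# Previously only Python lists were accepted as column values; with this
# change a PyArrow array works too, mirroring pa.Table.from_pydict.
ds = Dataset.from_dict({"values": pa.array([0.1, 0.2, 0.3], type=pa.float32())})
print(ds.features)  # the column should come out as float32
```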
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5643/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5643/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5643", "html_url": "https://github.com/huggingface/datasets/pull/5643", "diff_url": "https://github.com/huggingface/datasets/pull/5643.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5643.patch", "merged_at": "2023-03-16T17:15:39" }
true
https://api.github.com/repos/huggingface/datasets/issues/5642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5642/comments
https://api.github.com/repos/huggingface/datasets/issues/5642/events
https://github.com/huggingface/datasets/pull/5642
1,626,043,177
PR_kwDODunzps5MIjw9
5,642
Bump hfh to 0.11.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
"2023-03-15T18:26:07"
"2023-03-20T12:34:09"
"2023-03-20T12:26:58"
MEMBER
null
to fix errors like ``` requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/... ``` (e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997)) 0.11.0 is the current minimum version in `transformers`; around 5% of users are currently using versions `<0.11.0`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5642/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5642/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5642", "html_url": "https://github.com/huggingface/datasets/pull/5642", "diff_url": "https://github.com/huggingface/datasets/pull/5642.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5642.patch", "merged_at": "2023-03-20T12:26:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/5641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5641/comments
https://api.github.com/repos/huggingface/datasets/issues/5641/events
https://github.com/huggingface/datasets/issues/5641
1,625,942,730
I_kwDODunzps5g6erK
5,641
Features cannot be named "self"
{ "login": "alialamiidrissi", "id": 14365168, "node_id": "MDQ6VXNlcjE0MzY1MTY4", "avatar_url": "https://avatars.githubusercontent.com/u/14365168?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alialamiidrissi", "html_url": "https://github.com/alialamiidrissi", "followers_url": "https://api.github.com/users/alialamiidrissi/followers", "following_url": "https://api.github.com/users/alialamiidrissi/following{/other_user}", "gists_url": "https://api.github.com/users/alialamiidrissi/gists{/gist_id}", "starred_url": "https://api.github.com/users/alialamiidrissi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alialamiidrissi/subscriptions", "organizations_url": "https://api.github.com/users/alialamiidrissi/orgs", "repos_url": "https://api.github.com/users/alialamiidrissi/repos", "events_url": "https://api.github.com/users/alialamiidrissi/events{/privacy}", "received_events_url": "https://api.github.com/users/alialamiidrissi/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2023-03-15T17:16:40"
"2023-03-16T17:14:51"
"2023-03-16T17:14:51"
NONE
null
### Describe the bug Hi, I noticed that we cannot create a HuggingFace dataset from a pandas DataFrame with a column named `self`. The error seems to come from argument validation in the `Features.from_dict` function. ### Steps to reproduce the bug ```python import datasets import pandas as pd dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"]) datasets.arrow_dataset.Dataset.from_pandas(dummy_pandas) ``` ### Expected behavior No error thrown ### Environment info - `datasets` version: 2.8.0 - Python version: 3.9.5 - PyArrow version: 6.0.1 - Pandas version: 1.4.1
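Until this is fixed, a possible workaround is to rename the column before conversion (a sketch; if renaming back to `self` trips the same validation, keeping the safe name may be necessary):

```python
import pandas as pd
from datasets import Dataset

dummy_pandas = pd.DataFrame([0, 1, 2, 3], columns=["self"])

# Rename the offending column so Features.from_dict validation doesn't choke,
# then restore the original name on the resulting dataset.
ds = Dataset.from_pandas(dummy_pandas.rename(columns={"self": "self_"}))
ds = ds.rename_column("self_", "self")
```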
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5641/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5641/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5640/comments
https://api.github.com/repos/huggingface/datasets/issues/5640/events
https://github.com/huggingface/datasets/pull/5640
1,625,896,057
PR_kwDODunzps5MID3I
5,640
Less zip false positives
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
"2023-03-15T16:48:59"
"2023-03-16T13:47:37"
"2023-03-16T13:40:12"
MEMBER
null
`zipfile.is_zipfile` returns false positives for some Parquet files. This causes errors when loading certain Parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile`. This is a known issue: https://github.com/python/cpython/issues/72680 At first I wanted to rely only on magic numbers, but then I found that someone contributed a [fix to is_zipfile](https://github.com/python/cpython/pull/5053) - do you think we should use it @albertvillanova, or not? IMO it's ok to rely on magic numbers only for now, since in streaming mode we've had no issue checking only the magic number so far. Close https://github.com/huggingface/datasets/issues/5639
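A minimal sketch of the magic-number approach discussed here (ZIP local-file headers start with the standard `PK\x03\x04` signature; this is not the exact code from the PR):

```python
ZIP_MAGIC = b"PK\x03\x04"

def looks_like_zip(path: str) -> bool:
    # Check only the leading bytes instead of using zipfile.is_zipfile, which
    # scans for an end-of-central-directory record anywhere in the file and
    # can therefore match Parquet files by accident (python/cpython#72680).
    with open(path, "rb") as f:
        return f.read(len(ZIP_MAGIC)) == ZIP_MAGIC
```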
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5640/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5640/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5640", "html_url": "https://github.com/huggingface/datasets/pull/5640", "diff_url": "https://github.com/huggingface/datasets/pull/5640.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5640.patch", "merged_at": "2023-03-16T13:40:12" }
true
https://api.github.com/repos/huggingface/datasets/issues/5639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5639/comments
https://api.github.com/repos/huggingface/datasets/issues/5639/events
https://github.com/huggingface/datasets/issues/5639
1,625,737,098
I_kwDODunzps5g5seK
5,639
Parquet file wrongly recognized as zip prevents loading a dataset
{ "login": "clefourrier", "id": 22726840, "node_id": "MDQ6VXNlcjIyNzI2ODQw", "avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4", "gravatar_id": "", "url": "https://api.github.com/users/clefourrier", "html_url": "https://github.com/clefourrier", "followers_url": "https://api.github.com/users/clefourrier/followers", "following_url": "https://api.github.com/users/clefourrier/following{/other_user}", "gists_url": "https://api.github.com/users/clefourrier/gists{/gist_id}", "starred_url": "https://api.github.com/users/clefourrier/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/clefourrier/subscriptions", "organizations_url": "https://api.github.com/users/clefourrier/orgs", "repos_url": "https://api.github.com/users/clefourrier/repos", "events_url": "https://api.github.com/users/clefourrier/events{/privacy}", "received_events_url": "https://api.github.com/users/clefourrier/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2023-03-15T15:20:45"
"2023-03-16T13:40:14"
"2023-03-16T13:40:14"
CONTRIBUTOR
null
### Describe the bug When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails because the Parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data/devops-00000-of-00001-22fe902fd8702892.parquet) is wrongly identified by Python as a ZIP rather than a Parquet file. (Full thread on [Slack](https://huggingface.slack.com/archives/C02V51Q3800/p1678890880803599)) ### Steps to reproduce the bug ```python from datasets import load_dataset_builder ds = load_dataset_builder("HuggingFaceGECLM/StackExchange_Mar2023") ``` ### Expected behavior Loading the file normally. ### Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.14.0-1058-oem-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5639/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5639/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5638/comments
https://api.github.com/repos/huggingface/datasets/issues/5638/events
https://github.com/huggingface/datasets/issues/5638
1,625,564,471
I_kwDODunzps5g5CU3
5,638
xPath to implement all operations for Path
{ "login": "thomasw21", "id": 24695242, "node_id": "MDQ6VXNlcjI0Njk1MjQy", "avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomasw21", "html_url": "https://github.com/thomasw21", "followers_url": "https://api.github.com/users/thomasw21/followers", "following_url": "https://api.github.com/users/thomasw21/following{/other_user}", "gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}", "starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions", "organizations_url": "https://api.github.com/users/thomasw21/orgs", "repos_url": "https://api.github.com/users/thomasw21/repos", "events_url": "https://api.github.com/users/thomasw21/events{/privacy}", "received_events_url": "https://api.github.com/users/thomasw21/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
5
"2023-03-15T13:47:11"
"2023-03-17T13:21:12"
"2023-03-17T13:21:12"
MEMBER
null
### Feature request The current xPath implementation is a great extension of Path for working with remote objects. However, some methods such as `mkdir` are not implemented correctly. They should rely on `fsspec` methods instead of defaulting to `Path` methods, which only work locally. ### Motivation I'm using xPath to interact with remote objects. ### Your contribution I could try to make a PR. I'm a bit unfamiliar with chaining right now.
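A rough sketch of what an fsspec-backed `mkdir` could look like (the class stub and method layout here are illustrative, not the actual `datasets` internals):

```python
import fsspec

class xPath:  # illustrative stub, not the real implementation
    def __init__(self, path: str):
        self._path = path

    def mkdir(self, parents: bool = False, exist_ok: bool = False) -> None:
        # Resolve the filesystem from the URL scheme (s3://, gs://, ...) and
        # delegate to fsspec instead of falling back to the local Path.mkdir.
        fs, path = fsspec.core.url_to_fs(self._path)
        if parents:
            fs.makedirs(path, exist_ok=exist_ok)
        else:
            fs.mkdir(path)
```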
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5638/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5638/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5637/comments
https://api.github.com/repos/huggingface/datasets/issues/5637/events
https://github.com/huggingface/datasets/issues/5637
1,625,295,691
I_kwDODunzps5g4AtL
5,637
IterableDataset with_format does not support 'device' keyword for jax
{ "login": "Lime-Cakes", "id": 91322985, "node_id": "MDQ6VXNlcjkxMzIyOTg1", "avatar_url": "https://avatars.githubusercontent.com/u/91322985?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Lime-Cakes", "html_url": "https://github.com/Lime-Cakes", "followers_url": "https://api.github.com/users/Lime-Cakes/followers", "following_url": "https://api.github.com/users/Lime-Cakes/following{/other_user}", "gists_url": "https://api.github.com/users/Lime-Cakes/gists{/gist_id}", "starred_url": "https://api.github.com/users/Lime-Cakes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Lime-Cakes/subscriptions", "organizations_url": "https://api.github.com/users/Lime-Cakes/orgs", "repos_url": "https://api.github.com/users/Lime-Cakes/repos", "events_url": "https://api.github.com/users/Lime-Cakes/events{/privacy}", "received_events_url": "https://api.github.com/users/Lime-Cakes/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-03-15T11:04:12"
"2023-03-16T18:30:59"
null
NONE
null
### Describe the bug As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device' to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'device'`. Looking over the code, it seems IterableDataset supports only PyTorch formatting and has no support for the jax device keyword: https://github.com/huggingface/datasets/blob/fc5c84f36684343bff3e424cb0fd1ac5ecdd66da/src/datasets/iterable_dataset.py#L1029 ### Steps to reproduce the bug 1. Load an IterableDataset (tested in streaming mode) 2. Call with_format('jax', device=device) ### Expected behavior I expect to be able to call `with_format('jax', device=device)` as per the [documentation](https://huggingface.co/docs/datasets/use_with_jax) without error ### Environment info Tested with the newest (dev) install and also the pip release (2.10.1). - `datasets` version: 2.10.2.dev0 - Platform: Linux-5.15.89+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - Huggingface_hub version: 0.12.1 - PyArrow version: 11.0.0 - Pandas version: 1.3.5
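Until `device` is supported here, one workaround is to convert and place each example manually (a sketch; the dataset name is hypothetical, and `jax.device_put` / `jnp.asarray` are the standard JAX APIs):

```python
import jax
import jax.numpy as jnp
from datasets import load_dataset

ds = load_dataset("some/dataset", streaming=True, split="train")  # hypothetical

device = jax.devices()[0]
for example in ds:
    # Convert numeric fields by hand, since with_format("jax", device=...)
    # isn't available for IterableDataset.
    arrays = {k: jax.device_put(jnp.asarray(v), device)
              for k, v in example.items() if not isinstance(v, str)}
```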
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5637/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5637/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5636/comments
https://api.github.com/repos/huggingface/datasets/issues/5636/events
https://github.com/huggingface/datasets/pull/5636
1,623,721,577
PR_kwDODunzps5MAunR
5,636
Fix CI: ignore C901 ("some_func" is to complex) in `ruff`
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-03-14T15:29:11"
"2023-03-14T16:37:06"
"2023-03-14T16:29:52"
CONTRIBUTOR
null
idk if I should have added this ignore to `ruff` too, but I added it :)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5636/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5636/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5636", "html_url": "https://github.com/huggingface/datasets/pull/5636", "diff_url": "https://github.com/huggingface/datasets/pull/5636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5636.patch", "merged_at": "2023-03-14T16:29:52" }
true
https://api.github.com/repos/huggingface/datasets/issues/5635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5635/comments
https://api.github.com/repos/huggingface/datasets/issues/5635/events
https://github.com/huggingface/datasets/pull/5635
1,623,682,558
PR_kwDODunzps5MAmLU
5,635
Pass custom metadata filename to Image/Audio folders
{ "login": "polinaeterna", "id": 16348744, "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "gravatar_id": "", "url": "https://api.github.com/users/polinaeterna", "html_url": "https://github.com/polinaeterna", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "repos_url": "https://api.github.com/users/polinaeterna/repos", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
4
"2023-03-14T15:08:16"
"2023-03-22T17:50:31"
null
CONTRIBUTOR
null
This is a quick fix. It now requires passing data via the `data_files` parameter, including the required metadata file there, and passing its filename as the `metadata_filename` parameter. For example, with a structure like: ``` data images_dir/ im1.jpg im2.jpg ... metadata_dir/ meta_file1.jsonl meta_file2.jsonl ... ``` to load data with `meta_file1.jsonl` do: ```python ds = load_dataset("imagefolder", data_files=["data/images_dir/**", "data/metadata_dir/meta_file1.jsonl"], metadata_filename="meta_file1.jsonl") ``` Note that if you have multiple splits, the metadata file should be specified in each of them in `data_files`, something like: ```python data_files={ "train": ["data/train/**", "data/metadata_dir/meta_file1.jsonl"], "test": ["data/test/**", "data/metadata_dir/meta_file1.jsonl"] } ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5635/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5635/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5635", "html_url": "https://github.com/huggingface/datasets/pull/5635", "diff_url": "https://github.com/huggingface/datasets/pull/5635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5635.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5634/comments
https://api.github.com/repos/huggingface/datasets/issues/5634/events
https://github.com/huggingface/datasets/issues/5634
1,622,424,174
I_kwDODunzps5gtDpu
5,634
Not all progress bars are showing up when they should while downloading a dataset
{ "login": "garlandz-db", "id": 110427462, "node_id": "U_kgDOBpT9Rg", "avatar_url": "https://avatars.githubusercontent.com/u/110427462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/garlandz-db", "html_url": "https://github.com/garlandz-db", "followers_url": "https://api.github.com/users/garlandz-db/followers", "following_url": "https://api.github.com/users/garlandz-db/following{/other_user}", "gists_url": "https://api.github.com/users/garlandz-db/gists{/gist_id}", "starred_url": "https://api.github.com/users/garlandz-db/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/garlandz-db/subscriptions", "organizations_url": "https://api.github.com/users/garlandz-db/orgs", "repos_url": "https://api.github.com/users/garlandz-db/repos", "events_url": "https://api.github.com/users/garlandz-db/events{/privacy}", "received_events_url": "https://api.github.com/users/garlandz-db/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-03-13T23:04:18"
"2023-03-21T01:59:59"
null
NONE
null
### Describe the bug During downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117) as it raised the same concern but its not clear if the fix solves this issue too. ipywidgets <img width="1243" alt="image" src="https://user-images.githubusercontent.com/110427462/224851138-13fee5b7-ab51-4883-b96f-1b9808782e3b.png"> tqdm <img width="1251" alt="Screen Shot 2023-03-13 at 3 58 59 PM" src="https://user-images.githubusercontent.com/110427462/224851180-5feb7825-9250-4b1e-ad0c-f3172ac1eb78.png"> ### Steps to reproduce the bug 1. Run this line ``` from datasets import load_dataset rotten_tomatoes = load_dataset("rotten_tomatoes", split="train") ``` ### Expected behavior all progress bars for builder script, metadata, readme, training, validation, and test set ### Environment info requirements.txt ``` aiofiles==22.1.0 aiohttp==3.8.4 aiosignal==1.3.1 aiosqlite==0.18.0 anyio==3.6.2 appnope==0.1.3 argon2-cffi==21.3.0 argon2-cffi-bindings==21.2.0 arrow==1.2.3 asttokens==2.2.1 async-generator==1.10 async-timeout==4.0.2 attrs==22.2.0 Babel==2.12.1 backcall==0.2.0 beautifulsoup4==4.11.2 bleach==6.0.0 brotlipy @ file:///Users/runner/miniforge3/conda-bld/brotlipy_1666764961872/work certifi==2022.12.7 cffi @ file:///Users/runner/miniforge3/conda-bld/cffi_1671179414629/work cfgv==3.3.1 charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1661170624537/work comm==0.1.2 conda==22.9.0 conda-package-handling @ file:///home/conda/feedstock_root/build_artifacts/conda-package-handling_1669907009957/work conda_package_streaming @ file:///home/conda/feedstock_root/build_artifacts/conda-package-streaming_1669733752472/work coverage==7.2.1 cryptography @ file:///Users/runner/miniforge3/conda-bld/cryptography_1669592251328/work datasets==2.1.0 debugpy==1.6.6 decorator==5.1.1 defusedxml==0.7.1 dill==0.3.6 distlib==0.3.6 distro==1.4.0 entrypoints==0.4 exceptiongroup==1.1.0 executing==1.2.0 fastjsonschema==2.16.3 filelock==3.9.0 flaky==3.7.0 fqdn==1.5.1 frozenlist==1.3.3 fsspec==2023.3.0 huggingface-hub==0.10.1 identify==2.5.18 idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work iniconfig==2.0.0 ipykernel==6.12.1 ipyparallel==8.4.1 ipython==7.32.0 ipython-genutils==0.2.0 ipywidgets==8.0.4 isoduration==20.11.0 jedi==0.18.2 Jinja2==3.1.2 json5==0.9.11 jsonpointer==2.3 jsonschema==4.17.3 jupyter-events==0.6.3 jupyter-ydoc==0.2.2 jupyter_client==8.0.3 jupyter_core==5.2.0 jupyter_server==2.4.0 jupyter_server_fileid==0.8.0 jupyter_server_terminals==0.4.4 jupyter_server_ydoc==0.6.1 jupyterlab==3.6.1 jupyterlab-pygments==0.2.2 jupyterlab-widgets==3.0.5 jupyterlab_server==2.20.0 libmambapy @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/libmambapy mamba @ file:///Users/runner/miniforge3/conda-bld/mamba-split_1671598370072/work/mamba MarkupSafe==2.1.2 matplotlib-inline==0.1.6 mistune==2.0.5 multidict==6.0.4 multiprocess==0.70.14 nbclassic==0.5.3 nbclient==0.7.2 nbconvert==7.2.9 nbformat==5.7.3 nest-asyncio==1.5.6 nodeenv==1.7.0 notebook==6.5.3 notebook_shim==0.2.2 numpy==1.24.2 outcome==1.2.0 packaging==23.0 pandas==1.5.3 pandocfilters==1.5.0 parso==0.8.3 pexpect==4.8.0 pickleshare==0.7.5 platformdirs==3.0.0 plotly==5.13.1 pluggy==1.0.0 pre-commit==3.1.0 prometheus-client==0.16.0 prompt-toolkit==3.0.38 psutil==5.9.4 ptyprocess==0.7.0 pure-eval==0.2.2 pyarrow==11.0.0 pycosat @ 
file:///Users/runner/miniforge3/conda-bld/pycosat_1666836580084/work pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work Pygments==2.14.0 pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1665350324128/work pyrsistent==0.19.3 PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work pytest==7.2.1 pytest-asyncio==0.20.3 pytest-cov==4.0.0 pytest-timeout==2.1.0 python-dateutil==2.8.2 python-json-logger==2.0.7 pytz==2022.7.1 PyYAML==6.0 pyzmq==25.0.0 requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1661872987712/work responses==0.18.0 rfc3339-validator==0.1.4 rfc3986-validator==0.1.1 ruamel-yaml-conda @ file:///Users/runner/miniforge3/conda-bld/ruamel_yaml_1666819760545/work Send2Trash==1.8.0 simplegeneric==0.8.1 six==1.16.0 sniffio==1.3.0 sortedcontainers==2.4.0 soupsieve==2.4 stack-data==0.6.2 tenacity==8.2.2 terminado==0.17.1 tinycss2==1.2.1 tomli==2.0.1 toolz @ file:///home/conda/feedstock_root/build_artifacts/toolz_1657485559105/work tornado==6.2 tqdm==4.64.1 traitlets==5.8.1 trio==0.22.0 typing_extensions==4.5.0 uri-template==1.2.0 urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1669259737463/work virtualenv==20.19.0 wcwidth==0.2.6 webcolors==1.12 webencodings==0.5.1 websocket-client==1.5.1 widgetsnbextension==4.0.5 xxhash==3.2.0 y-py==0.5.9 yarl==1.8.2 ypy-websocket==0.8.2 zstandard==0.19.0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5634/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5634/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5633/comments
https://api.github.com/repos/huggingface/datasets/issues/5633/events
https://github.com/huggingface/datasets/issues/5633
1,621,469,970
I_kwDODunzps5gpasS
5,633
Cannot import datasets
{ "login": "eerio", "id": 11250555, "node_id": "MDQ6VXNlcjExMjUwNTU1", "avatar_url": "https://avatars.githubusercontent.com/u/11250555?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eerio", "html_url": "https://github.com/eerio", "followers_url": "https://api.github.com/users/eerio/followers", "following_url": "https://api.github.com/users/eerio/following{/other_user}", "gists_url": "https://api.github.com/users/eerio/gists{/gist_id}", "starred_url": "https://api.github.com/users/eerio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eerio/subscriptions", "organizations_url": "https://api.github.com/users/eerio/orgs", "repos_url": "https://api.github.com/users/eerio/repos", "events_url": "https://api.github.com/users/eerio/events{/privacy}", "received_events_url": "https://api.github.com/users/eerio/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2023-03-13T13:14:44"
"2023-03-13T17:54:19"
"2023-03-13T17:54:19"
NONE
null
### Describe the bug Hi, I cannot even import the library :( I installed it by running: ``` $ conda install datasets ``` Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran: ``` $ conda remove datasets $ conda install -c huggingface datasets ``` Please see 'steps to reproduce the bug' for the specific error, as steps to reproduce is just importing the library ### Steps to reproduce the bug ``` $ python3 Python 3.8.15 (default, Nov 24 2022, 15:19:38) [GCC 11.2.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> import datasets Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/__init__.py", line 33, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 59, in <module> from .arrow_reader import ArrowReader File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/datasets/arrow_reader.py", line 27, in <module> import pyarrow.parquet as pq File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/__init__.py", line 20, in <module> from .core import * File "/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/parquet/core.py", line 37, in <module> from pyarrow._parquet import (ParquetReader, Statistics, # noqa ImportError: cannot import name 'FileEncryptionProperties' from 'pyarrow._parquet' (/home/jack/.conda/envs/jack_zpp/lib/python3.8/site-packages/pyarrow/_parquet.cpython-38-x86_64-linux-gnu.so) ``` ### Expected behavior I would expect for the statement `import datasets` to cause no error ### Environment info Output of `conda list`: ``` # packages in environment at /home/jack/.conda/envs/pbalawender_zpp: # # Name Version Build Channel _libgcc_mutex 0.1 main _openmp_mutex 5.1 1_gnu abseil-cpp 20210324.2 h2531618_0 advertools 0.13.2 pypi_0 pypi aiofiles 0.8.0 pypi_0 pypi aiohttp 3.8.3 py38h5eee18b_0 aiosignal 1.2.0 pyhd3eb1b0_0 aiosqlite 0.17.0 pypi_0 pypi anyio 3.6.2 pypi_0 pypi aquirdturtle-collapsible-headings 3.1.0 pypi_0 pypi argon2-cffi 21.3.0 pypi_0 pypi argon2-cffi-bindings 21.2.0 pypi_0 pypi arrow 1.2.3 pypi_0 pypi arrow-cpp 3.0.0 py38h6b21186_4 asttokens 2.2.0 pypi_0 pypi async-timeout 4.0.2 py38h06a4308_0 attrs 22.1.0 py38h06a4308_0 automat 22.10.0 pypi_0 pypi aws-c-common 0.4.57 he6710b0_1 aws-c-event-stream 0.1.6 h2531618_5 aws-checksums 0.1.9 he6710b0_0 aws-sdk-cpp 1.8.185 hce553d0_0 babel 2.11.0 pypi_0 pypi backcall 0.2.0 pyhd3eb1b0_0 beautifulsoup4 4.11.1 pypi_0 pypi blas 1.0 mkl bleach 5.0.1 pypi_0 pypi boost-cpp 1.73.0 h27cfd23_11 bottleneck 1.3.5 py38h7deecbd_0 brotli 1.0.9 h5eee18b_7 brotli-bin 1.0.9 h5eee18b_7 brotlipy 0.7.0 py38h27cfd23_1003 bzip2 1.0.8 h7b6447c_0 c-ares 1.18.1 h7f8727e_0 ca-certificates 2023.01.10 h06a4308_0 certifi 2022.9.24 pypi_0 pypi cffi 1.15.1 py38h5eee18b_3 charset-normalizer 2.1.1 pypi_0 pypi click 8.1.3 pypi_0 pypi constantly 15.1.0 pypi_0 pypi contourpy 1.0.6 pypi_0 pypi cryptography 38.0.4 pypi_0 pypi cssselect 1.2.0 pypi_0 pypi cudatoolkit 10.1.243 h8cb64d8_10 conda-forge cycler 0.11.0 pypi_0 pypi dacite 1.6.0 pypi_0 pypi dataclasses 0.8 pyh6d0b6a4_7 datasets 1.18.4 py_0 huggingface datetime 4.7 pypi_0 pypi debugpy 1.6.4 pypi_0 pypi decorator 5.1.1 pyhd3eb1b0_0 defusedxml 0.7.1 pypi_0 pypi dill 0.3.6 py38h06a4308_0 docker-pycreds 0.4.0 pypi_0 pypi double-conversion 3.1.5 he6710b0_1 
entrypoints 0.4 py38h06a4308_0 executing 0.8.3 pyhd3eb1b0_0 filelock 3.8.0 pypi_0 pypi flake8 6.0.0 pypi_0 pypi flask 2.1.3 py38h06a4308_0 flit-core 3.6.0 pyhd3eb1b0_0 fonttools 4.38.0 pypi_0 pypi fqdn 1.5.1 pypi_0 pypi freetype 2.12.1 h4a9f257_0 frozenlist 1.3.3 py38h5eee18b_0 fsspec 2022.11.0 py38h06a4308_0 gensim 4.2.0 pypi_0 pypi gflags 2.2.2 he6710b0_0 giflib 5.2.1 h5eee18b_3 gitdb 4.0.10 pypi_0 pypi gitpython 3.1.30 pypi_0 pypi glog 0.5.0 h2531618_0 grpc-cpp 1.39.0 hae934f6_5 huggingface-hub 0.11.1 pypi_0 pypi huggingface_hub 0.13.1 py_0 huggingface hyperlink 21.0.0 pypi_0 pypi icu 58.2 he6710b0_3 idna 3.4 py38h06a4308_0 importlib-metadata 5.1.0 pypi_0 pypi importlib_metadata 4.11.3 hd3eb1b0_0 importlib_resources 5.2.0 pyhd3eb1b0_1 incremental 22.10.0 pypi_0 pypi intel-openmp 2021.4.0 h06a4308_3561 ipykernel 6.17.1 pyh210e3f2_0 conda-forge ipython 8.7.0 pypi_0 pypi ipython-genutils 0.2.0 pypi_0 pypi ipywidgets 8.0.2 pyhd8ed1ab_1 conda-forge isoduration 20.11.0 pypi_0 pypi itemadapter 0.7.0 pypi_0 pypi itemloaders 1.0.6 pypi_0 pypi itsdangerous 2.0.1 pyhd3eb1b0_0 jedi 0.18.2 pypi_0 pypi jinja2 3.1.2 py38h06a4308_0 jmespath 1.0.1 pypi_0 pypi joblib 1.2.0 pypi_0 pypi jpeg 9b h024ee3a_2 json5 0.9.10 pypi_0 pypi jsonpickle 3.0.0 pypi_0 pypi jsonpointer 2.3 pypi_0 pypi jsonschema 4.17.3 py38h06a4308_0 jupyter-core 5.1.0 pypi_0 pypi jupyter-events 0.5.0 pypi_0 pypi jupyter-server 1.23.3 pypi_0 pypi jupyter-server-fileid 0.6.0 pypi_0 pypi jupyter-server-ydoc 0.4.0 pypi_0 pypi jupyter-ydoc 0.2.2 pypi_0 pypi jupyter_client 7.4.9 py38h06a4308_0 jupyter_core 5.2.0 py38h06a4308_0 jupyterlab 3.6.0a4 pypi_0 pypi jupyterlab-pygments 0.2.2 pypi_0 pypi jupyterlab-server 2.16.3 pypi_0 pypi jupyterlab_widgets 3.0.3 pyhd8ed1ab_0 conda-forge kiwisolver 1.4.4 pypi_0 pypi krb5 1.19.4 h568e23c_0 lcms2 2.12 h3be6417_0 ld_impl_linux-64 2.38 h1181459_1 libboost 1.73.0 h3ff78a5_11 libbrotlicommon 1.0.9 h5eee18b_7 libbrotlidec 1.0.9 h5eee18b_7 libbrotlienc 1.0.9 h5eee18b_7 libcurl 7.88.1 h91b91d3_0 libedit 3.1.20221030 h5eee18b_0 libev 4.33 h7f8727e_1 libevent 2.1.12 h8f2d780_0 libffi 3.4.2 h6a678d5_6 libgcc-ng 11.2.0 h1234567_1 libgomp 11.2.0 h1234567_1 libnghttp2 1.46.0 hce63b2e_0 libpng 1.6.39 h5eee18b_0 libprotobuf 3.17.2 h4ff587b_1 libsodium 1.0.18 h7b6447c_0 libssh2 1.10.0 h8f2d780_0 libstdcxx-ng 11.2.0 h1234567_1 libthrift 0.14.2 hcc01f38_0 libtiff 4.1.0 h2733197_1 libuv 1.44.2 h5eee18b_0 libwebp 1.2.0 h89dd481_0 lz4-c 1.9.4 h6a678d5_0 markupsafe 2.1.1 py38h7f8727e_0 matplotlib 3.6.2 pypi_0 pypi matplotlib-inline 0.1.6 py38h06a4308_0 mccabe 0.7.0 pypi_0 pypi mistune 2.0.4 pypi_0 pypi mkl 2021.4.0 h06a4308_640 mkl-service 2.4.0 py38h7f8727e_0 mkl_fft 1.3.1 py38hd3c417c_0 mkl_random 1.2.2 py38h51133e4_0 morfeusz2 1.99.6 pypi_0 pypi multidict 6.0.2 py38h5eee18b_0 multiprocess 0.70.14 py38h06a4308_0 nbclassic 0.4.8 pypi_0 pypi nbclient 0.7.2 pypi_0 pypi nbconvert 7.2.5 pypi_0 pypi nbformat 5.7.0 py38h06a4308_0 ncurses 6.4 h6a678d5_0 nest-asyncio 1.5.6 py38h06a4308_0 ninja 1.10.2 h06a4308_5 ninja-base 1.10.2 hd09550d_5 notebook 6.5.2 pypi_0 pypi notebook-shim 0.2.2 pypi_0 pypi numexpr 2.8.4 py38he184ba9_0 numpy 1.23.5 py38h14f4228_0 numpy-base 1.23.5 py38h31eccc5_0 oauthlib 3.2.2 pypi_0 pypi opencv-python 4.6.0.66 pypi_0 pypi openssl 1.1.1t h7f8727e_0 orc 1.6.9 ha97a36c_3 packaging 22.0 py38h06a4308_0 pandas 1.5.2 pypi_0 pypi pandocfilters 1.5.0 pypi_0 pypi parsel 1.7.0 pypi_0 pypi parso 0.8.3 pyhd3eb1b0_0 pathlib 1.0.1 pypi_0 pypi pathtools 0.1.2 pypi_0 pypi pexpect 4.8.0 pyhd3eb1b0_3 pickleshare 0.7.5 
pyhd3eb1b0_1003 pillow 9.3.0 pypi_0 pypi pip 22.2.2 py38h06a4308_0 pkgutil-resolve-name 1.3.10 py38h06a4308_0 platformdirs 2.5.4 pypi_0 pypi prometheus-client 0.15.0 pypi_0 pypi promise 2.3 pypi_0 pypi prompt-toolkit 3.0.33 pypi_0 pypi protego 0.2.1 pypi_0 pypi protobuf 4.21.12 pypi_0 pypi psutil 5.9.0 py38h5eee18b_0 ptyprocess 0.7.0 pyhd3eb1b0_2 pure_eval 0.2.2 pyhd3eb1b0_0 pyarrow 10.0.1 pypi_0 pypi pyasn1 0.4.8 pypi_0 pypi pyasn1-modules 0.2.8 pypi_0 pypi pycodestyle 2.10.0 pypi_0 pypi pycparser 2.21 pyhd3eb1b0_0 pydispatcher 2.0.6 pypi_0 pypi pyflakes 3.0.1 pypi_0 pypi pygments 2.11.2 pyhd3eb1b0_0 pyopenssl 22.1.0 pypi_0 pypi pyrsistent 0.18.0 py38heee7806_0 pysocks 1.7.1 py38h06a4308_0 python 3.8.15 h7a1cb2a_2 python-dateutil 2.8.2 pyhd3eb1b0_0 python-dotenv 0.21.0 pypi_0 pypi python-fastjsonschema 2.16.2 py38h06a4308_0 python-json-logger 2.0.4 pypi_0 pypi python-xxhash 2.0.2 py38h5eee18b_1 pytorch 1.7.1 py3.8_cuda10.1.243_cudnn7.6.3_0 pytorch pytz 2022.6 pypi_0 pypi pyyaml 6.0 py38h5eee18b_1 pyzmq 23.2.0 py38h6a678d5_0 queuelib 1.6.2 pypi_0 pypi re2 2022.04.01 h295c915_0 readline 8.2 h5eee18b_0 regex 2022.10.31 pypi_0 pypi requests 2.28.1 py38h06a4308_0 requests-file 1.5.1 pypi_0 pypi requests-oauthlib 1.3.1 pypi_0 pypi rfc3339-validator 0.1.4 pypi_0 pypi rfc3986-validator 0.1.1 pypi_0 pypi scikit-learn 1.1.3 pypi_0 pypi scipy 1.9.3 pypi_0 pypi scrapy 2.7.1 pypi_0 pypi seaborn 0.12.1 pypi_0 pypi send2trash 1.8.0 pypi_0 pypi sentry-sdk 1.12.1 pypi_0 pypi service-identity 21.1.0 pypi_0 pypi setproctitle 1.3.2 pypi_0 pypi setuptools 65.6.3 pypi_0 pypi shortuuid 1.0.11 pypi_0 pypi six 1.16.0 pyhd3eb1b0_1 smart-open 6.2.0 pypi_0 pypi smmap 5.0.0 pypi_0 pypi snappy 1.1.9 h295c915_0 sniffio 1.3.0 pypi_0 pypi soupsieve 2.3.2.post1 pypi_0 pypi sqlite 3.40.1 h5082296_0 stack-data 0.6.2 pypi_0 pypi stack_data 0.2.0 pyhd3eb1b0_0 terminado 0.17.0 pypi_0 pypi threadpoolctl 3.1.0 pypi_0 pypi tinycss2 1.2.1 pypi_0 pypi tk 8.6.12 h1ccaba5_0 tldextract 3.4.0 pypi_0 pypi tokenizers 0.13.2 pypi_0 pypi tomli 2.0.1 pypi_0 pypi torchvision 0.8.2 py38_cu101 pytorch tornado 6.2 py38h5eee18b_0 tqdm 4.64.1 py38h06a4308_0 traitlets 5.6.0 pypi_0 pypi transformers 4.25.1 pypi_0 pypi tweepy 4.12.1 pypi_0 pypi twisted 22.10.0 pypi_0 pypi twython 3.9.1 pypi_0 pypi typing-extensions 4.4.0 py38h06a4308_0 typing_extensions 4.4.0 py38h06a4308_0 uri-template 1.2.0 pypi_0 pypi uriparser 0.9.3 he6710b0_1 urllib3 1.26.13 pypi_0 pypi utf8proc 2.6.1 h27cfd23_0 w3lib 2.1.0 pypi_0 pypi wandb 0.13.7 pypi_0 pypi wcwidth 0.2.5 pyhd3eb1b0_0 webcolors 1.12 pypi_0 pypi webencodings 0.5.1 pypi_0 pypi websocket-client 1.4.2 pypi_0 pypi werkzeug 2.2.2 py38h06a4308_0 wheel 0.38.4 py38h06a4308_0 widgetsnbextension 4.0.3 py38h06a4308_0 xxhash 0.8.0 h7f8727e_3 xz 5.2.10 h5eee18b_1 y-py 0.5.4 pypi_0 pypi yaml 0.2.5 h7b6447c_0 yarl 1.8.1 py38h5eee18b_0 ypy-websocket 0.5.0 pypi_0 pypi zeromq 4.3.4 h2531618_0 zipp 3.11.0 py38h06a4308_0 zlib 1.2.13 h5eee18b_0 zope-interface 5.5.2 pypi_0 pypi zstd 1.4.9 haebb681_0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5633/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5633/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5632/comments
https://api.github.com/repos/huggingface/datasets/issues/5632/events
https://github.com/huggingface/datasets/issues/5632
1,621,177,391
I_kwDODunzps5goTQv
5,632
Dataset cannot convert a too-large dictionary
{ "login": "MaraLac", "id": 108518627, "node_id": "U_kgDOBnfc4w", "avatar_url": "https://avatars.githubusercontent.com/u/108518627?v=4", "gravatar_id": "", "url": "https://api.github.com/users/MaraLac", "html_url": "https://github.com/MaraLac", "followers_url": "https://api.github.com/users/MaraLac/followers", "following_url": "https://api.github.com/users/MaraLac/following{/other_user}", "gists_url": "https://api.github.com/users/MaraLac/gists{/gist_id}", "starred_url": "https://api.github.com/users/MaraLac/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MaraLac/subscriptions", "organizations_url": "https://api.github.com/users/MaraLac/orgs", "repos_url": "https://api.github.com/users/MaraLac/repos", "events_url": "https://api.github.com/users/MaraLac/events{/privacy}", "received_events_url": "https://api.github.com/users/MaraLac/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-03-13T10:14:40"
"2023-03-16T15:28:57"
null
NONE
null
### Describe the bug Hello everyone! I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})". However, I have a very large dataset (~400 GB) and it seems that `datasets` cannot handle this. Indeed, I can create the dataset up to a certain size of my dictionary, and then I get the error "OverflowError: Python int too large to convert to C long". Do you know how to solve this problem? Unfortunately I cannot give reproducible code because I cannot share such a large file, but you can find the code below (it's a test on only a part of the validation data, ~10 GB, but the error already occurs there). Thank you! ### Steps to reproduce the bug SAVE_DIR = './data/' features = h5py.File(SAVE_DIR+'features.hdf5','r') valid_data = features["validation"]["data/features"] v_array_values = [np.float32(item[()]) for item in valid_data.values()] for i in range(len(v_array_values)): v_array_values[i] = v_array_values[i].round(decimals=5) dict_valid = datasets.Dataset.from_dict({'input_values': v_array_values}) ### Expected behavior The code is expected to give me a Hugging Face dataset. ### Environment info python: 3.8.15 numpy: 1.22.3 datasets: 2.3.2 pyarrow: 8.0.0
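One way around building a single huge Python dict is to stream the HDF5 rows through `Dataset.from_generator` (available in recent `datasets` releases, 2.4+), which writes examples to Arrow incrementally instead of holding everything in memory first. A sketch assuming the same file layout as above:

```python
import h5py
import numpy as np
import datasets

SAVE_DIR = './data/'

def gen():
    with h5py.File(SAVE_DIR + 'features.hdf5', 'r') as features:
        for item in features["validation"]["data/features"].values():
            # Yield one example at a time instead of accumulating a giant list.
            yield {'input_values': np.float32(item[()]).round(decimals=5)}

dict_valid = datasets.Dataset.from_generator(gen)
```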
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5632/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5632/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5631/comments
https://api.github.com/repos/huggingface/datasets/issues/5631/events
https://github.com/huggingface/datasets/issues/5631
1,620,442,854
I_kwDODunzps5glf7m
5,631
Custom split names
{ "login": "ErfanMoosaviMonazzah", "id": 79091831, "node_id": "MDQ6VXNlcjc5MDkxODMx", "avatar_url": "https://avatars.githubusercontent.com/u/79091831?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ErfanMoosaviMonazzah", "html_url": "https://github.com/ErfanMoosaviMonazzah", "followers_url": "https://api.github.com/users/ErfanMoosaviMonazzah/followers", "following_url": "https://api.github.com/users/ErfanMoosaviMonazzah/following{/other_user}", "gists_url": "https://api.github.com/users/ErfanMoosaviMonazzah/gists{/gist_id}", "starred_url": "https://api.github.com/users/ErfanMoosaviMonazzah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ErfanMoosaviMonazzah/subscriptions", "organizations_url": "https://api.github.com/users/ErfanMoosaviMonazzah/orgs", "repos_url": "https://api.github.com/users/ErfanMoosaviMonazzah/repos", "events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/events{/privacy}", "received_events_url": "https://api.github.com/users/ErfanMoosaviMonazzah/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
1
"2023-03-12T17:21:43"
"2023-03-13T18:13:02"
null
NONE
null
### Feature request Hi, I have participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation or test sets. But it seems that currently only those three splits are supported. It would be nice to have support for more splits on the Hub. (Currently I can have more splits when loading datasets from URLs, but not from the Hub.) ### Motivation Easier access to more splits ### Your contribution No
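For reference, the URL/local loading path the author mentions already accepts arbitrary split names via `data_files` (the file names here are hypothetical):

```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files={
        "train": "train.csv",
        "validation_matched": "val_matched.csv",        # arbitrary split names
        "validation_mismatched": "val_mismatched.csv",  # work in this path
    },
)
```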
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5631/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5631/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5630/comments
https://api.github.com/repos/huggingface/datasets/issues/5630/events
https://github.com/huggingface/datasets/pull/5630
1,620,327,510
PR_kwDODunzps5L1ahF
5,630
adds early exit if url is `PathLike`
{ "login": "vvvm23", "id": 44398246, "node_id": "MDQ6VXNlcjQ0Mzk4MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/44398246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vvvm23", "html_url": "https://github.com/vvvm23", "followers_url": "https://api.github.com/users/vvvm23/followers", "following_url": "https://api.github.com/users/vvvm23/following{/other_user}", "gists_url": "https://api.github.com/users/vvvm23/gists{/gist_id}", "starred_url": "https://api.github.com/users/vvvm23/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vvvm23/subscriptions", "organizations_url": "https://api.github.com/users/vvvm23/orgs", "repos_url": "https://api.github.com/users/vvvm23/repos", "events_url": "https://api.github.com/users/vvvm23/events{/privacy}", "received_events_url": "https://api.github.com/users/vvvm23/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-03-12T11:23:28"
"2023-03-15T11:58:38"
null
NONE
null
Closes #4864 Should fix errors thrown when attempting to load a `json` dataset using a `pathlib.Path` in the `data_files` argument.
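The early exit presumably looks something like this (an illustrative sketch, not the exact diff; the function name is hypothetical):

```python
import os
from urllib.parse import urlparse

def is_remote_url(url) -> bool:
    # Early exit: a pathlib.Path (or any os.PathLike) is always local, so
    # return before urlparse is handed a non-string argument.
    if isinstance(url, os.PathLike):
        return False
    return urlparse(str(url)).scheme in ("http", "https", "s3", "gs")
```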
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5630/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5630/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5630", "html_url": "https://github.com/huggingface/datasets/pull/5630", "diff_url": "https://github.com/huggingface/datasets/pull/5630.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5630.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5629/comments
https://api.github.com/repos/huggingface/datasets/issues/5629/events
https://github.com/huggingface/datasets/issues/5629
1,619,921,247
I_kwDODunzps5gjglf
5,629
load_dataset gives "403" error when using Financial phrasebank
{ "login": "Jimchoo91", "id": 67709789, "node_id": "MDQ6VXNlcjY3NzA5Nzg5", "avatar_url": "https://avatars.githubusercontent.com/u/67709789?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Jimchoo91", "html_url": "https://github.com/Jimchoo91", "followers_url": "https://api.github.com/users/Jimchoo91/followers", "following_url": "https://api.github.com/users/Jimchoo91/following{/other_user}", "gists_url": "https://api.github.com/users/Jimchoo91/gists{/gist_id}", "starred_url": "https://api.github.com/users/Jimchoo91/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jimchoo91/subscriptions", "organizations_url": "https://api.github.com/users/Jimchoo91/orgs", "repos_url": "https://api.github.com/users/Jimchoo91/repos", "events_url": "https://api.github.com/users/Jimchoo91/events{/privacy}", "received_events_url": "https://api.github.com/users/Jimchoo91/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
1
"2023-03-11T07:46:39"
"2023-03-13T18:27:26"
null
NONE
null
When I try to load this dataset, I receive the following error: ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403) Has this been seen before? Thanks. The website loads when I try to access it manually.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5629/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5628/comments
https://api.github.com/repos/huggingface/datasets/issues/5628/events
https://github.com/huggingface/datasets/pull/5628
1,619,641,810
PR_kwDODunzps5LzVKi
5,628
add kwargs to index search
{ "login": "SaulLu", "id": 55560583, "node_id": "MDQ6VXNlcjU1NTYwNTgz", "avatar_url": "https://avatars.githubusercontent.com/u/55560583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaulLu", "html_url": "https://github.com/SaulLu", "followers_url": "https://api.github.com/users/SaulLu/followers", "following_url": "https://api.github.com/users/SaulLu/following{/other_user}", "gists_url": "https://api.github.com/users/SaulLu/gists{/gist_id}", "starred_url": "https://api.github.com/users/SaulLu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SaulLu/subscriptions", "organizations_url": "https://api.github.com/users/SaulLu/orgs", "repos_url": "https://api.github.com/users/SaulLu/repos", "events_url": "https://api.github.com/users/SaulLu/events{/privacy}", "received_events_url": "https://api.github.com/users/SaulLu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2023-03-10T21:24:58"
"2023-03-15T14:48:47"
"2023-03-15T14:46:04"
CONTRIBUTOR
null
This PR proposes to add kwargs to index search methods. This is particularly useful for setting the timeout of a query on elasticsearch. A typical use case would be: ```python dset.add_elasticsearch_index("filename", es_client=es_client) scores, examples = dset.get_nearest_examples("filename", "my_name-train_29", request_timeout=60) ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5628/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5628/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5628", "html_url": "https://github.com/huggingface/datasets/pull/5628", "diff_url": "https://github.com/huggingface/datasets/pull/5628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5628.patch", "merged_at": "2023-03-15T14:46:04" }
true
https://api.github.com/repos/huggingface/datasets/issues/5627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5627/comments
https://api.github.com/repos/huggingface/datasets/issues/5627/events
https://github.com/huggingface/datasets/issues/5627
1,619,336,609
I_kwDODunzps5ghR2h
5,627
Unable to load AutoTrain-generated dataset from the hub
{ "login": "ijmiller2", "id": 8560151, "node_id": "MDQ6VXNlcjg1NjAxNTE=", "avatar_url": "https://avatars.githubusercontent.com/u/8560151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ijmiller2", "html_url": "https://github.com/ijmiller2", "followers_url": "https://api.github.com/users/ijmiller2/followers", "following_url": "https://api.github.com/users/ijmiller2/following{/other_user}", "gists_url": "https://api.github.com/users/ijmiller2/gists{/gist_id}", "starred_url": "https://api.github.com/users/ijmiller2/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ijmiller2/subscriptions", "organizations_url": "https://api.github.com/users/ijmiller2/orgs", "repos_url": "https://api.github.com/users/ijmiller2/repos", "events_url": "https://api.github.com/users/ijmiller2/events{/privacy}", "received_events_url": "https://api.github.com/users/ijmiller2/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
2
"2023-03-10T17:25:58"
"2023-03-11T15:44:42"
null
NONE
null
### Describe the bug DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match ``` ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: list<item: string> child 0, item: string _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: null to {'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}} because column names don't match ``` ### Steps to reproduce the bug Steps to reproduce: 1. `pip install datasets==2.10.1` 2. Attempt to load (private dataset). Note that I'm authenticated via ` huggingface-cli login` ``` from datasets import load_dataset # load dataset dataset = "ijmiller2/autotrain-data-betterbin-vision-10000" dataset = load_dataset(dataset) ``` Here's the full traceback: ```Downloading and preparing dataset json/ijmiller2--autotrain-data-betterbin-vision-10000 to /Users/ian/.cache/huggingface/datasets/ijmiller2___json/ijmiller2--autotrain-data-betterbin-vision-10000-2eae034a9ff8a1a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51... Downloading data files: 100%|███████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2383.80it/s] Extracting data files: 100%|█████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 505.95it/s] --------------------------------------------------------------------------- ValueError Traceback (most recent call last) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1874, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1868 writer = writer_class( 1869 features=writer._features, 1870 path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"), 1871 storage_options=self._fs.storage_options, 1872 embed_local_files=embed_local_files, 1873 ) -> 1874 writer.write_table(table) 1875 num_examples_progress_update += len(table) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/arrow_writer.py:568, in ArrowWriter.write_table(self, pa_table, writer_batch_size) 567 pa_table = pa_table.combine_chunks() --> 568 pa_table = table_cast(pa_table, self._schema) 569 if self.embed_local_files: File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2312, in table_cast(table, schema) 2311 if table.schema != schema: -> 2312 return cast_table_to_schema(table, schema) 2313 elif table.schema.metadata != schema.metadata: File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2270, in cast_table_to_schema(table, schema) 2269 if sorted(table.column_names) != sorted(features): -> 2270 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") 2271 arrays = [cast_array_to_feature(table[name], feature) for name, feature in 
features.items()] ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string _fingerprint: string _format_columns: list<item: string> child 0, item: string _format_kwargs: struct<> _format_type: null _indexes: struct<> _output_all_columns: bool _split: null to {'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}} because column names don't match The above exception was the direct cause of the following exception: DatasetGenerationError Traceback (most recent call last) Input In [8], in <cell line: 6>() 4 # load dataset 5 dataset = "ijmiller2/autotrain-data-betterbin-vision-10000" ----> 6 dataset = load_dataset(dataset) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/load.py:1782, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs) 1779 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1781 # Download and prepare data -> 1782 builder_instance.download_and_prepare( 1783 download_config=download_config, 1784 download_mode=download_mode, 1785 verification_mode=verification_mode, 1786 try_from_hf_gcs=try_from_hf_gcs, 1787 num_proc=num_proc, 1788 ) 1790 # Build dataset for splits 1791 keep_in_memory = ( 1792 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1793 ) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:872, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs) 870 if num_proc is not None: 871 prepare_split_kwargs["num_proc"] = num_proc --> 872 self._download_and_prepare( 873 dl_manager=dl_manager, 874 verification_mode=verification_mode, 875 **prepare_split_kwargs, 876 **download_and_prepare_kwargs, 877 ) 878 # Sync info 879 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values()) File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:967, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs) 963 split_dict.add(split_generator.split_info) 965 try: 966 # Prepare split will record examples associated to the split --> 967 self._prepare_split(split_generator, **prepare_split_kwargs) 968 except OSError as e: 969 raise OSError( 970 "Cannot find data file. 
" 971 + (self.manual_download_instructions or "") 972 + "\nOriginal error:\n" 973 + str(e) 974 ) from None File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1749, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size) 1747 job_id = 0 1748 with pbar: -> 1749 for job_id, done, content in self._prepare_split_single( 1750 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args 1751 ): 1752 if done: 1753 result = content File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1892, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id) 1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None: 1891 e = e.__context__ -> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e 1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths) DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior I'm ultimately trying to generate my own performance metrics on validation data (before putting an endpoint into production) and so was hoping to load all or at least the validation subset from the hub. I'm expecting the `load_dataset()` function to work as shown in the documentation [here](https://huggingface.co/docs/datasets/loading#hugging-face-hub): ```python dataset = load_dataset( "lhoestq/custom_squad", revision="main" # tag name, or branch name, or commit hash ) ``` ### Environment info - `datasets` version: 2.10.1 - Platform: macOS-13.2.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5627/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5627/timeline
null
null
null
null
false
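An aside on the report above (issue 5627): the schema in the traceback (`_data_files`, `_fingerprint`, `_format_type`, ...) looks like the `state.json` written by `Dataset.save_to_disk`, which the generic JSON builder then tries to parse as data. A hedged, untested workaround sketch (the repo layout is an assumption):

```python
from huggingface_hub import snapshot_download
from datasets import load_from_disk

# Download the raw repo and load it as a saved-to-disk dataset instead of
# going through the JSON builder. If the repo stores each split in its own
# subdirectory, point load_from_disk at that subdirectory instead.
local_dir = snapshot_download("ijmiller2/autotrain-data-betterbin-vision-10000", repo_type="dataset")
ds = load_from_disk(local_dir)
```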
https://api.github.com/repos/huggingface/datasets/issues/5626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5626/comments
https://api.github.com/repos/huggingface/datasets/issues/5626/events
https://github.com/huggingface/datasets/pull/5626
1,619,252,984
PR_kwDODunzps5LyBT4
5,626
Support streaming datasets with numpy.load
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-03-10T16:33:39"
"2023-03-21T06:36:05"
"2023-03-21T06:28:54"
MEMBER
null
Support streaming datasets with `numpy.load`. See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1
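A minimal sketch of what this enables (the split name is an assumption, not taken from the linked discussion):

```python
from datasets import load_dataset

# With numpy.load patched for streaming, datasets whose loading scripts read
# .npy files can be iterated without downloading everything first.
ds = load_dataset("qgallouedec/gia_dataset", split="train", streaming=True)
print(next(iter(ds)))
```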
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5626/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5626/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5626", "html_url": "https://github.com/huggingface/datasets/pull/5626", "diff_url": "https://github.com/huggingface/datasets/pull/5626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5626.patch", "merged_at": "2023-03-21T06:28:54" }
true
https://api.github.com/repos/huggingface/datasets/issues/5625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5625/comments
https://api.github.com/repos/huggingface/datasets/issues/5625/events
https://github.com/huggingface/datasets/issues/5625
1,618,971,855
I_kwDODunzps5gf4zP
5,625
Allow "jsonl" data type signifier
{ "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "repos_url": "https://api.github.com/users/BramVanroy/repos", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
2
"2023-03-10T13:21:48"
"2023-03-11T10:35:39"
null
CONTRIBUTOR
null
### Feature request `load_dataset` currently does not accept `jsonl` as a type, only `json`. ### Motivation I was working with one of the `run_translation` scripts and used my own `.jsonl` datasets as the train_dataset. But the default code did not work because of ``` FileNotFoundError: Couldn't find a dataset script at jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Dataset 'jsonl' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`. ``` The reason is that the script extracts the data type from the file extension in these lines, so the derived type is `jsonl`, which is not recognized by `datasets`, as the error above shows. https://github.com/huggingface/transformers/blob/ade26bf9912f69e2110137443e4406d7dbe253e7/examples/pytorch/translation/run_translation.py#L342-L356 I suppose you could argue that this is the script's fault (in which case I'll do a PR over at `transformers`), but it makes sense to me to add `jsonl` as an alias for `json` in `datasets`. ### Your contribution At the moment I cannot work on this. I think it can be as "easy" as adding an alias for `json`, namely `jsonl`.
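Until such an alias exists, a workaround is to name the builder explicitly; the `json` builder reads JSON Lines files regardless of the extension:

```python
from datasets import load_dataset

# The "json" packaged builder handles .jsonl content; only the automatic
# extension-to-builder mapping lacks the "jsonl" spelling.
dataset = load_dataset("json", data_files={"train": "train.jsonl"})
```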
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5625/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
https://api.github.com/repos/huggingface/datasets/issues/5624/events
https://github.com/huggingface/datasets/issues/5624
1,617,400,192
I_kwDODunzps5gZ5GA
5,624
glue datasets returning -1 for test split
{ "login": "lithafnium", "id": 8939967, "node_id": "MDQ6VXNlcjg5Mzk5Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/8939967?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lithafnium", "html_url": "https://github.com/lithafnium", "followers_url": "https://api.github.com/users/lithafnium/followers", "following_url": "https://api.github.com/users/lithafnium/following{/other_user}", "gists_url": "https://api.github.com/users/lithafnium/gists{/gist_id}", "starred_url": "https://api.github.com/users/lithafnium/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lithafnium/subscriptions", "organizations_url": "https://api.github.com/users/lithafnium/orgs", "repos_url": "https://api.github.com/users/lithafnium/repos", "events_url": "https://api.github.com/users/lithafnium/events{/privacy}", "received_events_url": "https://api.github.com/users/lithafnium/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
1
"2023-03-09T14:47:18"
"2023-03-09T16:49:29"
"2023-03-09T16:49:29"
NONE
null
### Describe the bug Any dataset downloaded from GLUE has -1 as the class labels for the test split. Train and validation have regular 0/1 class labels. This is also visible in the dataset card online. ### Steps to reproduce the bug ``` dataset = load_dataset("glue", "sst2") for d in dataset["test"]: # prints out -1 print(d["label"]) ``` ### Expected behavior Expected behavior should be 0/1 instead of -1. ### Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - PyArrow version: 8.0.0 - Pandas version: 1.5.3
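For context, this is expected behavior rather than a bug: GLUE test labels are withheld for the benchmark leaderboard, and -1 is the placeholder for unlabeled examples. A short check:

```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2")
print(set(dataset["validation"]["label"]))  # {0, 1} -- labeled
print(set(dataset["test"]["label"]))        # {-1}   -- labels withheld
```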
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5623/comments
https://api.github.com/repos/huggingface/datasets/issues/5623/events
https://github.com/huggingface/datasets/pull/5623
1,616,712,665
PR_kwDODunzps5Lpb4q
5,623
Remove set_access_token usage + fail tests if FutureWarning
{ "login": "Wauplin", "id": 11801849, "node_id": "MDQ6VXNlcjExODAxODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Wauplin", "html_url": "https://github.com/Wauplin", "followers_url": "https://api.github.com/users/Wauplin/followers", "following_url": "https://api.github.com/users/Wauplin/following{/other_user}", "gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}", "starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions", "organizations_url": "https://api.github.com/users/Wauplin/orgs", "repos_url": "https://api.github.com/users/Wauplin/repos", "events_url": "https://api.github.com/users/Wauplin/events{/privacy}", "received_events_url": "https://api.github.com/users/Wauplin/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
6
"2023-03-09T08:46:01"
"2023-03-09T15:39:00"
"2023-03-09T15:31:59"
CONTRIBUTOR
null
`set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`. This PR removes it from the tests (it was not used in `datasets` source code itself). FYI, it was not needed since `set_access_token` was just setting git credentials and `datasets` doesn't seem to use git anywhere. In the future, use `set_git_credential` if needed. It is a git-credential-agnostic helper, i.e. you can store your git token in `git-credential-cache`, `git-credential-store`, `osxkeychain`, etc. The legacy `set_access_token` could only set it in `git-credential-store`, regardless of the user's preference. (for context, I found out about this while working on https://github.com/huggingface/huggingface_hub/pull/1381) --- In addition to this, I have added ``` filterwarnings = error::FutureWarning:huggingface_hub* ``` to the `setup.cfg` config file to fail on future warnings from `huggingface_hub`. In `hfh`'s CI we trigger on FutureWarning from any package, but that is less robust (any package update can lead to a failure). No obligation to keep it like that (I can remove it if you prefer), but I think it's a good idea in order to track future FutureWarnings. FYI, in `huggingface_hub` tests we use `-Werror::FutureWarning --log-cli-level=INFO -sv --durations=0`: FutureWarnings are processed as errors, verbose mode / INFO logs (and above) are captured for easier debugging in the GitHub report, and each test's duration is tracked, just to see where we can improve. We have quite a long CI (~10 min), so it helped improve that.
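A minimal sketch of the non-deprecated replacement (the token value is a placeholder):

```python
from huggingface_hub import login

# login stores the token for later API calls and can optionally configure
# git credentials through the user's chosen credential helper.
login(token="hf_xxx", add_to_git_credential=True)
```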
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5623/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5623", "html_url": "https://github.com/huggingface/datasets/pull/5623", "diff_url": "https://github.com/huggingface/datasets/pull/5623.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5623.patch", "merged_at": "2023-03-09T15:31:58" }
true
https://api.github.com/repos/huggingface/datasets/issues/5622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5622/comments
https://api.github.com/repos/huggingface/datasets/issues/5622/events
https://github.com/huggingface/datasets/pull/5622
1,615,190,942
PR_kwDODunzps5LkSj8
5,622
Update README template to better template
{ "login": "emiltj", "id": 54767532, "node_id": "MDQ6VXNlcjU0NzY3NTMy", "avatar_url": "https://avatars.githubusercontent.com/u/54767532?v=4", "gravatar_id": "", "url": "https://api.github.com/users/emiltj", "html_url": "https://github.com/emiltj", "followers_url": "https://api.github.com/users/emiltj/followers", "following_url": "https://api.github.com/users/emiltj/following{/other_user}", "gists_url": "https://api.github.com/users/emiltj/gists{/gist_id}", "starred_url": "https://api.github.com/users/emiltj/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emiltj/subscriptions", "organizations_url": "https://api.github.com/users/emiltj/orgs", "repos_url": "https://api.github.com/users/emiltj/repos", "events_url": "https://api.github.com/users/emiltj/events{/privacy}", "received_events_url": "https://api.github.com/users/emiltj/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-03-08T12:30:23"
"2023-03-11T05:07:38"
"2023-03-11T05:07:38"
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5622/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5622/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5622", "html_url": "https://github.com/huggingface/datasets/pull/5622", "diff_url": "https://github.com/huggingface/datasets/pull/5622.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5622.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5621/comments
https://api.github.com/repos/huggingface/datasets/issues/5621/events
https://github.com/huggingface/datasets/pull/5621
1,615,029,615
PR_kwDODunzps5LjwD8
5,621
Adding Oracle Cloud to docs
{ "login": "ahosler", "id": 29129502, "node_id": "MDQ6VXNlcjI5MTI5NTAy", "avatar_url": "https://avatars.githubusercontent.com/u/29129502?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ahosler", "html_url": "https://github.com/ahosler", "followers_url": "https://api.github.com/users/ahosler/followers", "following_url": "https://api.github.com/users/ahosler/following{/other_user}", "gists_url": "https://api.github.com/users/ahosler/gists{/gist_id}", "starred_url": "https://api.github.com/users/ahosler/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ahosler/subscriptions", "organizations_url": "https://api.github.com/users/ahosler/orgs", "repos_url": "https://api.github.com/users/ahosler/repos", "events_url": "https://api.github.com/users/ahosler/events{/privacy}", "received_events_url": "https://api.github.com/users/ahosler/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-03-08T10:22:50"
"2023-03-11T00:57:18"
"2023-03-11T00:49:56"
CONTRIBUTOR
null
Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers.
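A hedged sketch of what using such an fsspec implementation could look like (bucket, namespace, and config path are placeholders; requires `pip install ocifs` and valid OCI credentials, and is untested here):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train[:100]")
# storage_options are forwarded to the fsspec filesystem constructor.
ds.save_to_disk(
    "oci://my-bucket@my-namespace/imdb",
    storage_options={"config": "~/.oci/config"},
)
```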
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5621/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5621/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5621", "html_url": "https://github.com/huggingface/datasets/pull/5621", "diff_url": "https://github.com/huggingface/datasets/pull/5621.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5621.patch", "merged_at": "2023-03-11T00:49:56" }
true
https://api.github.com/repos/huggingface/datasets/issues/5620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5620/comments
https://api.github.com/repos/huggingface/datasets/issues/5620/events
https://github.com/huggingface/datasets/pull/5620
1,613,460,520
PR_kwDODunzps5LefAf
5,620
Bump pyarrow to 8.0.0
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
12
"2023-03-07T13:31:53"
"2023-03-08T14:01:27"
"2023-03-08T13:54:22"
MEMBER
null
Fix those for Pandas 2.0 (tested [here](https://github.com/huggingface/datasets/actions/runs/4346221280/jobs/7592010397) with pandas==2.0.0.rc0): ```python =========================== short test summary info ============================ FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_in_memory - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'. A suitable version of pyarrow or fastparquet is required for parquet support. Trying to import the above resulted in these errors: - Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed). - Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet. FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_on_disk - ImportError: Unable to find a usable engine; tried using: 'pyarrow', 'fastparquet'. A suitable version of pyarrow or fastparquet is required for parquet support. Trying to import the above resulted in these errors: - Pandas requires version '7.0.0' or newer of 'pyarrow' (version '6.0.1' currently installed). - Missing optional dependency 'fastparquet'. fastparquet is required for parquet support. Use pip or conda to install fastparquet. ===== 2 failed, 2137 passed, 18 skipped, 32 warnings in 212.76s (0:03:32) ====== ``` EDIT: also for performance - with 8.0 we can use `.to_reader()`
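On the performance note, a small sketch of the API unlocked by the bump:

```python
import pyarrow as pa

table = pa.table({"ids": list(range(10))})
# Table.to_reader is available from pyarrow 8.0.0; it streams the table as
# record batches instead of materializing everything at once.
reader = table.to_reader(max_chunksize=4)
for batch in reader:
    print(batch.num_rows)  # 4, 4, 2
```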
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5620/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5620", "html_url": "https://github.com/huggingface/datasets/pull/5620", "diff_url": "https://github.com/huggingface/datasets/pull/5620.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5620.patch", "merged_at": "2023-03-08T13:54:21" }
true
https://api.github.com/repos/huggingface/datasets/issues/5619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5619/comments
https://api.github.com/repos/huggingface/datasets/issues/5619/events
https://github.com/huggingface/datasets/pull/5619
1,613,439,709
PR_kwDODunzps5LeaYP
5,619
unpin fsspec
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
3
"2023-03-07T13:22:41"
"2023-03-07T13:47:01"
"2023-03-07T13:39:02"
MEMBER
null
close https://github.com/huggingface/datasets/issues/5618
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5619/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5619", "html_url": "https://github.com/huggingface/datasets/pull/5619", "diff_url": "https://github.com/huggingface/datasets/pull/5619.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5619.patch", "merged_at": "2023-03-07T13:39:02" }
true
https://api.github.com/repos/huggingface/datasets/issues/5618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5618/comments
https://api.github.com/repos/huggingface/datasets/issues/5618/events
https://github.com/huggingface/datasets/issues/5618
1,612,977,934
I_kwDODunzps5gJBcO
5,618
Unpin fsspec < 2023.3.0 once issue fixed
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
0
"2023-03-07T08:41:51"
"2023-03-07T13:39:03"
"2023-03-07T13:39:03"
MEMBER
null
Unpin `fsspec` upper version once root cause of our CI break is fixed. See: - #5614
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5618/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5618/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5617/comments
https://api.github.com/repos/huggingface/datasets/issues/5617/events
https://github.com/huggingface/datasets/pull/5617
1,612,947,422
PR_kwDODunzps5LcvI-
5,617
Fix CI by temporarily pinning fsspec < 2023.3.0
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
2
"2023-03-07T08:18:20"
"2023-03-07T08:44:55"
"2023-03-07T08:37:28"
MEMBER
null
As a hotfix for our CI, temporarily pin `fsspec`: Fix #5616. Until root cause is fixed, see: - #5614
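For illustration, the kind of pin this implies in `setup.py` (the exact specifier used by the PR may differ):

```python
# Hypothetical excerpt of setup.py install_requires; the lower bound shown
# here is an assumption, and only the < 2023.3.0 cap is the point of the hotfix.
install_requires = [
    "fsspec[http]>=2021.11.1,<2023.3.0",
    # ...
]
```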
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5617/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5617", "html_url": "https://github.com/huggingface/datasets/pull/5617", "diff_url": "https://github.com/huggingface/datasets/pull/5617.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5617.patch", "merged_at": "2023-03-07T08:37:28" }
true
https://api.github.com/repos/huggingface/datasets/issues/5616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5616/comments
https://api.github.com/repos/huggingface/datasets/issues/5616/events
https://github.com/huggingface/datasets/issues/5616
1,612,932,508
I_kwDODunzps5gI2Wc
5,616
CI is broken after fsspec-2023.3.0 release
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
closed
false
null
[]
null
0
"2023-03-07T08:06:39"
"2023-03-07T08:37:29"
"2023-03-07T08:37:29"
MEMBER
null
As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release: ``` FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677.1887748, 'islink': False, 'mode': 33188, 'uid': 1001, 'gid': 123, 'mtime': 1678175677.1887748, 'ino': 286957, 'nlink': 1} != 'file.txt' Full diff: [ - 'file.txt', + {'created': 1678175677.1887748, + 'gid': 123, + 'ino': 286957, + 'islink': False, + 'mode': 33188, + 'mtime': 1678175677.1887748, + 'name': 'file.txt', + 'nlink': 1, + 'size': 70, + 'type': 'file', + 'uid': 1001}, ] ``` Also: ``` FAILED tests/test_filesystem.py::test_compression_filesystems[GzipFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[Lz4FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[XzFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] FAILED tests/test_filesystem.py::test_compression_filesystems[ZstdFileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] ===== 5 failed, 2134 passed, 18 skipped, 38 warnings in 157.21s (0:02:37) ====== ``` See: - fsspec/filesystem_spec#1205
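For code that must work on both sides of this change, being explicit about `detail` avoids relying on the default return type of `ls`:

```python
import fsspec

fs = fsspec.filesystem("file")
names = fs.ls("/tmp", detail=False)   # always plain paths
entries = fs.ls("/tmp", detail=True)  # always dicts with metadata
names_too = [entry["name"] for entry in entries]
```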
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5616/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5616/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5615/comments
https://api.github.com/repos/huggingface/datasets/issues/5615/events
https://github.com/huggingface/datasets/issues/5615
1,612,552,653
I_kwDODunzps5gHZnN
5,615
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
{ "login": "zsaladin", "id": 6466389, "node_id": "MDQ6VXNlcjY0NjYzODk=", "avatar_url": "https://avatars.githubusercontent.com/u/6466389?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zsaladin", "html_url": "https://github.com/zsaladin", "followers_url": "https://api.github.com/users/zsaladin/followers", "following_url": "https://api.github.com/users/zsaladin/following{/other_user}", "gists_url": "https://api.github.com/users/zsaladin/gists{/gist_id}", "starred_url": "https://api.github.com/users/zsaladin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zsaladin/subscriptions", "organizations_url": "https://api.github.com/users/zsaladin/orgs", "repos_url": "https://api.github.com/users/zsaladin/repos", "events_url": "https://api.github.com/users/zsaladin/events{/privacy}", "received_events_url": "https://api.github.com/users/zsaladin/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892913, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEz", "url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix", "name": "wontfix", "color": "ffffff", "default": true, "description": "This will not be worked on" } ]
closed
false
null
[]
null
1
"2023-03-07T01:52:00"
"2023-03-09T15:24:05"
"2023-03-09T15:23:54"
NONE
null
### Describe the bug `IterableDataset.add_column` raises an exception when passed another `IterableDataset` as a parameter. The method seems to accept only eagerly evaluated values. https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391 I wrote the code below to work around it. ```py def add_column(dataset: IterableDataset, name: str, add_dataset: IterableDataset, key: str) -> IterableDataset: iter_add_dataset = iter(add_dataset) def add_column_fn(example): if name in example: raise ValueError(f"Error when adding {name}: column {name} is already in the dataset.") return {name: next(iter_add_dataset)[key]} return dataset.map(add_column_fn) ``` Is there another way to do it? Or is this intended? ### Steps to reproduce the bug The code below raises `NotImplementedError`: ```py from datasets import IterableDataset def gen(num): yield {f"col{num}": 1} yield {f"col{num}": 2} yield {f"col{num}": 3} ids1 = IterableDataset.from_generator(gen, gen_kwargs={"num": 1}) ids2 = IterableDataset.from_generator(gen, gen_kwargs={"num": 2}) new_ids = ids1.add_column("new_col", ids2) for row in new_ids: print(row) ``` ### Expected behavior `IterableDataset.add_column` should be able to take an `IterableDataset` (or other lazily evaluated values) as a parameter, since `IterableDataset` is itself lazily evaluated. ### Environment info - `datasets` version: 2.8.0 - Platform: Linux-3.10.0-1160.36.2.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.9.7 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
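Usage of the workaround helper from the report above (assumes the `add_column`, `ids1`, and `ids2` definitions from the two snippets):

```python
# Zips ids2's "col2" values into ids1 as a new column, lazily.
new_ids = add_column(ids1, "new_col", ids2, "col2")
for row in new_ids:
    print(row)  # {'col1': 1, 'new_col': 1}, {'col1': 2, 'new_col': 2}, ...
```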
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5615/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5614/comments
https://api.github.com/repos/huggingface/datasets/issues/5614/events
https://github.com/huggingface/datasets/pull/5614
1,611,896,357
PR_kwDODunzps5LZOTd
5,614
Fix archive fs test
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "repos_url": "https://api.github.com/users/lhoestq/repos", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
4
"2023-03-06T17:28:09"
"2023-03-07T13:27:50"
"2023-03-07T13:20:57"
MEMBER
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5614/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5614/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5614", "html_url": "https://github.com/huggingface/datasets/pull/5614", "diff_url": "https://github.com/huggingface/datasets/pull/5614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5614.patch", "merged_at": "2023-03-07T13:20:57" }
true

```yaml
dataset_info:
  features:
  - name: url
    dtype: string
  - name: repository_url
    dtype: string
  - name: labels_url
    dtype: string
  - name: comments_url
    dtype: string
  - name: events_url
    dtype: string
  - name: html_url
    dtype: string
  - name: id
    dtype: int64
  - name: node_id
    dtype: string
  - name: number
    dtype: int64
  - name: title
    dtype: string
  - name: user
    struct:
    - name: login
      dtype: string
    - name: id
      dtype: int64
    - name: node_id
      dtype: string
    - name: avatar_url
      dtype: string
    - name: gravatar_id
      dtype: string
    - name: url
      dtype: string
    - name: html_url
      dtype: string
    - name: followers_url
      dtype: string
    - name: following_url
      dtype: string
    - name: gists_url
      dtype: string
    - name: starred_url
      dtype: string
    - name: subscriptions_url
      dtype: string
    - name: organizations_url
      dtype: string
    - name: repos_url
      dtype: string
    - name: events_url
      dtype: string
    - name: received_events_url
      dtype: string
    - name: type
      dtype: string
    - name: site_admin
      dtype: bool
  - name: labels
    list:
    - name: id
      dtype: int64
    - name: node_id
      dtype: string
    - name: url
      dtype: string
    - name: name
      dtype: string
    - name: color
      dtype: string
    - name: default
      dtype: bool
    - name: description
      dtype: string
  - name: state
    dtype: string
  - name: locked
    dtype: bool
  - name: assignee
    struct:
    - name: login
      dtype: string
    - name: id
      dtype: int64
    - name: node_id
      dtype: string
    - name: avatar_url
      dtype: string
    - name: gravatar_id
      dtype: string
    - name: url
      dtype: string
    - name: html_url
      dtype: string
    - name: followers_url
      dtype: string
    - name: following_url
      dtype: string
    - name: gists_url
      dtype: string
    - name: starred_url
      dtype: string
    - name: subscriptions_url
      dtype: string
    - name: organizations_url
      dtype: string
    - name: repos_url
      dtype: string
    - name: events_url
      dtype: string
    - name: received_events_url
      dtype: string
    - name: type
      dtype: string
    - name: site_admin
      dtype: bool
  - name: assignees
    list:
    - name: login
      dtype: string
    - name: id
      dtype: int64
    - name: node_id
      dtype: string
    - name: avatar_url
      dtype: string
    - name: gravatar_id
      dtype: string
    - name: url
      dtype: string
    - name: html_url
      dtype: string
    - name: followers_url
      dtype: string
    - name: following_url
      dtype: string
    - name: gists_url
      dtype: string
    - name: starred_url
      dtype: string
    - name: subscriptions_url
      dtype: string
    - name: organizations_url
      dtype: string
    - name: repos_url
      dtype: string
    - name: events_url
      dtype: string
    - name: received_events_url
      dtype: string
    - name: type
      dtype: string
    - name: site_admin
      dtype: bool
  - name: milestone
    dtype: 'null'
  - name: comments
    dtype: int64
  - name: created_at
    dtype: timestamp[s]
  - name: updated_at
    dtype: timestamp[s]
  - name: closed_at
    dtype: timestamp[s]
  - name: author_association
    dtype: string
  - name: active_lock_reason
    dtype: 'null'
  - name: body
    dtype: string
  - name: reactions
    struct:
    - name: url
      dtype: string
    - name: total_count
      dtype: int64
    - name: '+1'
      dtype: int64
    - name: '-1'
      dtype: int64
    - name: laugh
      dtype: int64
    - name: hooray
      dtype: int64
    - name: confused
      dtype: int64
    - name: heart
      dtype: int64
    - name: rocket
      dtype: int64
    - name: eyes
      dtype: int64
  - name: timeline_url
    dtype: string
  - name: performed_via_github_app
    dtype: 'null'
  - name: state_reason
    dtype: string
  - name: draft
    dtype: bool
  - name: pull_request
    struct:
    - name: url
      dtype: string
    - name: html_url
      dtype: string
    - name: diff_url
      dtype: string
    - name: patch_url
      dtype: string
    - name: merged_at
      dtype: timestamp[s]
  - name: is_pull_request
    dtype: bool
  splits:
  - name: train
    num_bytes: 201451
    num_examples: 60
  download_size: 0
  dataset_size: 201451
```

Dataset Card for "github-issues"

More Information needed
