Dataset columns (type and value statistics):

| Column | Type | Value stats |
|---|---|---|
| url | string | length 61–61 |
| repository_url | string | 1 distinct value |
| labels_url | string | length 75–75 |
| comments_url | string | length 70–70 |
| events_url | string | length 68–68 |
| html_url | string | length 49–51 |
| id | int64 | 947M–1.66B |
| node_id | string | length 18–32 |
| number | int64 | 2.67k–5.73k |
| title | string | length 1–290 |
| user | dict | |
| labels | list | |
| state | string | 2 distinct values |
| locked | bool | 1 distinct value |
| assignee | dict | |
| assignees | list | |
| milestone | dict | |
| comments | sequence | |
| created_at | unknown | |
| updated_at | unknown | |
| closed_at | unknown | |
| author_association | string | 3 distinct values |
| active_lock_reason | null | |
| body | string | length 0–36.2k |
| reactions | dict | |
| timeline_url | string | length 70–70 |
| performed_via_github_app | null | |
| state_reason | string | 3 distinct values |
| draft | bool | 2 distinct values |
| pull_request | dict | |
| is_pull_request | bool | 2 distinct values |
https://api.github.com/repos/huggingface/datasets/issues/5727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5727/comments
https://api.github.com/repos/huggingface/datasets/issues/5727/events
https://github.com/huggingface/datasets/issues/5727
1,661,536,363
I_kwDODunzps5jCQhr
5,727
load_dataset fails with FileNotFound error on Windows
{ "login": "joelkowalewski", "id": 122648572, "node_id": "U_kgDOB093_A", "avatar_url": "https://avatars.githubusercontent.com/u/122648572?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joelkowalewski", "html_url": "https://github.com/joelkowalewski", "followers_url": "https://api.github.com/users/joelkowalewski/followers", "following_url": "https://api.github.com/users/joelkowalewski/following{/other_user}", "gists_url": "https://api.github.com/users/joelkowalewski/gists{/gist_id}", "starred_url": "https://api.github.com/users/joelkowalewski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/joelkowalewski/subscriptions", "organizations_url": "https://api.github.com/users/joelkowalewski/orgs", "repos_url": "https://api.github.com/users/joelkowalewski/repos", "events_url": "https://api.github.com/users/joelkowalewski/events{/privacy}", "received_events_url": "https://api.github.com/users/joelkowalewski/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-10T23:21:12"
"2023-04-10T23:21:12"
null
NONE
null
### Describe the bug Although I can import and run the datasets library in a Colab environment, I cannot successfully load any data on my own machine (Windows 10) despite following the install steps: (1) create conda environment (2) activate environment (3) install with: ``conda` install -c huggingface -c conda-forge datasets` Then ``` from datasets import load_dataset # this or any other example from the website fails with the FileNotFoundError glue = load_dataset("glue", "ax") ``` **Below I have pasted the error omitting the full path**: ``` raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at C:\Users\...\glue\glue.py or any data file in the same directory. Couldn't find 'glue' on the Hugging Face Hub either: FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\...\\.cache\\huggingface' ``` ### Steps to reproduce the bug On Windows 10 1) create a minimal conda environment (with just Python) (2) activate environment (3) install datasets with: ``conda` install -c huggingface -c conda-forge datasets` (4) import load_dataset and follow example usage from any dataset card. ### Expected behavior The expected behavior is to load the file into the Python session running on my machine without error. ### Environment info ``` # Name Version Build Channel aiohttp 3.8.4 py311ha68e1ae_0 conda-forge aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge arrow-cpp 11.0.0 h57928b3_13_cpu conda-forge async-timeout 4.0.2 pyhd8ed1ab_0 conda-forge attrs 22.2.0 pyh71513ae_0 conda-forge aws-c-auth 0.6.26 h1262f0c_1 conda-forge aws-c-cal 0.5.21 h7cda486_2 conda-forge aws-c-common 0.8.14 hcfcfb64_0 conda-forge aws-c-compression 0.2.16 h8a79959_5 conda-forge aws-c-event-stream 0.2.20 h5f78564_4 conda-forge aws-c-http 0.7.6 h2545be9_0 conda-forge aws-c-io 0.13.19 h0d2781e_3 conda-forge aws-c-mqtt 0.8.6 hd211e0c_12 conda-forge aws-c-s3 0.2.7 h8113e7b_1 conda-forge aws-c-sdkutils 0.1.8 h8a79959_0 conda-forge aws-checksums 0.1.14 h8a79959_5 conda-forge aws-crt-cpp 0.19.8 he6d3b81_12 conda-forge aws-sdk-cpp 1.10.57 h64004b3_8 conda-forge brotlipy 0.7.0 py311ha68e1ae_1005 conda-forge bzip2 1.0.8 h8ffe710_4 conda-forge c-ares 1.19.0 h2bbff1b_0 ca-certificates 2023.01.10 haa95532_0 certifi 2022.12.7 pyhd8ed1ab_0 conda-forge cffi 1.15.1 py311h7d9ee11_3 conda-forge charset-normalizer 2.1.1 pyhd8ed1ab_0 conda-forge colorama 0.4.6 pyhd8ed1ab_0 conda-forge cryptography 40.0.1 py311h28e9c30_0 conda-forge dataclasses 0.8 pyhc8e2a94_3 conda-forge datasets 2.11.0 py_0 huggingface dill 0.3.6 pyhd8ed1ab_1 conda-forge filelock 3.11.0 pyhd8ed1ab_0 conda-forge frozenlist 1.3.3 py311ha68e1ae_0 conda-forge fsspec 2023.4.0 pyh1a96a4e_0 conda-forge gflags 2.2.2 ha925a31_1004 conda-forge glog 0.6.0 h4797de2_0 conda-forge huggingface_hub 0.13.4 py_0 huggingface idna 3.4 pyhd8ed1ab_0 conda-forge importlib-metadata 6.3.0 pyha770c72_0 conda-forge importlib_metadata 6.3.0 hd8ed1ab_0 conda-forge intel-openmp 2023.0.0 h57928b3_25922 conda-forge krb5 1.20.1 heb0366b_0 conda-forge libabseil 20230125.0 cxx17_h63175ca_1 conda-forge libarrow 11.0.0 h04c43f8_13_cpu conda-forge libblas 3.9.0 16_win64_mkl conda-forge libbrotlicommon 1.0.9 hcfcfb64_8 conda-forge libbrotlidec 1.0.9 hcfcfb64_8 conda-forge libbrotlienc 1.0.9 hcfcfb64_8 conda-forge libcblas 3.9.0 16_win64_mkl conda-forge libcrc32c 1.1.2 h0e60522_0 conda-forge libcurl 7.88.1 h68f0423_1 conda-forge libexpat 2.5.0 h63175ca_1 conda-forge libffi 3.4.2 h8ffe710_5 conda-forge libgoogle-cloud 2.8.0 hf2ff781_1 conda-forge libgrpc 1.52.1 h32da247_1 
conda-forge libhwloc 2.9.0 h51c2c0f_0 conda-forge libiconv 1.17 h8ffe710_0 conda-forge liblapack 3.9.0 16_win64_mkl conda-forge libprotobuf 3.21.12 h12be248_0 conda-forge libsqlite 3.40.0 hcfcfb64_0 conda-forge libssh2 1.10.0 h9a1e1f7_3 conda-forge libthrift 0.18.1 h9ce19ad_0 conda-forge libutf8proc 2.8.0 h82a8f57_0 conda-forge libxml2 2.10.3 hc3477c8_6 conda-forge libzlib 1.2.13 hcfcfb64_4 conda-forge lz4-c 1.9.4 hcfcfb64_0 conda-forge mkl 2022.1.0 h6a75c08_874 conda-forge multidict 6.0.4 py311ha68e1ae_0 conda-forge multiprocess 0.70.14 py311ha68e1ae_3 conda-forge numpy 1.24.2 py311h0b4df5a_0 conda-forge openssl 3.1.0 hcfcfb64_0 conda-forge orc 1.8.3 hada7b9e_0 conda-forge packaging 23.0 pyhd8ed1ab_0 conda-forge pandas 2.0.0 py311hf63dbb6_0 conda-forge parquet-cpp 1.5.1 2 conda-forge pip 23.0.1 pyhd8ed1ab_0 conda-forge pthreads-win32 2.9.1 hfa6e2cd_3 conda-forge pyarrow 11.0.0 py311h6a6099b_13_cpu conda-forge pycparser 2.21 pyhd8ed1ab_0 conda-forge pyopenssl 23.1.1 pyhd8ed1ab_0 conda-forge pysocks 1.7.1 pyh0701188_6 conda-forge python 3.11.3 h2628c8c_0_cpython conda-forge python-dateutil 2.8.2 pyhd8ed1ab_0 conda-forge python-tzdata 2023.3 pyhd8ed1ab_0 conda-forge python-xxhash 3.2.0 py311ha68e1ae_0 conda-forge python_abi 3.11 3_cp311 conda-forge pytz 2023.3 pyhd8ed1ab_0 conda-forge pyyaml 6.0 py311ha68e1ae_5 conda-forge re2 2023.02.02 h63175ca_0 conda-forge requests 2.28.2 pyhd8ed1ab_1 conda-forge setuptools 67.6.1 pyhd8ed1ab_0 conda-forge six 1.16.0 pyh6c4a22f_0 conda-forge snappy 1.1.10 hfb803bf_0 conda-forge tbb 2021.8.0 h91493d7_0 conda-forge tk 8.6.12 h8ffe710_0 conda-forge tqdm 4.65.0 pyhd8ed1ab_1 conda-forge typing-extensions 4.5.0 hd8ed1ab_0 conda-forge typing_extensions 4.5.0 pyha770c72_0 conda-forge tzdata 2023c h71feb2d_0 conda-forge ucrt 10.0.22621.0 h57928b3_0 conda-forge urllib3 1.26.15 pyhd8ed1ab_0 conda-forge vc 14.3 hb6edc58_10 conda-forge vs2015_runtime 14.34.31931 h4c5c07a_10 conda-forge wheel 0.40.0 pyhd8ed1ab_0 conda-forge win_inet_pton 1.1.0 pyhd8ed1ab_6 conda-forge xxhash 0.8.1 hcfcfb64_0 conda-forge xz 5.2.10 h8cc25b3_1 yaml 0.2.5 h8ffe710_2 conda-forge yarl 1.8.2 py311ha68e1ae_0 conda-forge zipp 3.15.0 pyhd8ed1ab_0 conda-forge zlib 1.2.13 hcfcfb64_4 conda-forge zstd 1.5.4 hd43e919_0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5727/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5726/comments
https://api.github.com/repos/huggingface/datasets/issues/5726/events
https://github.com/huggingface/datasets/issues/5726
1,660,944,807
I_kwDODunzps5jAAGn
5,726
Fallback JSON Dataset loading does not load all values when features specified manually
{ "login": "myluki2000", "id": 3610788, "node_id": "MDQ6VXNlcjM2MTA3ODg=", "avatar_url": "https://avatars.githubusercontent.com/u/3610788?v=4", "gravatar_id": "", "url": "https://api.github.com/users/myluki2000", "html_url": "https://github.com/myluki2000", "followers_url": "https://api.github.com/users/myluki2000/followers", "following_url": "https://api.github.com/users/myluki2000/following{/other_user}", "gists_url": "https://api.github.com/users/myluki2000/gists{/gist_id}", "starred_url": "https://api.github.com/users/myluki2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/myluki2000/subscriptions", "organizations_url": "https://api.github.com/users/myluki2000/orgs", "repos_url": "https://api.github.com/users/myluki2000/repos", "events_url": "https://api.github.com/users/myluki2000/events{/privacy}", "received_events_url": "https://api.github.com/users/myluki2000/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-10T15:22:14"
"2023-04-10T15:22:56"
null
NONE
null
### Describe the bug

The fallback JSON dataset loader located here:

https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L130-L153

does not load the values of features correctly when features are specified manually and not all features have a value in the first entry of the dataset. I'm pretty sure this is not the expected behavior.

To fix this, you'd have to change this line:

https://github.com/huggingface/datasets/blob/1c4ec00511868bd881e84a6f7e0333648d833b8e/src/datasets/packaged_modules/json/json.py#L140

to pass a schema to pyarrow that has the same structure as the `features` argument passed to the `load_dataset()` method.

### Steps to reproduce the bug

Consider a dataset JSON like this:

```
[
  {
    "instruction": "Do stuff",
    "output": "Answer stuff"
  },
  {
    "instruction": "Do stuff2",
    "input": "Additional Input2",
    "output": "Answer stuff2"
  }
]
```

Using this code to load the dataset:

```
from datasets import load_dataset, Features, Value

features = {
    "instruction": Value("string"),
    "input": Value("string"),
    "output": Value("string")
}
features = Features(features)

ds = load_dataset("json", data_files="./ds.json", features=features)
for row in ds["train"]:
    print(row)
```

we get a dataset that looks like this:

| **Instruction** | **Input** | **Output** |
|-----------------|-----------|-----------------|
| "Do stuff"      | None      | "Answer Stuff"  |
| "Do stuff2"     | None      | "Answer Stuff2" |

### Expected behavior

The "input" column should contain values other than None for dataset entries that have the "input" attribute set:

| **Instruction** | **Input**           | **Output**      |
|-----------------|---------------------|-----------------|
| "Do stuff"      | None                | "Answer Stuff"  |
| "Do stuff2"     | "Additional Input2" | "Answer Stuff2" |

### Environment info

Python 3.10.10
Datasets 2.11.0
Windows 10
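A minimal sketch of the kind of schema-aware table construction the report suggests (plain `json` + `pyarrow`, not the library's actual code path; the file name `ds.json` is taken from the example above):

```python
import json

import pyarrow as pa
from datasets import Features, Value

features = Features({
    "instruction": Value("string"),
    "input": Value("string"),
    "output": Value("string"),
})

with open("ds.json", encoding="utf-8") as f:
    records = json.load(f)

# Building the Arrow table against the declared schema keeps keys that are
# missing from the first record, instead of dropping them during inference.
table = pa.Table.from_pylist(records, schema=features.arrow_schema)
print(table.to_pydict())  # "input" is null only where the source record omits it
```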
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5726/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5726/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5725/comments
https://api.github.com/repos/huggingface/datasets/issues/5725/events
https://github.com/huggingface/datasets/issues/5725
1,660,455,202
I_kwDODunzps5i-Iki
5,725
How to limit the number of examples in a dataset, for testing?
{ "login": "ndvbd", "id": 845175, "node_id": "MDQ6VXNlcjg0NTE3NQ==", "avatar_url": "https://avatars.githubusercontent.com/u/845175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ndvbd", "html_url": "https://github.com/ndvbd", "followers_url": "https://api.github.com/users/ndvbd/followers", "following_url": "https://api.github.com/users/ndvbd/following{/other_user}", "gists_url": "https://api.github.com/users/ndvbd/gists{/gist_id}", "starred_url": "https://api.github.com/users/ndvbd/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ndvbd/subscriptions", "organizations_url": "https://api.github.com/users/ndvbd/orgs", "repos_url": "https://api.github.com/users/ndvbd/repos", "events_url": "https://api.github.com/users/ndvbd/events{/privacy}", "received_events_url": "https://api.github.com/users/ndvbd/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-10T08:41:43"
"2023-04-10T08:41:43"
null
NONE
null
### Describe the bug

I am using this command: `data = load_dataset("json", data_files=data_path)`. However, I want to add a parameter to limit the number of loaded examples to 10, for development purposes, but can't find such a parameter.

### Steps to reproduce the bug

In the description.

### Expected behavior

To be able to limit the number of loaded examples.

### Environment info

Nothing special
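For reference, a sketch of two ways this is commonly done with the existing `datasets` API (the file path is a hypothetical stand-in for the reporter's `data_path`):

```python
from datasets import load_dataset

data_path = "data.json"  # hypothetical path standing in for the reporter's data_path

# Option 1: slice the split at load time using the split-spec slicing syntax.
small = load_dataset("json", data_files=data_path, split="train[:10]")

# Option 2: load the full split, then keep only the first 10 rows.
data = load_dataset("json", data_files=data_path, split="train")
small = data.select(range(10))
```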
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5725/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5724/comments
https://api.github.com/repos/huggingface/datasets/issues/5724/events
https://github.com/huggingface/datasets/issues/5724
1,659,938,135
I_kwDODunzps5i8KVX
5,724
Error after shuffling streaming IterableDatasets with downloaded dataset
{ "login": "szxiangjn", "id": 41177966, "node_id": "MDQ6VXNlcjQxMTc3OTY2", "avatar_url": "https://avatars.githubusercontent.com/u/41177966?v=4", "gravatar_id": "", "url": "https://api.github.com/users/szxiangjn", "html_url": "https://github.com/szxiangjn", "followers_url": "https://api.github.com/users/szxiangjn/followers", "following_url": "https://api.github.com/users/szxiangjn/following{/other_user}", "gists_url": "https://api.github.com/users/szxiangjn/gists{/gist_id}", "starred_url": "https://api.github.com/users/szxiangjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/szxiangjn/subscriptions", "organizations_url": "https://api.github.com/users/szxiangjn/orgs", "repos_url": "https://api.github.com/users/szxiangjn/repos", "events_url": "https://api.github.com/users/szxiangjn/events{/privacy}", "received_events_url": "https://api.github.com/users/szxiangjn/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-09T16:58:44"
"2023-04-09T16:58:44"
null
NONE
null
### Describe the bug I downloaded the C4 dataset, and used streaming IterableDatasets to read it. Everything went normal until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. Shuffled dataset will throw the following error when it is used by `next(iter(dataset))`: ``` File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__ for key, example in ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__ for x in self.ex_iterable: File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__ yield from self.generate_examples_fn(**kwargs_with_shuffled_shards) File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper for key, table in generate_tables_fn(**kwargs): File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables batch = f.read(self.config.chunksize) File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries out = read(*args, **kwargs) File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read return self._buffer.read(size) File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto data = self.read(len(byte_view)) File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read if not self._read_gzip_header(): File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header raise BadGzipFile('Not a gzipped file (%r)' % magic) gzip.BadGzipFile: Not a gzipped file (b've') ``` I found that there is no problem to use the dataset in this way without shuffling. Also, use `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which will download the dataset on-the-fly instead of loading from the local file, will also not have problems even after shuffle. ### Steps to reproduce the bug 1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4 2. ``` import datasets dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train') dataset = dataset.shuffle(buffer_size=10_000, seed=42) next(iter(dataset)) ``` ### Expected behavior `next(iter(dataset))` should give me a sample from the dataset ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28 - Python version: 3.9.16 - Huggingface_hub version: 0.13.1 - PyArrow version: 11.0.0 - Pandas version: 1.5.3
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5724/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5724/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5722/comments
https://api.github.com/repos/huggingface/datasets/issues/5722/events
https://github.com/huggingface/datasets/issues/5722
1,659,837,510
I_kwDODunzps5i7xxG
5,722
Distributed Training Error on Customized Dataset
{ "login": "wlhgtc", "id": 16603773, "node_id": "MDQ6VXNlcjE2NjAzNzcz", "avatar_url": "https://avatars.githubusercontent.com/u/16603773?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wlhgtc", "html_url": "https://github.com/wlhgtc", "followers_url": "https://api.github.com/users/wlhgtc/followers", "following_url": "https://api.github.com/users/wlhgtc/following{/other_user}", "gists_url": "https://api.github.com/users/wlhgtc/gists{/gist_id}", "starred_url": "https://api.github.com/users/wlhgtc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wlhgtc/subscriptions", "organizations_url": "https://api.github.com/users/wlhgtc/orgs", "repos_url": "https://api.github.com/users/wlhgtc/repos", "events_url": "https://api.github.com/users/wlhgtc/events{/privacy}", "received_events_url": "https://api.github.com/users/wlhgtc/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node." ]
"2023-04-09T11:04:59"
"2023-04-09T16:33:00"
null
NONE
null
Hi guys, recently I tried to use `datasets` to train a dual encoder. I finish my own datasets according to the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script) Here are my code: ```python class RetrivalDataset(datasets.GeneratorBasedBuilder): """CrossEncoder dataset.""" BUILDER_CONFIGS = [RetrivalConfig(name="DuReader")] # DEFAULT_CONFIG_NAME = "DuReader" def _info(self): return datasets.DatasetInfo( features=datasets.Features( { "id": datasets.Value("string"), "question": datasets.Value("string"), "documents": Sequence(datasets.Value("string")), } ), supervised_keys=None, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" train_file = self.config.data_dir + self.config.train_file valid_file = self.config.data_dir + self.config.valid_file logger.info(f"Training on {self.config.train_file}") logger.info(f"Evaluating on {self.config.valid_file}") return [ datasets.SplitGenerator( name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_file} ), datasets.SplitGenerator( name=datasets.Split.VALIDATION, gen_kwargs={"file_path": valid_file} ), ] def _generate_examples(self, file_path): with jsonlines.open(file_path, "r") as f: for record in f: label = record["label"] question = record["question"] # dual encoder all_documents = record["all_documents"] positive_paragraph = all_documents.pop(label) all_documents = [positive_paragraph] + all_documents u_id = "{}_#_{}".format( md5_hash(question + "".join(all_documents)), "".join(random.sample(string.ascii_letters + string.digits, 7)), ) item = { "question": question, "documents": all_documents, "id": u_id, } yield u_id, item ``` It works well on single GPU, but got errors as follows when used DDP: ```python Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(OpType=ALLGATHER_COALESCED) ``` Here are my train script on a two A100 mechine: ```bash export TORCH_DISTRIBUTED_DEBUG=DETAIL export TORCH_SHOW_CPP_STACKTRACES=1 export NCCL_DEBUG=INFO export NCCL_DEBUG_SUBSYS=INIT,COLL,ENV nohup torchrun --nproc_per_node 2 train.py experiments/de-big.json >logs/de-big.log 2>&1& ``` I am not sure if this error below related to my dataset code when use DDP. And I notice the PR(#5369 ), but I don't know when and where should I used the function(`split_dataset_by_node`) . @lhoestq hope you could help me?
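A minimal sketch of where `split_dataset_by_node` usually goes, per the reply above it is only needed for iterable/streaming datasets; the data file and environment-variable wiring here are placeholders:

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

rank = int(os.environ["RANK"])              # set by torchrun
world_size = int(os.environ["WORLD_SIZE"])  # set by torchrun

# Streaming/iterable case: each process keeps only its shard of the stream.
ds = load_dataset("json", data_files="train.jsonl", split="train", streaming=True)
ds = split_dataset_by_node(ds, rank=rank, world_size=world_size)

# Map-style (non-streaming) datasets don't need this: the DistributedSampler
# used in a typical DDP DataLoader setup already assigns a distinct subset of
# indices to each process.
```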
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5722/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5721/comments
https://api.github.com/repos/huggingface/datasets/issues/5721/events
https://github.com/huggingface/datasets/issues/5721
1,659,680,682
I_kwDODunzps5i7Leq
5,721
Calling datasets.load_dataset("text" ...) results in a wrong split.
{ "login": "cyrilzakka", "id": 1841186, "node_id": "MDQ6VXNlcjE4NDExODY=", "avatar_url": "https://avatars.githubusercontent.com/u/1841186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cyrilzakka", "html_url": "https://github.com/cyrilzakka", "followers_url": "https://api.github.com/users/cyrilzakka/followers", "following_url": "https://api.github.com/users/cyrilzakka/following{/other_user}", "gists_url": "https://api.github.com/users/cyrilzakka/gists{/gist_id}", "starred_url": "https://api.github.com/users/cyrilzakka/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cyrilzakka/subscriptions", "organizations_url": "https://api.github.com/users/cyrilzakka/orgs", "repos_url": "https://api.github.com/users/cyrilzakka/repos", "events_url": "https://api.github.com/users/cyrilzakka/events{/privacy}", "received_events_url": "https://api.github.com/users/cyrilzakka/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-08T23:55:12"
"2023-04-08T23:55:12"
null
NONE
null
### Describe the bug

When creating a text dataset, the training split should have the bulk of the examples by default. Currently, the test split does.

### Steps to reproduce the bug

I have a folder with 18K text files in it. Each text file essentially consists of a document or article scraped from the web. Calling the following code:

```
folder_path = "/home/cyril/Downloads/llama_dataset"
data = datasets.load_dataset("text", data_dir=folder_path)
data.save_to_disk("/home/cyril/Downloads/data.hf")
data = datasets.load_from_disk("/home/cyril/Downloads/data.hf")
print(data)
```

results in the following splits:

```
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 2114
    })
    test: Dataset({
        features: ['text'],
        num_rows: 200882
    })
    validation: Dataset({
        features: ['text'],
        num_rows: 152
    })
})
```

It seems to me like the train/test/validation splits are assigned in the wrong order, since the test split is far larger than the train split.

### Expected behavior

The train split should have the bulk of the training examples.

### Environment info

datasets 2.11.0, python 3.10.6
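One way to sidestep the automatic split detection is to map all files to the train split explicitly (a sketch assuming the splits are being inferred from file names, which the report does not confirm; the glob pattern is illustrative):

```python
import datasets

folder_path = "/home/cyril/Downloads/llama_dataset"

# Passing explicit data_files disables automatic train/test/validation
# detection: every matching file goes to the split it is mapped to here.
data = datasets.load_dataset("text", data_files={"train": f"{folder_path}/*.txt"})
print(data)  # DatasetDict with a single "train" split
```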
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5721/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5720/comments
https://api.github.com/repos/huggingface/datasets/issues/5720/events
https://github.com/huggingface/datasets/issues/5720
1,659,610,705
I_kwDODunzps5i66ZR
5,720
Streaming IterableDatasets do not work with torch DataLoaders
{ "login": "jlehrer1", "id": 29244648, "node_id": "MDQ6VXNlcjI5MjQ0NjQ4", "avatar_url": "https://avatars.githubusercontent.com/u/29244648?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jlehrer1", "html_url": "https://github.com/jlehrer1", "followers_url": "https://api.github.com/users/jlehrer1/followers", "following_url": "https://api.github.com/users/jlehrer1/following{/other_user}", "gists_url": "https://api.github.com/users/jlehrer1/gists{/gist_id}", "starred_url": "https://api.github.com/users/jlehrer1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jlehrer1/subscriptions", "organizations_url": "https://api.github.com/users/jlehrer1/orgs", "repos_url": "https://api.github.com/users/jlehrer1/repos", "events_url": "https://api.github.com/users/jlehrer1/events{/privacy}", "received_events_url": "https://api.github.com/users/jlehrer1/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Edit: This behavior is true even without `.take/.set`" ]
"2023-04-08T18:45:48"
"2023-04-09T16:38:48"
null
NONE
null
### Describe the bug When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader: ``` File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__ self._iterator = self._get_iterator() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 927, in __init__ w.start() File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object '_generate_examples_from_tables_wrapper.<locals>.wrapper' ``` To reproduce, run the code ``` from datasets import load_dataset data = load_dataset(args.dataset_name, split="train", streaming=True) train_len = 5000 val_len = 100 train, val = data.take(train_len), data.skip(train_len).take(val_len) traindata = IterableClipDataset(data, context_length=args.max_len, tokenizer=tokenizer, image_key="url", text_key="text") traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=args.num_workers, persistent_workers=True) ``` Where the class IterableClipDataset is a simple wrapper to cast the dataset to a torch iterabledataset, defined via ``` from torch.utils.data import Dataset, IterableDataset from torchvision.transforms import Compose, Resize, ToTensor from transformers import AutoTokenizer import requests from PIL import Image class IterableClipDataset(IterableDataset): def __init__(self, dataset, context_length: int, image_transform=None, tokenizer=None, image_key="image", text_key="text"): self.dataset = dataset self.context_length = context_length self.image_transform = Compose([Resize((224, 224)), ToTensor()]) if image_transform is None else image_transform self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") if tokenizer is None else tokenizer self.image_key = image_key self.text_key = text_key def read_image(self, url: str): try: # Try to read the image image = Image.open(requests.get(url, stream=True).raw) except: image = Image.new("RGB", (224, 224), (0, 0, 0)) return image def process_sample(self, image, text): if isinstance(image, str): image = self.read_image(image) if self.image_transform is not None: image = self.image_transform(image) text = self.tokenizer.encode( text, add_special_tokens=True, max_length=self.context_length, truncation=True, padding="max_length" ) text = 
torch.tensor(text, dtype=torch.long) return image, text def __iter__(self): for sample in self.dataset: image, text = sample[self.image_key], sample[self.text_key] yield self.process_sample(image, text) ``` ### Steps to reproduce the bug Steps to reproduce 1. Install `datasets`, `torch`, and `PIL` (if you want to reproduce exactly) 2. Run the code above ### Expected behavior Batched data is produced from the dataloader ### Environment info ``` datasets == 2.9.0 python == 3.9.12 torch == 1.11.0 ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5720/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5719/comments
https://api.github.com/repos/huggingface/datasets/issues/5719/events
https://github.com/huggingface/datasets/issues/5719
1,659,203,222
I_kwDODunzps5i5W6W
5,719
Array2D feature creates a list of lists instead of a numpy array
{ "login": "off99555", "id": 15215732, "node_id": "MDQ6VXNlcjE1MjE1NzMy", "avatar_url": "https://avatars.githubusercontent.com/u/15215732?v=4", "gravatar_id": "", "url": "https://api.github.com/users/off99555", "html_url": "https://github.com/off99555", "followers_url": "https://api.github.com/users/off99555/followers", "following_url": "https://api.github.com/users/off99555/following{/other_user}", "gists_url": "https://api.github.com/users/off99555/gists{/gist_id}", "starred_url": "https://api.github.com/users/off99555/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/off99555/subscriptions", "organizations_url": "https://api.github.com/users/off99555/orgs", "repos_url": "https://api.github.com/users/off99555/repos", "events_url": "https://api.github.com/users/off99555/events{/privacy}", "received_events_url": "https://api.github.com/users/off99555/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-07T21:04:08"
"2023-04-07T21:07:34"
null
NONE
null
### Describe the bug

I'm not sure if this is expected behavior or not. When I create a 2D array using `Array2D`, the data has list type instead of being a numpy array. I think this should not be the expected behavior, especially when I feed a numpy array as input to the data creation function. Why is it converting my array into a list? Also, if I change the first dimension of the `Array2D` shape to None, it returns an array correctly.

### Steps to reproduce the bug

Run this code:

```py
from datasets import Dataset, Features, Array2D
import numpy as np

# you have to change the first dimension of the shape to None to make it return an array
features = Features(dict(seq=Array2D((2,2), 'float32')))
ds = Dataset.from_dict(dict(seq=[np.random.rand(2,2)]), features=features)
a = ds[0]['seq']
print(a)
print(type(a))
```

### Expected behavior

The following will be printed in stdout:

```
[[0.8127174377441406, 0.3760348856449127], [0.7510159611701965, 0.4322739541530609]]
<class 'list'>
```

### Environment info

- `datasets` version: 2.11.0
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.4
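For reference, a sketch of how to get numpy arrays back from `__getitem__` with the existing formatting API (this does not settle whether the default list output is intended):

```python
import numpy as np
from datasets import Dataset, Features, Array2D

features = Features(dict(seq=Array2D((2, 2), "float32")))
ds = Dataset.from_dict(dict(seq=[np.random.rand(2, 2)]), features=features)

# Request numpy formatting for returned examples/columns.
ds = ds.with_format("numpy")      # or ds.set_format("np") to modify in place
a = ds[0]["seq"]
print(type(a), a.shape, a.dtype)  # <class 'numpy.ndarray'> (2, 2) float32
```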
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5719/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5718/comments
https://api.github.com/repos/huggingface/datasets/issues/5718/events
https://github.com/huggingface/datasets/pull/5718
1,658,958,406
PR_kwDODunzps5N2IZC
5,718
Reorder default data splits to have validation before test
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5718). All of your documentation changes will be reflected on that endpoint.", "After this CI error: https://github.com/huggingface/datasets/actions/runs/4639528358/jobs/8210492953?pr=5718\r\n```\r\nFAILED tests/test_data_files.py::test_get_data_files_patterns[data_file_per_split4] - AssertionError: assert ['random', 'train'] == ['train', 'random']\r\n At index 0 diff: 'random' != 'train'\r\n Full diff:\r\n - ['train', 'random']\r\n + ['random', 'train']\r\n```\r\nI have checked locally and found out that the data split order is nondeterministic. I am addressing this in a separate issue.\r\n\r\nSee:\r\n- #5728 \r\n- #5729" ]
"2023-04-07T16:01:26"
"2023-04-07T16:05:29"
null
MEMBER
null
This PR reorders data splits, so that by default validation appears before test. The default order becomes: train, validation, test.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5718/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5718/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5718", "html_url": "https://github.com/huggingface/datasets/pull/5718", "diff_url": "https://github.com/huggingface/datasets/pull/5718.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5718.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5717/comments
https://api.github.com/repos/huggingface/datasets/issues/5717/events
https://github.com/huggingface/datasets/issues/5717
1,658,729,866
I_kwDODunzps5i3jWK
5,717
Error when saving a dataset of images to disk
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false }
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "repos_url": "https://api.github.com/users/mariosasko/repos", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "type": "User", "site_admin": false } ]
null
[ "Looks like as long as the number of shards makes a batch lower than 1000 images it works. In my training set I have 40K images. If I use `num_shards=40` (batch of 1000 images) I get the error, but if I update it to `num_shards=50` (batch of 800 images) it works.\r\n\r\nI will be happy to share my dataset privately if it can help to better debug." ]
"2023-04-07T11:59:17"
"2023-04-07T16:27:26"
null
CONTRIBUTOR
null
### Describe the bug Hello! I have an issue when I try to save on disk my dataset of images. The error I get is: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1442, in save_to_disk for job_id, done, content in Dataset._save_to_disk_single(**kwargs): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 1473, in _save_to_disk_single writer.write_table(pa_table) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/arrow_writer.py", line 570, in write_table pa_table = embed_table_storage(pa_table) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2268, in embed_table_storage arrays = [ File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2269, in <listcomp> embed_array_storage(table[name], feature) if require_storage_embed(feature) else table[name] File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 1817, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/table.py", line 2142, in embed_array_storage return feature.embed_storage(array) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/features/image.py", line 269, in embed_storage storage = pa.StructArray.from_arrays([bytes_array, path_array], ["bytes", "path"], mask=bytes_array.is_null()) File "pyarrow/array.pxi", line 2766, in pyarrow.lib.StructArray.from_arrays File "pyarrow/array.pxi", line 2961, in pyarrow.lib.c_mask_inverted_from_obj TypeError: Mask must be a pyarrow.Array of type boolean ``` My dataset is around 50K images, is this error might be due to a bad image? Thanks for the help. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="/path/to/dataset") dataset["train"].save_to_disk("./myds", num_shards=40) ``` ### Expected behavior Having my dataset properly saved to disk. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5717/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5716/comments
https://api.github.com/repos/huggingface/datasets/issues/5716/events
https://github.com/huggingface/datasets/issues/5716
1,658,613,092
I_kwDODunzps5i3G1k
5,716
Handle empty audio
{ "login": "v-yunbin", "id": 38179632, "node_id": "MDQ6VXNlcjM4MTc5NjMy", "avatar_url": "https://avatars.githubusercontent.com/u/38179632?v=4", "gravatar_id": "", "url": "https://api.github.com/users/v-yunbin", "html_url": "https://github.com/v-yunbin", "followers_url": "https://api.github.com/users/v-yunbin/followers", "following_url": "https://api.github.com/users/v-yunbin/following{/other_user}", "gists_url": "https://api.github.com/users/v-yunbin/gists{/gist_id}", "starred_url": "https://api.github.com/users/v-yunbin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/v-yunbin/subscriptions", "organizations_url": "https://api.github.com/users/v-yunbin/orgs", "repos_url": "https://api.github.com/users/v-yunbin/repos", "events_url": "https://api.github.com/users/v-yunbin/events{/privacy}", "received_events_url": "https://api.github.com/users/v-yunbin/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[]
"2023-04-07T09:51:40"
"2023-04-07T09:51:40"
null
NONE
null
Some audio paths exist but point to empty files, and an error is reported when reading them. How can I use the filter function to skip these empty audio paths? When an audio file is empty, resampling breaks at: `array, sampling_rate = sf.read(f)` followed by `array = librosa.resample(array, orig_sr=sampling_rate, target_sr=self.sampling_rate)`
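A sketch of one way to filter such files out before decoding, assuming the dataset has an `audio` column and the bad files are zero-byte on disk (both assumptions, since the report doesn't show the loading code):

```python
import os

from datasets import Audio, load_dataset

ds = load_dataset("audiofolder", data_dir="path/to/audio")  # hypothetical loading call

# Temporarily disable decoding so filtering only touches paths, not waveforms.
ds = ds.cast_column("audio", Audio(decode=False))
ds = ds.filter(
    lambda ex: ex["audio"]["path"] is not None and os.path.getsize(ex["audio"]["path"]) > 0
)

# Re-enable decoding (with resampling) only for the rows that survived the filter.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```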
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5716/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5715/comments
https://api.github.com/repos/huggingface/datasets/issues/5715/events
https://github.com/huggingface/datasets/issues/5715
1,657,479,788
I_kwDODunzps5iyyJs
5,715
Return Numpy Array (fixed length) Mode in __getitem__, Instead of List
{ "login": "jungbaepark", "id": 34066771, "node_id": "MDQ6VXNlcjM0MDY2Nzcx", "avatar_url": "https://avatars.githubusercontent.com/u/34066771?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jungbaepark", "html_url": "https://github.com/jungbaepark", "followers_url": "https://api.github.com/users/jungbaepark/followers", "following_url": "https://api.github.com/users/jungbaepark/following{/other_user}", "gists_url": "https://api.github.com/users/jungbaepark/gists{/gist_id}", "starred_url": "https://api.github.com/users/jungbaepark/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jungbaepark/subscriptions", "organizations_url": "https://api.github.com/users/jungbaepark/orgs", "repos_url": "https://api.github.com/users/jungbaepark/repos", "events_url": "https://api.github.com/users/jungbaepark/events{/privacy}", "received_events_url": "https://api.github.com/users/jungbaepark/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n " ]
"2023-04-06T13:57:48"
"2023-04-07T14:38:06"
null
NONE
null
### Feature request

There is an old, well-known but easily forgotten problem with multiprocessing in the PyTorch DataLoader: excessive RAM or shared-memory usage when `num_workers > 1` and the return type of the dataset or dataloader is a Python list or dict.

https://github.com/pytorch/pytorch/issues/13246

With Hugging Face datasets, unfortunately, the default return type is a list, so this problem shows up often unless we configure something to avoid it. However, the issue can be relieved when the returned output has a fixed length. Therefore, I request a mode that returns outputs with a fixed length (e.g. a numpy array) rather than a list. The design could look like:

```python
load_dataset(..., with_return_as_fixed_tensor=True)
```

### Motivation

The general solution for this issue is already in the comments of the PyTorch issue: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662. Numpy and Pandas do not seem to have the problem, even though both support string types. (I'm not sure whether the sequence type of Hugging Face datasets can solve this problem as well.)

### Your contribution

I'll read it! Thanks.
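As noted in the reply above, the existing formatting API already covers part of this request; a sketch of wiring it into a DataLoader (the column name `seq`, the data, and the batch size are illustrative):

```python
import numpy as np
from datasets import Dataset
from torch.utils.data import DataLoader

ds = Dataset.from_dict({"seq": np.random.rand(1000, 16).tolist()})

# Returning numpy arrays (fixed-size buffers) instead of Python lists sidesteps
# the per-object refcounting behavior discussed in pytorch/pytorch#13246.
ds = ds.with_format("np")

loader = DataLoader(ds, batch_size=32, num_workers=2)
for batch in loader:
    pass  # batch["seq"] arrives as a tensor of shape (32, 16)
```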
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5715/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5714/comments
https://api.github.com/repos/huggingface/datasets/issues/5714/events
https://github.com/huggingface/datasets/pull/5714
1,657,388,033
PR_kwDODunzps5NxIOc
5,714
Fix xnumpy_load for .npz files
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006498 / 0.011353 (-0.004855) | 0.004406 / 0.011008 (-0.006602) | 0.097136 / 0.038508 (0.058628) | 0.027711 / 0.023109 (0.004601) | 0.303092 / 0.275898 (0.027194) | 0.336804 / 0.323480 (0.013324) | 0.004838 / 0.007986 (-0.003148) | 0.004533 / 0.004328 (0.000204) | 0.075062 / 0.004250 (0.070812) | 0.035105 / 0.037052 (-0.001947) | 0.310245 / 0.258489 (0.051756) | 0.347086 / 0.293841 (0.053245) | 0.030867 / 0.128546 (-0.097679) | 0.011436 / 0.075646 (-0.064211) | 0.320728 / 0.419271 (-0.098544) | 0.042303 / 0.043533 (-0.001230) | 0.308177 / 0.255139 (0.053038) | 0.333673 / 0.283200 (0.050473) | 0.084736 / 0.141683 (-0.056947) | 1.477391 / 1.452155 (0.025237) | 1.530399 / 1.492716 (0.037682) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212698 / 0.018006 (0.194692) | 0.409098 / 0.000490 (0.408608) | 0.004202 / 0.000200 (0.004002) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.022725 / 0.037411 (-0.014686) | 0.095866 / 0.014526 (0.081340) | 0.104153 / 0.176557 (-0.072404) | 0.162964 / 0.737135 (-0.574171) | 0.106505 / 0.296338 (-0.189834) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431336 / 0.215209 (0.216127) | 4.283290 / 2.077655 (2.205635) | 
1.982418 / 1.504120 (0.478298) | 1.762104 / 1.541195 (0.220909) | 1.807528 / 1.468490 (0.339038) | 0.695507 / 4.584777 (-3.889270) | 3.376299 / 3.745712 (-0.369413) | 1.856642 / 5.269862 (-3.413219) | 1.154258 / 4.565676 (-3.411419) | 0.082749 / 0.424275 (-0.341526) | 0.012289 / 0.007607 (0.004682) | 0.525842 / 0.226044 (0.299798) | 5.285764 / 2.268929 (3.016835) | 2.389926 / 55.444624 (-53.054698) | 2.021830 / 6.876477 (-4.854646) | 2.107460 / 2.142072 (-0.034612) | 0.808118 / 4.805227 (-3.997109) | 0.150791 / 6.500664 (-6.349873) | 0.065825 / 0.075469 (-0.009644) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206939 / 1.841788 (-0.634849) | 13.795902 / 8.074308 (5.721594) | 14.107950 / 10.191392 (3.916558) | 0.144300 / 0.680424 (-0.536124) | 0.016478 / 0.534201 (-0.517723) | 0.379395 / 0.579283 (-0.199888) | 0.388437 / 0.434364 (-0.045927) | 0.451443 / 0.540337 (-0.088894) | 0.523142 / 1.386936 (-0.863794) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006503 / 0.011353 (-0.004850) | 0.004578 / 0.011008 (-0.006430) | 0.076278 / 0.038508 (0.037770) | 0.028052 / 0.023109 (0.004943) | 0.337873 / 0.275898 (0.061975) | 0.371368 / 0.323480 (0.047888) | 0.005086 / 0.007986 (-0.002899) | 0.003354 / 0.004328 (-0.000975) | 0.076876 / 0.004250 (0.072625) | 0.039146 / 0.037052 (0.002093) | 0.340299 / 0.258489 (0.081810) | 0.381209 / 0.293841 (0.087368) | 0.031771 / 0.128546 (-0.096775) | 0.011670 / 0.075646 (-0.063976) | 0.085156 / 0.419271 (-0.334116) | 0.041990 / 0.043533 (-0.001543) | 0.338644 / 0.255139 (0.083505) | 0.362461 / 0.283200 (0.079262) | 0.089772 / 0.141683 (-0.051911) | 1.480341 / 1.452155 (0.028187) | 1.562815 / 1.492716 (0.070099) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.205700 / 0.018006 (0.187694) | 0.402206 / 0.000490 (0.401716) | 0.001212 / 0.000200 (0.001012) | 0.000070 / 0.000054 (0.000016) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025172 / 0.037411 (-0.012240) | 0.100959 / 0.014526 (0.086433) | 0.108464 / 0.176557 (-0.068093) | 0.161321 / 0.737135 (-0.575814) | 0.114245 / 0.296338 (-0.182093) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.437425 / 0.215209 (0.222216) | 4.362212 / 2.077655 (2.284557) | 2.068815 / 1.504120 (0.564695) | 1.864089 / 1.541195 (0.322894) | 1.909038 / 1.468490 (0.440548) | 0.696097 / 4.584777 (-3.888680) | 3.358628 / 3.745712 (-0.387084) | 2.999085 / 5.269862 (-2.270777) | 1.533917 / 4.565676 (-3.031760) | 0.083010 / 0.424275 (-0.341266) | 0.012372 / 0.007607 (0.004765) | 0.539926 / 0.226044 (0.313882) | 5.438326 / 2.268929 (3.169397) | 2.498581 / 55.444624 (-52.946043) | 2.153359 / 6.876477 (-4.723117) | 2.177891 / 2.142072 (0.035819) | 0.803169 / 4.805227 (-4.002059) | 0.151079 / 6.500664 (-6.349585) | 0.065981 / 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.336682 / 1.841788 (-0.505106) | 14.133055 / 8.074308 (6.058747) | 14.033972 / 10.191392 (3.842580) | 0.152109 / 0.680424 (-0.528315) | 0.016475 / 0.534201 (-0.517726) | 0.387808 / 0.579283 (-0.191475) | 0.378347 / 0.434364 (-0.056017) | 0.484732 / 0.540337 (-0.055606) | 0.569907 / 1.386936 (-0.817029) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1c4ec00511868bd881e84a6f7e0333648d833b8e \"CML watermark\")\n" ]
"2023-04-06T13:01:45"
"2023-04-07T09:23:54"
"2023-04-07T09:16:57"
MEMBER
null
PR: - #5626 implemented support for streaming `.npy` files by using `numpy.load`. However, it introduced a bug when used with `.npz` files, within a context manager: ``` ValueError: seek of closed file ``` or in streaming mode: ``` ValueError: I/O operation on closed file. ``` This PR fixes the bug and tests for both `.npy` and `.npz` files. Fix #5711.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5714/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5714", "html_url": "https://github.com/huggingface/datasets/pull/5714", "diff_url": "https://github.com/huggingface/datasets/pull/5714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5714.patch", "merged_at": "2023-04-07T09:16:57" }
true
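For readers skimming this PR record, a hedged sketch of the user-side pattern (taken from the linked issue #5711) that hits the bug fixed here; the file name and the surrounding dataset script are hypothetical:

```python
import numpy as np
import pandas as pd

# Minimal sketch of the failing pattern; the path is a placeholder and the
# surrounding dataset script is omitted. Under datasets 2.11.0, np.load is
# patched for streaming, and an .npz archive is a zip whose .npy members are
# only read when indexed, so indexing them here could touch a handle the
# patched loader had already closed ("ValueError: seek of closed file").
embedding_filename = "embeddings.npz"  # hypothetical path

with np.load(embedding_filename) as fp:
    x_df = pd.DataFrame({"feature": fp["x"].tolist()})
```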
https://api.github.com/repos/huggingface/datasets/issues/5713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5713/comments
https://api.github.com/repos/huggingface/datasets/issues/5713/events
https://github.com/huggingface/datasets/issues/5713
1,657,141,251
I_kwDODunzps5ixfgD
5,713
ArrowNotImplementedError when loading dataset from the hub
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. The estimation is currently done using the first samples of the dataset (which can surely be improved). We should probably open an issue to fix this once and for all.\r\n\r\nAnyway for your specific dataset I'd suggest you to pass `num_shards` instead of `max_shard_size` for now, and make sure to have enough shards to end up with shards smaller than 2GB", "Hi Quentin! Thanks a lot! Using `num_shards` instead of `max_shard_size` works as expected.\r\n\r\nIndeed the way you describe how the size is computed cannot really work with the dataset I'm building as all the image doesn't have the same resolution and then size. Opening an issue on this might be a good idea." ]
"2023-04-06T10:27:22"
"2023-04-06T13:06:22"
"2023-04-06T13:06:21"
CONTRIBUTOR
null
### Describe the bug Hello, I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error: ``` Traceback (most recent call last): File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single for _, table in generator: File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1323, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Steps to reproduce the bug Create the dataset and push it to the hub: ```python from datasets import load_dataset dataset = load_dataset("imagefolder", data_dir="/path/to/dataset") dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB") ``` Then use it: ```python from datasets import load_dataset dataset = load_dataset("org/dataset-name") ``` ### Expected behavior To properly download and use the pushed dataset. Something else to note is that I specified to have shards of 1GB max, but at the end, for the train set, it is an almost 7GB single file that is pushed. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5713/timeline
null
completed
null
null
false
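A minimal sketch of the workaround agreed on in the comments above; the repository name and shard count are placeholders, chosen only so each Parquet shard stays well under the 2 GB limit mentioned in #5695:

```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")

# Passing num_shards instead of max_shard_size fixes the shard count up front,
# so the size-estimation issue described above cannot produce a single ~7 GB
# Parquet file that later fails with ArrowNotImplementedError on reload.
dataset.push_to_hub("org/dataset-name", private=True, num_shards={"train": 8})
```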
https://api.github.com/repos/huggingface/datasets/issues/5712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5712/comments
https://api.github.com/repos/huggingface/datasets/issues/5712/events
https://github.com/huggingface/datasets/issues/5712
1,655,972,106
I_kwDODunzps5itCEK
5,712
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
{ "login": "rcasero", "id": 1219084, "node_id": "MDQ6VXNlcjEyMTkwODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcasero", "html_url": "https://github.com/rcasero", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "organizations_url": "https://api.github.com/users/rcasero/orgs", "repos_url": "https://api.github.com/users/rcasero/repos", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "received_events_url": "https://api.github.com/users/rcasero/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Closing since this is a duplicate of #5711", "> Closing since this is a duplicate of #5711\r\n\r\nSorry @mariosasko , my internet went down went submitting the issue, and somehow it ended up creating a duplicate" ]
"2023-04-05T16:47:10"
"2023-04-06T08:32:37"
"2023-04-05T17:17:44"
NONE
null
### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with error ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5712/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5712/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5711/comments
https://api.github.com/repos/huggingface/datasets/issues/5711/events
https://github.com/huggingface/datasets/issues/5711
1,655,971,647
I_kwDODunzps5itB8_
5,711
load_dataset in v2.11.0 raises "ValueError: seek of closed file" in np.load()
{ "login": "rcasero", "id": 1219084, "node_id": "MDQ6VXNlcjEyMTkwODQ=", "avatar_url": "https://avatars.githubusercontent.com/u/1219084?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rcasero", "html_url": "https://github.com/rcasero", "followers_url": "https://api.github.com/users/rcasero/followers", "following_url": "https://api.github.com/users/rcasero/following{/other_user}", "gists_url": "https://api.github.com/users/rcasero/gists{/gist_id}", "starred_url": "https://api.github.com/users/rcasero/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rcasero/subscriptions", "organizations_url": "https://api.github.com/users/rcasero/orgs", "repos_url": "https://api.github.com/users/rcasero/repos", "events_url": "https://api.github.com/users/rcasero/events{/privacy}", "received_events_url": "https://api.github.com/users/rcasero/received_events", "type": "User", "site_admin": false }
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "It seems like https://github.com/huggingface/datasets/pull/5626 has introduced this error. \r\n\r\ncc @albertvillanova \r\n\r\nI think replacing:\r\nhttps://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/src/datasets/download/streaming_download_manager.py#L777-L778\r\nwith:\r\n```python\r\nreturn np.load(xopen(filepath_or_buffer, \"rb\", use_auth_token=use_auth_token), *args, **kwargs)\r\n```\r\nshould fix the issue.\r\n\r\n(Maybe this is also worth doing a patch release afterward)", "Thanks for reporting, @rcasero.\r\n\r\nI can have a look..." ]
"2023-04-05T16:46:49"
"2023-04-07T09:16:59"
"2023-04-07T09:16:59"
NONE
null
### Describe the bug Hi, I have some `dataset_load()` code of a custom offline dataset that works with datasets v2.10.1. ```python ds = datasets.load_dataset(path=dataset_dir, name=configuration, data_dir=dataset_dir, cache_dir=cache_dir, aux_dir=aux_dir, # download_mode=datasets.DownloadMode.FORCE_REDOWNLOAD, num_proc=18) ``` When upgrading datasets to 2.11.0, it fails with error ``` Traceback (most recent call last): File "<string>", line 2, in <module> File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset builder_instance.download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare self._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 1651, in _download_and_prepare super()._download_and_prepare( File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/datasets/builder.py", line 964, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 682, in _split_generators self.some_function() File "/home/ramon.casero/.cache/huggingface/modules/datasets_modules/datasets/71f67f69e6e00e139903a121f96b71f39b65a6b6aaeb0862e6a5da3a3f565b4c/mydataset.py", line 1314, in some_function() x_df = pd.DataFrame({'cell_type_descriptor': fp['x'].tolist()}) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/site-packages/numpy/lib/npyio.py", line 248, in __getitem__ bytes = self.zip.open(key) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 1530, in open fheader = zef_file.read(sizeFileHeader) File "/home/ramon.casero/opt/miniconda3/envs/myenv/lib/python3.10/zipfile.py", line 744, in read self._file.seek(self._pos) ValueError: seek of closed file ``` ### Steps to reproduce the bug Sorry, I cannot share the data or code because they are not mine to share, but the point of failure is a call in `some_function()` ```python with np.load(embedding_filename) as fp: x_df = pd.DataFrame({'feature': fp['x'].tolist()}) ``` I'll try to generate a short snippet that reproduces the error. ### Expected behavior I would expect that `load_dataset` works on the custom datasets generation script for v2.11.0 the same way it works for 2.10.1, without making `np.load()` give a `ValueError: seek of closed file` error. ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-4.18.0-483.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.8 - Huggingface_hub version: 0.12.0 - PyArrow version: 11.0.0 - Pandas version: 1.5.2 - numpy: 1.24.2 - This is an offline dataset that uses `datasets.config.HF_DATASETS_OFFLINE = True` in the generation script.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5711/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5710/comments
https://api.github.com/repos/huggingface/datasets/issues/5710/events
https://github.com/huggingface/datasets/issues/5710
1,655,703,534
I_kwDODunzps5isAfu
5,710
OSError: Memory mapping file failed: Cannot allocate memory
{ "login": "Saibo-creator", "id": 53392976, "node_id": "MDQ6VXNlcjUzMzkyOTc2", "avatar_url": "https://avatars.githubusercontent.com/u/53392976?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Saibo-creator", "html_url": "https://github.com/Saibo-creator", "followers_url": "https://api.github.com/users/Saibo-creator/followers", "following_url": "https://api.github.com/users/Saibo-creator/following{/other_user}", "gists_url": "https://api.github.com/users/Saibo-creator/gists{/gist_id}", "starred_url": "https://api.github.com/users/Saibo-creator/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Saibo-creator/subscriptions", "organizations_url": "https://api.github.com/users/Saibo-creator/orgs", "repos_url": "https://api.github.com/users/Saibo-creator/repos", "events_url": "https://api.github.com/users/Saibo-creator/events{/privacy}", "received_events_url": "https://api.github.com/users/Saibo-creator/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi! This error means that PyArrow's internal [`mmap`](https://man7.org/linux/man-pages/man2/mmap.2.html) call failed to allocate memory, which can be tricky to debug. Since this error is more related to PyArrow than us, I think it's best to report this issue in their [repo](https://github.com/apache/arrow) (they are more experienced on this matter). Also, googling \"mmap cannot allocate memory\" returns some approaches to solving this problem." ]
"2023-04-05T14:11:26"
"2023-04-05T17:09:28"
null
NONE
null
### Describe the bug Hello, I have a series of datasets each of 5 GB, 600 datasets in total. So together this makes 3TB. When I trying to load all the 600 datasets into memory, I get the above error message. Is this normal because I'm hitting the max size of memory mapping of the OS? Thank you ```terminal 0_21/cache-e9c42499f65b1881.arrow load_hf_datasets_from_disk: 82%|████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 494/600 [07:26<01:35, 1.11it/s] Traceback (most recent call last): File "example_load_genkalm_dataset.py", line 35, in <module> multi_ds.post_process(max_node_num=args.max_node_num,max_seq_length=args.max_seq_length,delay=args.delay) File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 142, in post_process genkalm_dataset = GenKaLM_Dataset.from_hf_dataset(path_or_name=ds_path, max_seq_length=self.max_seq_length, File "/home/geng/GenKaLM/src/dataloader/dataset.py", line 47, in from_hf_dataset hf_ds = load_from_disk(path_or_name) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/load.py", line 1848, in load_from_disk return Dataset.load_from_disk(dataset_path, keep_in_memory=keep_in_memory, storage_options=storage_options) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1549, in load_from_disk arrow_table = concat_tables( File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1805, in concat_tables tables = list(tables) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1550, in <genexpr> table_cls.from_file(Path(dataset_path, data_file["filename"]).as_posix()) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 1065, in from_file table = _memory_mapped_arrow_table_from_file(filename) File "/home/geng/.conda/envs/genkalm/lib/python3.8/site-packages/datasets/table.py", line 50, in _memory_mapped_arrow_table_from_file memory_mapped_stream = pa.memory_map(filename) File "pyarrow/io.pxi", line 950, in pyarrow.lib.memory_map File "pyarrow/io.pxi", line 911, in pyarrow.lib.MemoryMappedFile._open File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 115, in pyarrow.lib.check_status OSError: Memory mapping file failed: Cannot allocate memory ``` ### Steps to reproduce the bug Sorry I can not provide a reproducible code as the data is stored on my server and it's too large to share. ### Expected behavior I expect the 3TB of data can be fully mapped to memory ### Environment info - `datasets` version: 2.9.0 - Platform: Linux-4.15.0-204-generic-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyArrow version: 11.0.0 - Pandas version: 1.0.1
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5710/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5709/comments
https://api.github.com/repos/huggingface/datasets/issues/5709/events
https://github.com/huggingface/datasets/issues/5709
1,655,423,503
I_kwDODunzps5iq8IP
5,709
Manually dataset info made not taken into account
{ "login": "jplu", "id": 959590, "node_id": "MDQ6VXNlcjk1OTU5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jplu", "html_url": "https://github.com/jplu", "followers_url": "https://api.github.com/users/jplu/followers", "following_url": "https://api.github.com/users/jplu/following{/other_user}", "gists_url": "https://api.github.com/users/jplu/gists{/gist_id}", "starred_url": "https://api.github.com/users/jplu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jplu/subscriptions", "organizations_url": "https://api.github.com/users/jplu/orgs", "repos_url": "https://api.github.com/users/jplu/repos", "events_url": "https://api.github.com/users/jplu/events{/privacy}", "received_events_url": "https://api.github.com/users/jplu/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "hi @jplu ! Did I understand you correctly that you create the dataset, push it to the Hub with `.push_to_hub` and you see a `dataset_infos.json` file there, then you edit this file, load the dataset with `load_dataset` and you don't see any changes in `.info` attribute of a dataset object? \r\n\r\nThis is actually weird that when you push your dataset to the Hub, a `dataset_infos.json` file is created, because this file is deprecated and it should create `README.md` with the `dataset_info` field instead. Some keys are also deprecated, like \"supervised_keys\" and \"task_templates\".\r\n\r\nCan you please provide a toy reproducible example of how you create and push the dataset? And also why do you want to change this file, especially the number of bytes and examples?", "Hi @polinaeterna Yes I have created the dataset with `Dataset.from_dict` applied some updates afterward and when I pushed to the hub I had a `dataset_infos.json` file and there was a `README.md` file as well.\r\n\r\nI didn't know that the JSON file was deprecated. So I have built my dataset with `ImageBuilder` instead and now it works like a charm without having to touch anything.\r\n\r\nI haven't succeed to reproduce the creation of the JSON file with a toy example, hence, I certainly did some mistakes when I have manipulated my dataset manually at first. My bad." ]
"2023-04-05T11:15:17"
"2023-04-06T08:52:20"
"2023-04-06T08:52:19"
CONTRIBUTOR
null
### Describe the bug Hello, I'm manually building an image dataset with the `from_dict` approach. I also build the features with the `cast_features` methods. Once the dataset is created I push it on the hub, and a default `dataset_infos.json` file seems to have been automatically added to the repo in same time. Hence I update it manually with all the missing info, but when I download the dataset the info are never updated. Former `dataset_infos.json` file: ``` {"default": { "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "_type": "Image" }, "labels": { "names": [ "Fake", "Real" ], "_type": "ClassLabel" } }, "splits": { "validation": { "name": "validation", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null }, "train": { "name": "train", "num_bytes": 901010094.0, "num_examples": 3200, "dataset_name": null } }, "download_size": 1802008414, "dataset_size": 1802020188.0, "size_in_bytes": 3604028602.0 }} ``` After I update it manually it looks like: ``` { "bstrai--deepfake-detection":{ "description":"", "citation":"", "homepage":"", "license":"", "features":{ "image":{ "decode":true, "id":null, "_type":"Image" }, "labels":{ "num_classes":2, "names":[ "Fake", "Real" ], "id":null, "_type":"ClassLabel" } }, "supervised_keys":{ "input":"image", "output":"labels" }, "task_templates":[ { "task":"image-classification", "image_column":"image", "label_column":"labels" } ], "config_name":null, "splits":{ "validation":{ "name":"validation", "num_bytes":36627822, "num_examples":123, "dataset_name":"deepfake-detection" }, "train":{ "name":"train", "num_bytes":901023694, "num_examples":3200, "dataset_name":"deepfake-detection" } }, "download_checksums":null, "download_size":937562209, "dataset_size":937651516, "size_in_bytes":1875213725 } } ``` Anything I should do to have the new infos in the `dataset_infos.json` to be taken into account? Or it is not possible yet? Thanks! ### Steps to reproduce the bug - ### Expected behavior - ### Environment info - `datasets` version: 2.11.0 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.10 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5709/timeline
null
completed
null
null
false
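A hedged sketch of the approach the issue author settled on: rebuilding through the image folder builder and letting `push_to_hub` write the (non-deprecated) `dataset_info` metadata into `README.md` itself; the paths and repository name below are placeholders:

```python
from datasets import load_dataset

# Building via the packaged image builder (rather than Dataset.from_dict plus
# manual casting) lets push_to_hub compute splits, sizes and features and
# embed them in the README.md metadata, so dataset_infos.json never needs to
# be edited by hand.
ds = load_dataset("imagefolder", data_dir="/path/to/images")
ds.push_to_hub("org/deepfake-detection", private=True)
```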
https://api.github.com/repos/huggingface/datasets/issues/5708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5708/comments
https://api.github.com/repos/huggingface/datasets/issues/5708/events
https://github.com/huggingface/datasets/issues/5708
1,655,023,642
I_kwDODunzps5ipaga
5,708
Dataset sizes are in MiB instead of MB in dataset cards
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
open
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false }
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "repos_url": "https://api.github.com/users/albertvillanova/repos", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "type": "User", "site_admin": false } ]
null
[ "Example of bulk edit: https://huggingface.co/datasets/aeslc/discussions/5", "looks great! \r\n\r\nDo you encode the fact that you've already converted a dataset? (to not convert it twice) or do you base yourself on the info contained in `dataset_info`", "I am only looping trough the dataset cards, assuming that all of them were created with MiB.\r\n\r\nI agree we should only run the bulk edit once for all canonical datasets: I'm using a for-loop over canonical datasets.", "yes, worst case, we have this in structured data:\r\n\r\n<img width=\"337\" alt=\"image\" src=\"https://user-images.githubusercontent.com/326577/230037051-06caddcb-08c8-4953-a710-f3d122917db3.png\">\r\n", "I have just included as well the conversion from MB to GB if necessary. See: \r\n- https://huggingface.co/datasets/bookcorpus/discussions/2/files\r\n- https://huggingface.co/datasets/asnq/discussions/2/files", "Nice. Is it another loop? Because in https://huggingface.co/datasets/amazon_us_reviews/discussions/2/files we have `32377.29 MB` for example", "First, I tested some batches to check the changes made. Then I incorporated the MB to GB conversion. Now I'm running the rest.", "The bulk edit parsed 751 canonical datasets and updated 166.", "Thanks a lot!\r\n\r\nThe sizes now match as expected!\r\n\r\n<img width=\"1446\" alt=\"Capture d’écran 2023-04-05 à 16 10 15\" src=\"https://user-images.githubusercontent.com/1676121/230107044-ac2a76ea-a4fe-4e81-a925-f464b85f5edd.png\">\r\n", "I made another bulk edit of ancient canonical datasets that were moved to community organization. I have parsed 11 datasets and opened a PR on 3 of them:\r\n- [ ] \"allenai/scicite\": https://huggingface.co/datasets/allenai/scicite/discussions/3\r\n- [ ] \"allenai/scifact\": https://huggingface.co/datasets/allenai/scifact/discussions/2\r\n- [ ] \"dair-ai/emotion\": https://huggingface.co/datasets/dair-ai/emotion/discussions/6" ]
"2023-04-05T06:36:03"
"2023-04-07T06:20:32"
null
MEMBER
null
As @severo reported in an internal discussion (https://github.com/huggingface/moon-landing/issues/5929): Now we show the dataset size: - from the dataset card (in the side column) - from the datasets-server (in the viewer) But, even if the size is the same, we see a mismatch because the viewer shows MB, while the info from the README generally shows MiB (even if it's written MB -> https://huggingface.co/datasets/blimp/blob/main/README.md?code=true#L1932) <img width="664" alt="Capture d’écran 2023-04-04 à 10 16 01" src="https://user-images.githubusercontent.com/1676121/229730887-0bd8fa6e-9462-46c6-bd4e-4d2c5784cabb.png"> TODO: Values to be fixed in: `Size of downloaded dataset files:`, `Size of the generated dataset:` and `Total amount of disk used:` - [x] Bulk edit on the Hub to fix this in all canonical datasets - [x] Bulk PR on the Hub to fix ancient canonical datasets that were moved to organizations
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5708/timeline
null
null
null
null
false
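For reference, a small sketch of the unit conversion behind this bulk edit, assuming the old card values were computed as mebibytes but labelled MB; the sample value is illustrative only:

```python
# 1 MiB = 1024**2 bytes and 1 MB = 1000**2 bytes, so 1 MiB = 1.048576 MB.
MIB_TO_MB = 1024**2 / 1000**2

def card_value_to_mb(value_mib: float) -> float:
    """Convert a card size that was really in MiB to the decimal MB the viewer shows."""
    return value_mib * MIB_TO_MB

print(round(card_value_to_mb(28.21), 2))  # illustrative value only -> 29.58
```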
https://api.github.com/repos/huggingface/datasets/issues/5706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5706/comments
https://api.github.com/repos/huggingface/datasets/issues/5706/events
https://github.com/huggingface/datasets/issues/5706
1,653,545,835
I_kwDODunzps5ijxtr
5,706
Support categorical data types for Parquet
{ "login": "kklemon", "id": 1430243, "node_id": "MDQ6VXNlcjE0MzAyNDM=", "avatar_url": "https://avatars.githubusercontent.com/u/1430243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kklemon", "html_url": "https://github.com/kklemon", "followers_url": "https://api.github.com/users/kklemon/followers", "following_url": "https://api.github.com/users/kklemon/following{/other_user}", "gists_url": "https://api.github.com/users/kklemon/gists{/gist_id}", "starred_url": "https://api.github.com/users/kklemon/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kklemon/subscriptions", "organizations_url": "https://api.github.com/users/kklemon/orgs", "repos_url": "https://api.github.com/users/kklemon/repos", "events_url": "https://api.github.com/users/kklemon/events{/privacy}", "received_events_url": "https://api.github.com/users/kklemon/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[ "Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:\r\n- the conversion from HF type to PyArrow type is done in `get_nested_type`\r\n- the conversion from Pyarrow type to HF type is done in `generate_from_arrow_type`\r\n- `encode_nested_example` and `decode_nested_example` are used to do user's value (what users see) <-> storage value (what is in the pyarrow.array) if there's any conversion to do" ]
"2023-04-04T09:45:35"
"2023-04-04T16:37:25"
null
NONE
null
### Feature request Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns: ```python import pandas as pd import pyarrow.parquet as pq from datasets import load_dataset # Create categorical sample DataFrame df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category') df.to_parquet('data.parquet') # Read back as pyarrow table table = pq.read_table('data.parquet') print(table.schema) # type: dictionary<values=string, indices=int32, ordered=0> # Load with huggingface datasets load_dataset('parquet', data_files='data.parquet') ``` Error: ``` Traceback (most recent call last): File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single writer.write_table(table) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table self._build_writer(inferred_schema=pa_table.schema) File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer inferred_features = Features.from_arrow_schema(inferred_schema) File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp> obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema} File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table NotImplementedError ``` ### Motivation Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow` can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature. ### Your contribution I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5706/timeline
null
null
null
null
false
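Until dictionary-encoded (categorical) Parquet columns are supported, a hedged workaround sketch based on the discussion above: decode the column to plain strings before building the dataset, then re-encode it as a `ClassLabel` on the datasets side:

```python
import pandas as pd
from datasets import Dataset

# 'data.parquet' is the file from the reproduction snippet above; pandas reads
# the dictionary-encoded column back as a categorical dtype.
df = pd.read_parquet("data.parquet")
df["type"] = df["type"].astype(str)   # drop the Arrow dictionary encoding

ds = Dataset.from_pandas(df)
ds = ds.class_encode_column("type")   # Value("string") -> integer-backed ClassLabel
print(ds.features)
```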
https://api.github.com/repos/huggingface/datasets/issues/5705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5705/comments
https://api.github.com/repos/huggingface/datasets/issues/5705/events
https://github.com/huggingface/datasets/issues/5705
1,653,500,383
I_kwDODunzps5ijmnf
5,705
Getting next item from IterableDataset took forever.
{ "login": "HongtaoYang", "id": 16588434, "node_id": "MDQ6VXNlcjE2NTg4NDM0", "avatar_url": "https://avatars.githubusercontent.com/u/16588434?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HongtaoYang", "html_url": "https://github.com/HongtaoYang", "followers_url": "https://api.github.com/users/HongtaoYang/followers", "following_url": "https://api.github.com/users/HongtaoYang/following{/other_user}", "gists_url": "https://api.github.com/users/HongtaoYang/gists{/gist_id}", "starred_url": "https://api.github.com/users/HongtaoYang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/HongtaoYang/subscriptions", "organizations_url": "https://api.github.com/users/HongtaoYang/orgs", "repos_url": "https://api.github.com/users/HongtaoYang/repos", "events_url": "https://api.github.com/users/HongtaoYang/events{/privacy}", "received_events_url": "https://api.github.com/users/HongtaoYang/received_events", "type": "User", "site_admin": false }
[]
closed
false
null
[]
null
[ "Hi! It can take some time to iterate over Parquet files as big as yours, convert the samples to Python, and find the first one that matches a filter predicate before yielding it...", "Thanks @mariosasko, I figured it was the filter operation. I'm closing this issue because it is not a bug, it is the expected beheaviour." ]
"2023-04-04T09:16:17"
"2023-04-05T23:35:41"
"2023-04-05T23:35:41"
NONE
null
### Describe the bug I have a large dataset, about 500GB. The format of the dataset is parquet. I then load the dataset and try to get the first item ```python def get_one_item():   dataset = load_dataset("path/to/datafiles", split="train", cache_dir=".", streaming=True)   dataset = dataset.filter(lambda example: example['text'].startswith('Ar'))   print(next(iter(dataset))) ``` However, this function never finishes. I waited ~10 mins, the function was still running, so I killed the process. I'm now using `line_profiler` to profile how long it would take to return one item. I'll be patient and wait for as long as it needs. I suspect the filter operation is the reason why it took so long. Can I get some possible reasons behind this? ### Steps to reproduce the bug Unfortunately without my data files, there is no way to reproduce this bug. ### Expected behavior With `IterableDataset`, I expect the first item to be returned instantly. ### Environment info - datasets version: 2.11.0 - python: 3.7.12
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5705/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5704/comments
https://api.github.com/repos/huggingface/datasets/issues/5704/events
https://github.com/huggingface/datasets/pull/5704
1,653,471,356
PR_kwDODunzps5NkEvJ
5,704
5537 speedup load
{ "login": "semajyllek", "id": 35013374, "node_id": "MDQ6VXNlcjM1MDEzMzc0", "avatar_url": "https://avatars.githubusercontent.com/u/35013374?v=4", "gravatar_id": "", "url": "https://api.github.com/users/semajyllek", "html_url": "https://github.com/semajyllek", "followers_url": "https://api.github.com/users/semajyllek/followers", "following_url": "https://api.github.com/users/semajyllek/following{/other_user}", "gists_url": "https://api.github.com/users/semajyllek/gists{/gist_id}", "starred_url": "https://api.github.com/users/semajyllek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/semajyllek/subscriptions", "organizations_url": "https://api.github.com/users/semajyllek/orgs", "repos_url": "https://api.github.com/users/semajyllek/repos", "events_url": "https://api.github.com/users/semajyllek/events{/privacy}", "received_events_url": "https://api.github.com/users/semajyllek/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Awesome ! cc @mariosasko :)", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5704). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this!\r\n\r\nYour solution only works if the `root` is `\"\"`, e.g., this would yield an incorrect result:\r\n```python\r\ndset = load_dataset(\"user/hf-dataset-repo\", data_dir=\"path/to/data_dir\")\r\n```\r\n\r\nAlso, the `HfFileSystem` implementation in `datasets` will be replaced with the more powerful [one](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_file_system.py) from `huggingface_hub` soon (I plan to open a PR that makes `find` much faster in the coming days). \r\n\r\nSo I don't think we want to merge this PR in the current state, but thanks again for the effort.\r\n\r\n (I'll comment on the original issue to propose a cleaner solution)", "Ooof. Sorry, I should have checked that more thoroughly then! I would say we could just add that check and only use my approach if the root is \"\", which would still be faster in many cases, but it sounds like you have a better solution on the way. Thanks for the feedback Mario." ]
"2023-04-04T08:58:14"
"2023-04-07T16:10:55"
null
NONE
null
I reimplemented fsspec.spec.glob() in `hffilesystem.py` as `_glob`, used it in `_resolve_single_pattern_in_dataset_repository` only, and saw a 20% speedup in times to load the config, on average. That's not much when usually this step takes only 2-3 seconds for most datasets, but in this particular case, `bigcode/the-stack-dedup` , the loading time to get the config (not download the entire 6tb dataset, of course), went from ~170 secs to ~20 secs. What makes this work is this code in `_glob`: ``` if self.dir_cache is not None: allpaths = self.dir_cache else: allpaths = self.find(root, maxdepth=depth, withdirs=True, detail=True, **kwargs) ``` I also had to `import glob.has_magic( )` for `_glob()` (confusing, I know). I hope there is no issue with copying most of the code from `fsspec.spec.glob`, as it is a BSD 3-Clause License, and I left a comment about this in the docstring of` _glob()`, that we may want to delete. As mentioned, I evaluated the speedup across a random selection of about 1000 datasets (not all 27k+), and verified that old_config.eq(new_method_config) with the build in method, but deleted this test and related code changes on the subsequent commit. It's in the commit history if anyone wants to see it. (Note this does not include the outlier of `bigcode/the-stack-dedup` | | old_time | new _time | diff | pct_diff | | -- | -- | -- | -- | -- | | mean | 3.340 | 2.642 | 0.698 | 18.404 | | min | 2.024 | 1.976 | -0.840 | -37.634 | | max | 66.582 | 41.517 | 30.927 | 85.538 |
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5704/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5704", "html_url": "https://github.com/huggingface/datasets/pull/5704", "diff_url": "https://github.com/huggingface/datasets/pull/5704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5704.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5703/comments
https://api.github.com/repos/huggingface/datasets/issues/5703/events
https://github.com/huggingface/datasets/pull/5703
1,653,158,955
PR_kwDODunzps5NjCCV
5,703
[WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only
{ "login": "hvaara", "id": 1535968, "node_id": "MDQ6VXNlcjE1MzU5Njg=", "avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hvaara", "html_url": "https://github.com/hvaara", "followers_url": "https://api.github.com/users/hvaara/followers", "following_url": "https://api.github.com/users/hvaara/following{/other_user}", "gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}", "starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hvaara/subscriptions", "organizations_url": "https://api.github.com/users/hvaara/orgs", "repos_url": "https://api.github.com/users/hvaara/repos", "events_url": "https://api.github.com/users/hvaara/events{/privacy}", "received_events_url": "https://api.github.com/users/hvaara/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "`multiprocess` uses `dill` instead of `pickle` for pickling shared objects and, as such, can pickle more types than `multiprocessing`. And I don't think this is something we want to change :).", "That makes sense to me, and I don't think you should merge this change. I was only curious about the performance impact. I saw the benchmarks that was produced in other PRs, and wanted to get a better understanding of it. I created this PR to see if it got automatically added here.\r\n\r\nIs there a way I can generate those benchmarks myself?", "You can find some speed comparisons between dill and pickle on SO if you google \"dill vs pickle speed\".\r\n\r\nAnd for the benchmarks, you can generate them locally with DVC running this code from the repo root: https://github.com/huggingface/datasets/blob/0803a006db1c395ac715662cc6079651f77c11ea/.github/workflows/benchmarks.yaml#L23-L47." ]
"2023-04-04T04:37:49"
"2023-04-05T12:59:41"
null
NONE
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5703/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5703", "html_url": "https://github.com/huggingface/datasets/pull/5703", "diff_url": "https://github.com/huggingface/datasets/pull/5703.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5703.patch", "merged_at": null }
true
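A small sketch illustrating the dill-vs-pickle point made in the first comment on this PR: `multiprocess` relies on dill, which can serialize objects (such as lambdas) that the standard pickle module rejects:

```python
import pickle
import dill  # dependency of `datasets`, used by `multiprocess`

fn = lambda x: x + 1

# dill round-trips the lambda without trouble.
print(dill.loads(dill.dumps(fn))(1))  # 2

# pickle cannot serialize a lambda and raises PicklingError.
try:
    pickle.dumps(fn)
except pickle.PicklingError as err:
    print("pickle failed:", err)
```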
https://api.github.com/repos/huggingface/datasets/issues/5702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5702/comments
https://api.github.com/repos/huggingface/datasets/issues/5702/events
https://github.com/huggingface/datasets/issues/5702
1,653,104,720
I_kwDODunzps5iiGBQ
5,702
Is it possible or how to define a `datasets.Sequence` that could potentially be either a dict, a str, or None?
{ "login": "gitforziio", "id": 10508116, "node_id": "MDQ6VXNlcjEwNTA4MTE2", "avatar_url": "https://avatars.githubusercontent.com/u/10508116?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gitforziio", "html_url": "https://github.com/gitforziio", "followers_url": "https://api.github.com/users/gitforziio/followers", "following_url": "https://api.github.com/users/gitforziio/following{/other_user}", "gists_url": "https://api.github.com/users/gitforziio/gists{/gist_id}", "starred_url": "https://api.github.com/users/gitforziio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gitforziio/subscriptions", "organizations_url": "https://api.github.com/users/gitforziio/orgs", "repos_url": "https://api.github.com/users/gitforziio/repos", "events_url": "https://api.github.com/users/gitforziio/events{/privacy}", "received_events_url": "https://api.github.com/users/gitforziio/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Hi ! `datasets` uses Apache Arrow as backend to store the data, and it requires each column to have a fixed type. Therefore a column can't have a mix of dicts/lists/strings.\r\n\r\nThough it's possible to have one (nullable) field for each type:\r\n```python\r\nfeatures = Features({\r\n \"text_alone\": Value(\"string\"),\r\n \"text_with_idxes\": {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": Value(\"int64\")\r\n }\r\n})\r\n```\r\n\r\nbut you'd have to reformat your data fiels or define a [dataset loading script](https://huggingface.co/docs/datasets/dataset_script) to apply the appropriate parsing.\r\n\r\nAlternatively we could explore supporting the Arrow [Union](https://arrow.apache.org/docs/python/generated/pyarrow.UnionType.html) type which could solve this issue, but I don't know if it's well supported in python and with the rest of the ecosystem like Parquet", "@lhoestq Thank you! I further wonder if it's possible to use list subscripts as keys of a feature? Like\r\n```python\r\nfeatures = Features({\r\n 0: Value(\"string\"),\r\n 1: {\r\n \"text\": Value(\"string\"),\r\n \"idxes\": [Value(\"int64\")]\r\n },\r\n 2: Value(\"string\"),\r\n # ...\r\n})\r\n```", "Column names need to be strings, so you could use \"1\", \"2\", etc. or give appropriate column names", "@lhoestq Got it. Thank you!" ]
"2023-04-04T03:20:43"
"2023-04-05T14:15:18"
"2023-04-05T14:15:17"
NONE
null
### Feature request Hello! Apologies if my question sounds naive: I was wondering if it’s possible, or how one would go about defining a 'datasets.Sequence' element in datasets.Features that could potentially be either a dict, a str, or None? Specifically, I’d like to define a feature for a list that contains 18 elements, each of which has been pre-defined as either a `dict or None` or `str or None` - as demonstrated in the slightly misaligned data provided below: ```json [ [ {"text":"老妇人","idxes":[0,1,2]},null,{"text":"跪","idxes":[3]},null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,null,null,null,null,null,null,null,null,null], [ {"text":"那些水","idxes":[13,14,15]},null,{"text":"舀","idxes":[11]},null,null,null,null,null,{"text":"在那坑里","idxes":[4,5,6,7]},null,{"text":"出","idxes":[12]},null,null,null,null,null,null,null], [ {"text":"水","idxes":[38]}, null, {"text":"舀","idxes":[40]}, "假", // note this is just a standalone string null,null,null,{"text":"坑里","idxes":[35,36]},null,null,null,null,null,null,null,null,null,null]] ``` ### Motivation I'm currently working with a dataset of the following structure and I couldn't find a solution in the [documentation](https://huggingface.co/docs/datasets/v2.11.0/en/package_reference/main_classes#datasets.Features). ```json {"qid":"3-train-1058","context":"桑桑害怕了。从玉米地里走到田埂上,他遥望着他家那幢草房子里的灯光,知道母亲没有让他回家的意思,很伤感,有点想哭。但没哭,转身朝阿恕家走去。","corefs":[[{"text":"桑桑","idxes":[0,1]},{"text":"他","idxes":[17]}]],"non_corefs":[],"outputs":[[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[11]},null,null,null,null,null,{"text":"从玉米地里","idxes":[6,7,8,9,10]},{"text":"到田埂上","idxes":[12,13,14,15]},null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},null,{"text":"走","idxes":[66]},null,null,null,null,null,null,null,{"text":"转身朝阿恕家去","idxes":[60,61,62,63,64,65,67]},null,null,null,null,null,null,null],[{"text":"灯光","idxes":[30,31]},null,null,null,null,null,null,{"text":"草房子里","idxes":[25,26,27,28]},null,null,null,null,null,null,null,null,null,null],[{"text":"他","idxes":[17]},{"text":"他家那幢草房子","idxes":[21,22,23,24,25,26,27]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"远"],[{"text":"他","idxes":[17]},{"text":"阿恕家","idxes":[63,64,65]},null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,"变近"]]} ``` ### Your contribution I'm going to provide the dataset at https://huggingface.co/datasets/2030NLP/SpaCE2022 .
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5702/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5701/comments
https://api.github.com/repos/huggingface/datasets/issues/5701/events
https://github.com/huggingface/datasets/pull/5701
1,652,931,399
PR_kwDODunzps5NiSCy
5,701
Add Dataset.from_spark
{ "login": "maddiedawson", "id": 106995444, "node_id": "U_kgDOBmCe9A", "avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maddiedawson", "html_url": "https://github.com/maddiedawson", "followers_url": "https://api.github.com/users/maddiedawson/followers", "following_url": "https://api.github.com/users/maddiedawson/following{/other_user}", "gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}", "starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions", "organizations_url": "https://api.github.com/users/maddiedawson/orgs", "repos_url": "https://api.github.com/users/maddiedawson/repos", "events_url": "https://api.github.com/users/maddiedawson/events{/privacy}", "received_events_url": "https://api.github.com/users/maddiedawson/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5701). All of your documentation changes will be reflected on that endpoint.", "@mariosasko Would you or another HF datasets maintainer be able to review this, please?", "Amazing ! Great job @maddiedawson \r\n\r\nDo you know if it's possible to also support writing to Parquet using the HF ParquetWriter if `file_format=\"parquet\"` ?\r\n\r\nParquet is often used when people want to stream the data to train models - which is suitable for big datasets. On the other hand Arrow is generally used for local memory mapping with random access.\r\n\r\n> Please note there was a previous PR adding this functionality\r\n\r\nAm I right to say that it uses the spark workers to prepare the Arrow files ? If so this should make the data preparation fast and won't fill up the executor's memory as in the previously proposed PR", "Thanks for taking a look! Unlike the previous PR's approach, this implementation takes advantage of Spark mapping to distribute file writing over multiple tasks. (Also it doesn't load the entire dataset into memory :) )\r\n\r\nSupporting Parquet here sgtm; I'll modify the PR.\r\n\r\nI also updated the PR description with a common Spark-HF use case that we want to improve." ]
"2023-04-03T23:51:29"
"2023-04-10T04:47:15"
null
NONE
null
Adds static method Dataset.from_spark to create datasets from Spark DataFrames. This approach alleviates users of the need to materialize their dataframe---a common use case is that the user loads their dataset into a dataframe, uses Spark to apply some transformation to some of the columns, and then wants to train on the dataset. https://github.com/huggingface/datasets/issues/5678
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5701/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5701/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5701", "html_url": "https://github.com/huggingface/datasets/pull/5701", "diff_url": "https://github.com/huggingface/datasets/pull/5701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5701.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5700/comments
https://api.github.com/repos/huggingface/datasets/issues/5700/events
https://github.com/huggingface/datasets/pull/5700
1,652,527,530
PR_kwDODunzps5Ng6g_
5,700
fix: fix wrong modification of the 'cache_file_name' -related paramet…
{ "login": "FrancoisNoyez", "id": 47528215, "node_id": "MDQ6VXNlcjQ3NTI4MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancoisNoyez", "html_url": "https://github.com/FrancoisNoyez", "followers_url": "https://api.github.com/users/FrancoisNoyez/followers", "following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}", "gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions", "organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs", "repos_url": "https://api.github.com/users/FrancoisNoyez/repos", "events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Have you tried to set the cache file names if `keep_in_memory`is True ?\r\n\r\n```diff\r\n- if self.cache_files:\r\n+ if self.cache_files and not keep_in_memory:\r\n```\r\n\r\nThis way it doesn't change the indice cache arguments and leave them as `None`", "@lhoestq \r\nRegarding what you suggest:\r\nThe thing is, if cached files already exist and do correspond to the split that we are currently trying to perform, then it would be a shame not to use them, would it not? So I don't think that we should necessarily bypass this step in the method (corresponding to the reading of already existing data), if 'keep_in_memory' = True. For me, 'keep_in_memory' = True is supposed to mean \"don't cache the output of this method\", but it should say nothing regarding what to do with potentially already existing cached data, should it?\r\nBesides, even if we do what you suggest, and do only that (so, not the modifs that I suggested), then, assuming that 'keep_in_memory' = False and that there exist cached files, if the following check on the existence of cached files with specific name fails, we will still have ended up modifying an input value which will be then used in the remaining of the method, potentially altering the behavior that the user intended the method's call to have. Basically, the issue with what you suggest is that we can't guaranty that we won't continue with the remaining of the method even if this condition is met. Because of that, in my opinion, the best way to not have to worry about potential, unwanted side effects in the rest of the code is to not modify those variables in place, and so, here, to use other variables.\r\nSo, I'm sorry, but for those two reasons, I don't think that what you are suggesting addresses the problems which are described in the opened issue.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5700). All of your documentation changes will be reflected on that endpoint.", "Makes sense ! Therefore removing the ValueError messages sounds good to me, thanks for detailing.\r\n\r\nThen I think it's fine to keep using the same variables for the cache file names is enough instead of defining new ones - it doesn't alter the behavior of the function. Otherwise it would feel a bit confusing to have similar variables with slightly modified names just for that", "Ok for the removing the ValueError exceptions, thanks.\r\n\r\nThat said, it seems to me like we should still find a way not to modify the values input by the user, insofar as they can be used elsewhere down the line in the program. Sure, here, by removing the raising of those ValueError exceptions, we have fixed one use cases were allowing this modification actually caused an issue, but maybe there are other use cases where this would also caused an issue? Also, maybe in the future we will add other functionalities which will depend on the values of those input parameters, with then new risks of such an issue occurring?\r\nThat's why, in order not to have to worry about that, and in order to make the code a bit more future -proof, I suggest that make sure those input values are not modified.\r\n\r\nOne way that I did this is to create different but similar looking variable names. If you find this confusing, we can always add a comment.\r\nAnother way would be to not store the result of the conditional definition of the values (the '\\_cache_file_name = (... if condition else ...)' in my proposition of code), and to use it every time we need. 
But since we use those new variables at least twice, that creates code redundancy, which is not great either.\r\nFinally, a third way that I can imagine would be to put all this logic into its own method, which would then encapsulate it, and protect the remaining of the 'train_test_split' code from all unintended side effect that this logic can currently cause. This one is probably best. Also, maybe it could be used to remove some code redundancy elsewhere in the definition of the Dataset class? I have not checked if such a code redundancy exists.", "We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nNote that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though, but it should be easy to add in `_select_with_indices_mapping`:\r\n- add keep_in_memory in `_new_dataset_with_indices` that uses InMemoryTable.from_file\r\n- inside `_select_with_indices_mapping` return the dataset from `_new_dataset_with_indices` if:\r\n - `keep_in_memory=True`\r\n - and `indices_cache_file_name` is not None and exists \r\n - and `is_caching_enabled()`\r\n\r\nBecause if we let it this way it would recreate the cache file unfortunately", "> We're already replacing the user's input by default values automatically in other methods, it's fine to do it here as well and actually fits the library's style.\r\n\r\nI think the fact that it's a style of the library is not really an argument in itself; however, after thinking through it several times, I think I know see why your solution is acceptable: as soon as the user specifies that 'keep_in_memory=True', they should not care anymore about the value of the '\\_indices_cache_file_name' variables, since from their point of view those are now irrelevant. So it's \"fine\" if we allow ourselves to modify the value of those variables, if it helps the internal code being more concise.\r\nStill, I find that it's a bit unintuitive, and a risk as far as future evolution of the method / of the code is concerned; someone tasked with doing that would need to have the knowledge of a lot of, if not all, the other methods of the class, in order to understand the potentially far-reaching impact of some modifications made to this portion of the code. But I guess that's a choice which is the library's owners to make. Also, if we use your proposed solution, as I explained, we can't get the benefit of potentially reusing possibly already existing cached data.\r\nOn that note...\r\n\r\n> Note that the case where it would reload the cache even if `keep_in_memory=True` is not implemented though\r\n\r\nI'm not sure what you mean here:\r\nWithin the current code trying to load up the potentially already existing split data, there is no trace of the 'keep_in_memory' variable. So why do you say that 'the case where it would reload the cache even if keep_in_memory=True is not implemented' (I assume that you mean 'currently implemented')? Surely, currently, this bit of code works regardless of the value of the 'keep_in_memory' variable', does it not?" ]
"2023-04-03T18:05:26"
"2023-04-06T17:17:27"
null
NONE
null
…ers values in 'train_test_split' + fix bad interaction between 'keep_in_memory' and 'cache_file_name' -related parameters (#5699)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5700/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5700", "html_url": "https://github.com/huggingface/datasets/pull/5700", "diff_url": "https://github.com/huggingface/datasets/pull/5700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5700.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/5699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5699/comments
https://api.github.com/repos/huggingface/datasets/issues/5699/events
https://github.com/huggingface/datasets/issues/5699
1,652,437,419
I_kwDODunzps5ifjGr
5,699
Issue when wanting to split in memory a cached dataset
{ "login": "FrancoisNoyez", "id": 47528215, "node_id": "MDQ6VXNlcjQ3NTI4MjE1", "avatar_url": "https://avatars.githubusercontent.com/u/47528215?v=4", "gravatar_id": "", "url": "https://api.github.com/users/FrancoisNoyez", "html_url": "https://github.com/FrancoisNoyez", "followers_url": "https://api.github.com/users/FrancoisNoyez/followers", "following_url": "https://api.github.com/users/FrancoisNoyez/following{/other_user}", "gists_url": "https://api.github.com/users/FrancoisNoyez/gists{/gist_id}", "starred_url": "https://api.github.com/users/FrancoisNoyez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FrancoisNoyez/subscriptions", "organizations_url": "https://api.github.com/users/FrancoisNoyez/orgs", "repos_url": "https://api.github.com/users/FrancoisNoyez/repos", "events_url": "https://api.github.com/users/FrancoisNoyez/events{/privacy}", "received_events_url": "https://api.github.com/users/FrancoisNoyez/received_events", "type": "User", "site_admin": false }
[]
open
false
null
[]
null
[ "Hi ! Good catch, this is wrong indeed and thanks for opening a PR :)" ]
"2023-04-03T17:00:07"
"2023-04-04T16:52:42"
null
NONE
null
### Describe the bug **In the 'train_test_split' method of the Dataset class** (defined datasets/arrow_dataset.py), **if 'self.cache_files' is not empty**, then, **regarding the input parameters 'train_indices_cache_file_name' and 'test_indices_cache_file_name', if they are None**, we modify them to make them not None, to see if we can just provide back / work from cached data. But if we can't provide cached data, we move on with the call to the method, except those two values are not None anymore, which will conflict with the use of the 'keep_in_memory' parameter down the line. Indeed, at some point we end up calling the 'select' method, **and if 'keep_in_memory' is True**, since the value of this method's parameter 'indices_cache_file_name' is now not None anymore, **an exception is raised, whose message is "Please use either 'keep_in_memory' or 'indices_cache_file_name' but not both.".** Because of that, it's impossible to perform a train / test split of a cached dataset while requesting that the result not be cached. Which is inconvenient when one is just performing experiments, with no intention of caching the result. Aside from this being inconvenient, **the code which lead up to that situation seems simply wrong** to me: the input variable should not be modified so as to change the user's intention just to perform a test, if that test can fail and respecting the user's intention is necessary to proceed in that case. To fix this, I suggest to use other variables / other variable names, in order to host the value(s) needed to perform the test, so as not to change the originally input values needed by the rest of the method's code. Also, **I don't see why an exception should be raised when the 'select' method is called with both 'keep_in_memory'=True and 'indices_cache_file_name'!=None**: should the use of 'keep_in_memory' not prevail anyway, specifying that the user does not want to perform caching, and so making irrelevant the value of 'indices_cache_file_name'? This is indeed what happens when we look further in the code, in the '\_select_with_indices_mapping' method: when 'keep_in_memory' is True, then the value of indices_cache_file_name does not matter, the data will be written to a stream buffer anyway. Hence I suggest to remove the raising of exception in those circumstances. Notably, to remove the raising of it in the 'select', '\_select_with_indices_mapping', 'shuffle' and 'map' methods. ### Steps to reproduce the bug ```python import datasets def generate_examples(): for i in range(10): yield {"id": i} dataset_ = datasets.Dataset.from_generator( generate_examples, keep_in_memory=False, ) dataset_.train_test_split( test_size=3, shuffle=False, keep_in_memory=True, train_indices_cache_file_name=None, test_indices_cache_file_name=None, ) ``` ### Expected behavior The result of the above code should be a DatasetDict instance. 
Instead, we get the following exception stack: ```python --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Cell In[3], line 1 ----> 1 dataset_.train_test_split( 2 test_size=3, 3 shuffle=False, 4 keep_in_memory=True, 5 train_indices_cache_file_name=None, 6 test_indices_cache_file_name=None, 7 ) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:4428, in Dataset.train_test_split(self, test_size, train_size, shuffle, stratify_by_column, seed, generator, keep_in_memory, load_from_cache_file, train_indices_cache_file_name, test_indices_cache_file_name, writer_batch_size, train_new_fingerprint, test_new_fingerprint) 4425 test_indices = permutation[:n_test] 4426 train_indices = permutation[n_test : (n_test + n_train)] -> 4428 train_split = self.select( 4429 indices=train_indices, 4430 keep_in_memory=keep_in_memory, 4431 indices_cache_file_name=train_indices_cache_file_name, 4432 writer_batch_size=writer_batch_size, 4433 new_fingerprint=train_new_fingerprint, 4434 ) 4435 test_split = self.select( 4436 indices=test_indices, 4437 keep_in_memory=keep_in_memory, (...) 4440 new_fingerprint=test_new_fingerprint, 4441 ) 4443 return DatasetDict({"train": train_split, "test": test_split}) File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:528, in transmit_format.<locals>.wrapper(*args, **kwargs) 521 self_format = { 522 "type": self._format_type, 523 "format_kwargs": self._format_kwargs, 524 "columns": self._format_columns, 525 "output_all_columns": self._output_all_columns, 526 } 527 # apply actual function --> 528 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 529 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 530 # re-apply format to the output File ~/Work/Developments/datasets/src/datasets/fingerprint.py:511, in fingerprint_transform.<locals>._fingerprint.<locals>.wrapper(*args, **kwargs) 507 validate_fingerprint(kwargs[fingerprint_name]) 509 # Call actual function --> 511 out = func(dataset, *args, **kwargs) 513 # Update fingerprint of in-place transforms + update in-place history of transforms 515 if inplace: # update after calling func so that the fingerprint doesn't change if the function fails File ~/Work/Developments/datasets/src/datasets/arrow_dataset.py:3679, in Dataset.select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint) 3645 """Create a new dataset with rows selected following the list/array of indices. 
3646 3647 Args: (...) 3676 ``` 3677 """ 3678 if keep_in_memory and indices_cache_file_name is not None: -> 3679 raise ValueError("Please use either `keep_in_memory` or `indices_cache_file_name` but not both.") 3681 if len(self.list_indexes()) > 0: 3682 raise DatasetTransformationNotAllowedError( 3683 "Using `.select` on a dataset with attached indexes is not allowed. You can first run `.drop_index() to remove your index and then re-add it." 3684 ) ValueError: Please use either `keep_in_memory` or `indices_cache_file_name` but not both. ``` ### Environment info - `datasets` version: 2.11.1.dev0 - Platform: Linux-5.4.236-1-MANJARO-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - Huggingface_hub version: 0.13.3 - PyArrow version: 11.0.0 - Pandas version: 2.0.0 *** *** EDIT: Now with a pull request to fix this [here](https://github.com/huggingface/datasets/pull/5700)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5699/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5698/comments
https://api.github.com/repos/huggingface/datasets/issues/5698/events
https://github.com/huggingface/datasets/issues/5698
1,652,183,611
I_kwDODunzps5ielI7
5,698
Add Qdrant as another search index
{ "login": "kacperlukawski", "id": 2649301, "node_id": "MDQ6VXNlcjI2NDkzMDE=", "avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kacperlukawski", "html_url": "https://github.com/kacperlukawski", "followers_url": "https://api.github.com/users/kacperlukawski/followers", "following_url": "https://api.github.com/users/kacperlukawski/following{/other_user}", "gists_url": "https://api.github.com/users/kacperlukawski/gists{/gist_id}", "starred_url": "https://api.github.com/users/kacperlukawski/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kacperlukawski/subscriptions", "organizations_url": "https://api.github.com/users/kacperlukawski/orgs", "repos_url": "https://api.github.com/users/kacperlukawski/repos", "events_url": "https://api.github.com/users/kacperlukawski/events{/privacy}", "received_events_url": "https://api.github.com/users/kacperlukawski/received_events", "type": "User", "site_admin": false }
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
open
false
null
[]
null
[]
"2023-04-03T14:25:19"
"2023-04-03T14:25:19"
null
CONTRIBUTOR
null
### Feature request I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, FAISS and ElasticSearch are only supported: https://huggingface.co/docs/datasets/faiss_es ### Motivation ElasticSearch is a keyword-based search system, while FAISS is a vector search library. Vector database, such as Qdrant, is a different tool based on similarity (like FAISS) but is not limited to a single machine. It makes the vector database well-suited for bigger datasets and collaboration if several people want to access a particular dataset. ### Your contribution I can provide a PR implementing that functionality on my own.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions", "total_count": 6, "+1": 6, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/5698/timeline
null
null
null
null
false

Dataset Card for "Hugging Face GitHub Issues

Dataset Summary

GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are in English and concern the domain of datasets for NLP, computer vision, and beyond.
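The card does not yet show how to load the data. As a minimal sketch, assuming the dataset is published on the Hub under an id such as `<namespace>/github-issues` (the actual repository id and split name are not stated here), it could be loaded with the `datasets` library:

```python
from datasets import load_dataset

# "<namespace>/github-issues" is a placeholder -- substitute the real Hub id of this dataset.
issues_dataset = load_dataset("<namespace>/github-issues", split="train")

print(issues_dataset)               # row count and column names
print(issues_dataset[0]["title"])   # title of the first issue (field name assumed from the GitHub issue schema)
```

For the semantic-search use mentioned above, one common pattern is to embed a text field and attach a FAISS index. The sketch below assumes a `body` column holding the issue text and that the `sentence-transformers` and `faiss-cpu` packages are installed; none of these are prescribed by the card itself.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model, not mandated by the card

# Embed each issue body (empty string for missing bodies) and index the embeddings.
issues_with_embeddings = issues_dataset.map(
    lambda row: {"embeddings": model.encode(row["body"] or "")}
)
issues_with_embeddings.add_faiss_index(column="embeddings")

query = model.encode("How can I build a search index over a dataset?")
scores, hits = issues_with_embeddings.get_nearest_examples("embeddings", query, k=3)
print(hits["title"])
```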

Supported Tasks and Leaderboards

Languages

English

Dataset Structure

Data Instances
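This section is still empty in the card. As a rough sketch, a single instance is a Python dict describing one issue or pull request; the field names below are assumptions based on the GitHub REST issue schema, since the card does not list them.

```python
from datasets import load_dataset

# Placeholder id, as in the loading sketch above.
issues_dataset = load_dataset("<namespace>/github-issues", split="train")
example = issues_dataset[0]

# Field names are assumptions; check issues_dataset.column_names on the real dataset.
print(example["title"], example["state"])
print(len(example["comments"]))  # the comments field appears to hold a list of comment strings
```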

Data Fields

Data Splits
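The card does not document any official splits. If the dataset ships as a single split, a held-out evaluation set could be carved out with `Dataset.train_test_split`; this is only a sketch under that assumption.

```python
from datasets import load_dataset

# Placeholder id; assumes the dataset has a single "train" split.
issues_dataset = load_dataset("<namespace>/github-issues", split="train")

splits = issues_dataset.train_test_split(test_size=0.1, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```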

Dataset Creation

Curation Rationale

Source Data

Initial Data Collection and Normalization

Who are the source language producers?

Annotations

Annotation process

Who are the annotators?

Personal and Sensitive Information

Considerations for Using the Data

Social Impact of Dataset

Discussion of Biases

Other Known Limitations

Additional Information

Dataset Curators

Licensing Information

Citation Information

Contributions
