| column | dtype | min / classes | max |
| --- | --- | --- | --- |
| id | int64 | 599M | 3.29B |
| url | stringlengths | 58 | 61 |
| html_url | stringlengths | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | stringlengths | 1 | 290 |
| state | stringclasses | 2 values | — |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s]date | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s]date | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s]date | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | stringlengths | 3 | 26 |
| labels | listlengths | 0 | 4 |
| body | stringlengths | 0 | 228k |
| is_pull_request | bool | 2 classes | — |
1,388,201,146
https://api.github.com/repos/huggingface/datasets/issues/5031
https://github.com/huggingface/datasets/pull/5031
5,031
Support hfh 0.10 implicit auth
closed
4
2022-09-27T18:37:49
2022-09-30T09:18:24
2022-09-30T09:15:59
lhoestq
[]
In huggingface-hub 0.10 the `token` parameter is deprecated for dataset_info and list_repo_files in favor of use_auth_token. Moreover, if use_auth_token=None, the user's token is used implicitly. I took those two changes into account. Close https://github.com/huggingface/datasets/issues/4990 TODO: - [x] fix tests We should wait for hfh 0.10 to be released first to make sure it works correctly before merging.
true
1,388,061,340
https://api.github.com/repos/huggingface/datasets/issues/5030
https://github.com/huggingface/datasets/pull/5030
5,030
Fast dataset iter
closed
2
2022-09-27T16:44:51
2022-09-29T15:50:44
2022-09-29T15:48:17
mariosasko
[]
Use `pa.Table.to_reader` to make iteration over examples/batches faster in `Dataset.{__iter__, map}` TODO: * [x] benchmarking (the only benchmark for now - iterating over (single) examples of `bookcorpus` (75 mil examples) in Colab is approx. 2.3x faster) * [x] check if iterating over bigger chunks + slicing to fetch individual examples in `_iter` yields better performance
true
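For context on the PR above (issue 5030), a minimal sketch of the pyarrow mechanism it relies on: `pa.Table.to_reader` yields record batches that can be iterated without repeatedly slicing the table. The table contents and chunk size below are illustrative, not the actual implementation.
```python
import pyarrow as pa

# Build a small in-memory table (placeholder data).
table = pa.table({"text": [f"example {i}" for i in range(10_000)], "label": list(range(10_000))})

# to_reader returns a RecordBatchReader; iterating it yields RecordBatch objects
# instead of slicing the Table row by row, which is what speeds up per-example
# iteration in Dataset.__iter__ / map.
for batch in table.to_reader(max_chunksize=1000):
    rows = batch.to_pylist()  # list of dicts, one per row
    first = rows[0]
```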
1,387,600,960
https://api.github.com/repos/huggingface/datasets/issues/5029
https://github.com/huggingface/datasets/pull/5029
5,029
Fix import in `ClassLabel` docstring example
closed
1
2022-09-27T11:35:29
2022-09-27T14:03:24
2022-09-27T12:27:50
alvarobartt
[]
This PR addresses a super-simple fix: adding a missing `import` to the `ClassLabel` docstring example, as it was formatted as `from datasets Features`, so it's been fixed to `from datasets import Features`.
true
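For context on PR 5029 above, a hedged sketch of what the corrected docstring example looks like once the missing `import` is in place; the feature names are placeholders, not the exact docstring content.
```python
from datasets import ClassLabel, Features

# A Features mapping with a ClassLabel column; "bad"/"good" are illustrative names.
features = Features({"label": ClassLabel(names=["bad", "good"])})
print(features["label"].int2str(1))  # "good"
```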
1,386,272,533
https://api.github.com/repos/huggingface/datasets/issues/5028
https://github.com/huggingface/datasets/issues/5028
5,028
passing parameters to the method passed to Dataset.from_generator()
closed
1
2022-09-26T15:20:06
2022-10-03T13:00:00
2022-10-03T13:00:00
Basir-mahmood
[ "enhancement" ]
Big thanks for providing dataset creation via a generator. I want to ask whether there is any way to pass parameters to the method given to Dataset.from_generator(), as follows. ``` from datasets import Dataset def gen(param1): for idx in range(len(custom_dataset)): yield custom_dataset[idx] + param1 ds = Dataset.from_generator(gen(param1)) ```
false
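A minimal sketch of two ways to do what issue 5028 above asks for, assuming a `datasets` version whose `Dataset.from_generator` accepts a `gen_kwargs` argument; `functools.partial` works as a fallback. The generator and its data are illustrative.
```python
from functools import partial
from datasets import Dataset

custom_dataset = ["a", "b", "c"]  # placeholder data

def gen(param1):
    # Yield one dict per example so the features can be inferred.
    for idx in range(len(custom_dataset)):
        yield {"text": custom_dataset[idx] + param1}

# Option 1: pass parameters through gen_kwargs (keeps the generator callable itself unparameterized).
ds = Dataset.from_generator(gen, gen_kwargs={"param1": "!"})

# Option 2: bind the parameter with functools.partial and pass the resulting callable.
ds2 = Dataset.from_generator(partial(gen, param1="!"))
```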
1,386,153,072
https://api.github.com/repos/huggingface/datasets/issues/5027
https://github.com/huggingface/datasets/pull/5027
5,027
Fix typo in error message
closed
1
2022-09-26T14:10:09
2022-09-27T12:28:03
2022-09-27T12:26:02
severo
[]
null
true
1,386,071,154
https://api.github.com/repos/huggingface/datasets/issues/5026
https://github.com/huggingface/datasets/pull/5026
5,026
patch CI_HUB_TOKEN_PATH with Path instead of str
closed
1
2022-09-26T13:19:01
2022-09-26T14:30:55
2022-09-26T14:28:45
Wauplin
[]
Should fix the tests for the `huggingface_hub==0.10.0rc0` prerelease (see [failed CI](https://github.com/huggingface/datasets/actions/runs/3127805250/jobs/5074879144)). Related to [this thread](https://huggingface.slack.com/archives/C02V5EA0A95/p1664195165294559) (internal link). Note: this should be a backward-compatible fix (i.e., it also works with previous versions of `huggingface_hub`). I am not sure where to put the changes, so feel free to cherry-pick the commit and close this one without merging. cc @lhoestq
true
1,386,011,239
https://api.github.com/repos/huggingface/datasets/issues/5025
https://github.com/huggingface/datasets/issues/5025
5,025
Custom Json Dataset Throwing Error when batch is False
closed
2
2022-09-26T12:38:39
2022-09-27T19:50:00
2022-09-27T19:50:00
jmandivarapu1
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. I tried to create my custom dataset using below code ``` from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D from torchvision import transforms from transformers import AutoProcessor # we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes, # based on the checkpoint we provide from the hub from datasets import load_dataset def prepare_examples(examples): #Some preporcessing for each image and text as all my data saved in cloud #For this reason I couldn't set the batch to True. encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels, truncation=True, padding="max_length") # encoding['pixel_values']=np.array(encoding['pixel_values']) return encoding dataset = load_dataset("json", data_files='issues.jsonl') processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) features = dataset["train"].features column_names = dataset["train"].column_names # we need to define custom features for `set_format` (used later on) to work properly features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')), }) train_dataset = dataset["train"].map( prepare_examples, batched=False, remove_columns=column_names, features=features ) ``` It throws below error. ``` /opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type) 172 storage = to_pyarrow_listarray(data, pa_type) --> 173 return pa.ExtensionArray.from_storage(pa_type, storage) 174 /opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage() TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>> ``` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug ``` rom datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D from torchvision import transforms from transformers import AutoProcessor # we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes, # based on the checkpoint we provide from the hub from datasets import load_dataset def prepare_examples(examples): #Some preporcessing for each image and text as all my data saved in cloud encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels, truncation=True, padding="max_length") # encoding['pixel_values']=np.array(encoding['pixel_values']) return encoding dataset = load_dataset("json", data_files='issues.jsonl') processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False) features = dataset["train"].features column_names = dataset["train"].column_names # we need to define custom features for `set_format` (used later on) to work properly features = Features({ 'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)), 'input_ids': Sequence(feature=Value(dtype='int64')), 'attention_mask': Sequence(Value(dtype='int64')), 'bbox': Array2D(dtype="int64", shape=(512, 4)), 'labels': Sequence(feature=Value(dtype='int64')), }) train_dataset = dataset["train"].map( prepare_examples, batched=False, remove_columns=column_names, features=features ) ## Expected results A clear and concise description of the expected results. 
The expected result would be similar to all the other datasets, with no error. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Unix - Python version: 3.9 - PyArrow version: 9.0.0
false
1,385,947,624
https://api.github.com/repos/huggingface/datasets/issues/5024
https://github.com/huggingface/datasets/pull/5024
5,024
Fix string features of xcsr dataset
closed
1
2022-09-26T11:55:36
2022-09-28T07:56:18
2022-09-28T07:54:19
albertvillanova
[ "dataset contribution" ]
This PR fixes string features of `xcsr` dataset to avoid character splitting. Fix #5023. CC: @yangxqiao, @yuchenlin
true
1,385,881,112
https://api.github.com/repos/huggingface/datasets/issues/5023
https://github.com/huggingface/datasets/issues/5023
5,023
Text strings are split into lists of characters in xcsr dataset
closed
0
2022-09-26T11:11:50
2022-09-28T07:54:20
2022-09-28T07:54:20
albertvillanova
[ "dataset bug" ]
## Describe the bug Text strings are split into lists of characters. Example for "X-CSQA-en": ``` {'id': 'd3845adc08414fda', 'lang': 'en', 'question': {'stem': ['T', 'h', 'e', ' ', 'd', 'e', 'n', 't', 'a', 'l', ' ', 'o', 'f', 'f', 'i', 'c', 'e', ' ', 'h', 'a', 'n', 'd', 'l', 'e', 'd', ' ', 'a', ' ', 'l', 'o', 't', ' ', 'o', 'f', ' ', 'p', 'a', 't', 'i', 'e', 'n', 't', 's', ' ', 'w', 'h', 'o', ' ', 'e', 'x', 'p', 'e', 'r', 'i', 'e', 'n', 'c', 'e', 'd', ' ', 't', 'r', 'a', 'u', 'm', 'a', 't', 'i', 'c', ' ', 'm', 'o', 'u', 't', 'h', ' ', 'i', 'n', 'j', 'u', 'r', 'y', ',', ' ', 'w', 'h', 'e', 'r', 'e', ' ', 'w', 'e', 'r', 'e', ' ', 't', 'h', 'e', 's', 'e', ' ', 'p', 'a', 't', 'i', 'e', 'n', 't', 's', ' ', 'c', 'o', 'm', 'i', 'n', 'g', ' ', 'f', 'r', 'o', 'm', '?'], 'choices': [{'label': ['A'], 'text': ['t', 'o', 'w', 'n']}, {'label': ['B'], 'text': ['m', 'i', 'c', 'h', 'i', 'g', 'a', 'n']}, {'label': ['C'], 'text': ['h', 'o', 's', 'p', 'i', 't', 'a', 'l']}, {'label': ['D'], 'text': ['s', 'c', 'h', 'o', 'o', 'l', 's']}, {'label': ['E'], 'text': ['o', 'f', 'f', 'i', 'c', 'e', ' ', 'b', 'u', 'i', 'l', 'd', 'i', 'n', 'g']}]}, 'answerKey': 'C'} ## Steps to reproduce the bug ```python ds = load_dataset("datasets/xcsr", "X-CSQA-en", split="validation", streaming=True) item = next(iter(ds)) item ``` ## Expected results ``` {'id': 'd3845adc08414fda', 'lang': 'en', 'question': {'stem': 'The dental office handled a lot of patients who experienced traumatic mouth injury, where were these patients coming from?', 'choices': {'label': ['A', 'B', 'C', 'D', 'E'], 'text': ['town', 'michigan', 'hospital', 'schools', 'office building']}}, 'answerKey': 'C'} ```
false
1,385,432,859
https://api.github.com/repos/huggingface/datasets/issues/5022
https://github.com/huggingface/datasets/pull/5022
5,022
Fix languages of X-CSQA configs in xcsr dataset
closed
4
2022-09-26T05:13:39
2022-09-26T12:27:20
2022-09-26T10:57:30
albertvillanova
[ "dataset contribution" ]
Fix #5017. CC: @yangxqiao, @yuchenlin
true
1,385,351,250
https://api.github.com/repos/huggingface/datasets/issues/5021
https://github.com/huggingface/datasets/issues/5021
5,021
Split is inferred from filename and overrides metadata.jsonl
closed
3
2022-09-26T03:22:14
2022-09-29T08:07:50
2022-09-29T08:07:50
float-trip
[ "bug", "duplicate" ]
## Describe the bug Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files. This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder ## Steps to reproduce the bug `metadata.jsonl` ```json {"file_name": "photo of a cat.jpg", "text": "a photo of a cat"} {"file_name": "photo of a dog.jpg", "text": "a photo of a dog"} {"file_name": "photo of a train.jpg", "text": "a photo of a train"} {"file_name": "photo of test tubes.jpg", "text": "a photo of test tubes"} ``` `bug.py` ```python from datasets import load_dataset dataset = load_dataset("dataset") print(dataset) # DatasetDict({ # train: Dataset({ # features: ['image', 'text'], # num_rows: 1 # }) # test: Dataset({ # features: ['image', 'text'], # num_rows: 1 # }) # }) for split in dataset: for n in dataset[split]: print(n['text']) # a photo of a train # a photo of test tubes ``` ## Expected results One single dataset with all four images / a warning for unused files / documentation of this behavior ## Actual results Only the images with "test" or "train" in the name are loaded ## Environment info - `datasets` version: 2.5.1 - Platform: macOS-12.5.1-x86_64-i386-64bit - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
false
1,384,684,078
https://api.github.com/repos/huggingface/datasets/issues/5020
https://github.com/huggingface/datasets/pull/5020
5,020
Fix URLs of sbu_captions dataset
closed
1
2022-09-24T14:00:33
2022-09-28T07:20:20
2022-09-28T07:18:23
donglixp
[ "dataset contribution" ]
Forbidden You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server. Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request. Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.4.16 mod_fcgid/2.3.9 mod_wsgi/3.4 Python/2.7.5 mod_perl/2.0.11 Perl/v5.16.3 Server at [www.cs.virginia.edu](mailto:csroot@virginia.edu) Port 443
true
1,384,673,718
https://api.github.com/repos/huggingface/datasets/issues/5019
https://github.com/huggingface/datasets/pull/5019
5,019
Update swiss judgment prediction
closed
4
2022-09-24T13:28:57
2022-09-28T07:13:39
2022-09-28T05:48:50
JoelNiklaus
[ "dataset contribution" ]
Hi, I updated the dataset to include additional data made available recently. When I test it locally, it seems to work. However, I get the following error with the dummy data creation: `Dummy data generation done but dummy data test failed since splits ['train', 'validation', 'test'] have 0 examples for config 'fr'`. Do you know why this could be the case? Cheers, Joel
true
1,384,146,585
https://api.github.com/repos/huggingface/datasets/issues/5018
https://github.com/huggingface/datasets/pull/5018
5,018
Create all YAML dataset_info
closed
2
2022-09-23T18:08:15
2023-09-24T09:33:21
2022-10-03T17:08:05
lhoestq
[ "dataset contribution" ]
Following https://github.com/huggingface/datasets/pull/4926 Creates all the `dataset_info` YAML fields in the dataset cards The JSON are also updated using the simplified backward compatible format added in https://github.com/huggingface/datasets/pull/4926 Needs https://github.com/huggingface/datasets/pull/4926 to be merged first
true
1,384,022,463
https://api.github.com/repos/huggingface/datasets/issues/5017
https://github.com/huggingface/datasets/issues/5017
5,017
xcsr: X-CSQA simply uses english for all alleged non-english data
closed
1
2022-09-23T16:11:54
2022-09-26T10:57:31
2022-09-26T10:57:31
thesofakillers
[ "dataset bug" ]
## Describe the bug All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description: > we automatically translate the original CSQA and CODAH datasets, which only have English versions, to 15 other languages, forming development and test sets for studying X-CSR ## Steps to reproduce the bug ```python # let's say you want to load the french X-CSQA subcollection french = datasets.load_dataset("xcsr", "X-CSQA-fr") # for good measure, let's load english too english = datasets.load_dataset("xcsr", "X-CSQA-en") # let's inspect "".join(english['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' "".join(french['test'][0]['question']['stem']) # output: 'The people wanted to stop the parade, so what did they set up to thwart it?' # what? Why are they both in english? # I've checked this for validation and train splits too, across many datapoints. It's all the same english dataset # maybe i need to look better? french['test'].unique('lang') # output: ['en'] # no, it's all english ``` ## Expected results Accessing a subcollection in language X should return a subcollection containg samples in language X ## Actual results Accessing a subcollection in language X returns a subcollection containing samples in English. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.5.1 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
false
1,383,883,058
https://api.github.com/repos/huggingface/datasets/issues/5016
https://github.com/huggingface/datasets/pull/5016
5,016
Fix tar extraction vuln
closed
1
2022-09-23T14:22:21
2022-09-29T12:42:26
2022-09-29T12:40:28
lhoestq
[]
Fix for CVE-2007-4559 Description: Directory traversal vulnerability in the (1) extract and (2) extractall functions in the tarfile module in Python allows user-assisted remote attackers to overwrite arbitrary files via a .. (dot dot) sequence in filenames in a TAR archive, a related issue to CVE-2001-1267. I fixed it by using the solution proposed in https://stackoverflow.com/questions/10060069/safely-extract-zip-or-tar-using-python It blocks extraction of files with an absolute path or double dots and symlinks.
true
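A hedged sketch of the kind of guard PR 5016 above describes (not the exact code merged in the PR): reject any tar member whose resolved path would escape the destination directory before calling `extractall`. The archive name is a placeholder.
```python
import os
import tarfile

def is_within_directory(directory: str, target: str) -> bool:
    # Resolve both paths and check that the target stays inside the directory.
    abs_directory = os.path.abspath(directory)
    abs_target = os.path.abspath(target)
    return os.path.commonpath([abs_directory, abs_target]) == abs_directory

def safe_extract(tar: tarfile.TarFile, path: str = ".") -> None:
    # Block absolute paths and ".." traversal before extracting anything.
    for member in tar.getmembers():
        member_path = os.path.join(path, member.name)
        if not is_within_directory(path, member_path):
            raise ValueError(f"Blocked path traversal attempt in tar member: {member.name}")
    tar.extractall(path)

with tarfile.open("archive.tar.gz") as tar:  # placeholder archive name
    safe_extract(tar, "output_dir")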
1,383,485,558
https://api.github.com/repos/huggingface/datasets/issues/5015
https://github.com/huggingface/datasets/issues/5015
5,015
Transfer dataset scripts to Hub
closed
1
2022-09-23T08:48:10
2022-10-05T07:15:57
2022-10-05T07:15:57
albertvillanova
[]
Before merging: - #4974 TODO: - [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22) - [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/) - [x] PRs: - [x] Add dataset: we should recommend transfer all additions of datasets to the Hub, under the appropriate namespace; no more additions of datasets on GitHub - [x] Update dataset: in general, we should merge bug fixes; enhancements should be considered on a case-by-case basis, depending on whether there is a more suitable namespace on the Hub - [ ] Issues Finally: - [x] #4974 Let me know what you think! :hugs:
false
1,383,422,639
https://api.github.com/repos/huggingface/datasets/issues/5014
https://github.com/huggingface/datasets/issues/5014
5,014
I need to read the custom dataset in conll format
open
3
2022-09-23T07:49:42
2022-11-02T11:57:15
null
shell-nlp
[ "enhancement" ]
I need to read a custom dataset in CoNLL format.
false
1,383,415,971
https://api.github.com/repos/huggingface/datasets/issues/5013
https://github.com/huggingface/datasets/issues/5013
5,013
would huggingface like publish cpp binding for datasets package ?
closed
5
2022-09-23T07:42:49
2023-02-24T16:20:57
2023-02-24T16:20:57
mullerhai
[ "wontfix" ]
Hi: I use a C++ environment with libtorch, and I would like to use Hugging Face, but huggingface/datasets has no C++ binding. Would you consider publishing a C++ binding for it? Thanks.
false
1,382,851,096
https://api.github.com/repos/huggingface/datasets/issues/5012
https://github.com/huggingface/datasets/issues/5012
5,012
Force JSON format regardless of file naming on S3
closed
4
2022-09-22T18:28:15
2023-08-16T09:58:36
2023-08-16T09:58:36
junwang-wish
[ "enhancement" ]
I have a file on S3 created by Data Version Control; its key looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but it contains a JSON file. If I run ```python dataset = load_dataset( "json", data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2' ) ``` it gives me ``` InvalidSchema: No connection adapters were found for 's3://dvc/ac/badff5b134382a0f25248f1b45d7b2' ``` However, I cannot go ahead and change the names of the S3 files. Is there a way to "force"-load an S3 URL with a certain decoder (JSON, CSV, etc.) regardless of the S3 URL naming?
false
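One possible workaround for issue 5012 above, sketched under the assumption that the object is newline-delimited JSON and that `s3fs` credentials are already configured; the helper below is illustrative and not a `datasets` API for forcing a format.
```python
import json

import pandas as pd
import s3fs
from datasets import Dataset

fs = s3fs.S3FileSystem()  # assumes AWS credentials are available in the environment

# Read the extension-less DVC object directly and decode it as JSON Lines ourselves,
# bypassing load_dataset's filename-based format detection.
with fs.open("dvc/ac/badff5b134382a0f25248f1b45d7b2", "r") as f:
    records = [json.loads(line) for line in f if line.strip()]

ds = Dataset.from_pandas(pd.DataFrame(records))
```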
1,382,609,587
https://api.github.com/repos/huggingface/datasets/issues/5011
https://github.com/huggingface/datasets/issues/5011
5,011
Audio: `encode_example` fails with IndexError
closed
1
2022-09-22T15:07:27
2022-09-23T09:05:18
2022-09-23T09:05:18
sanchit-gandhi
[ "bug" ]
## Describe the bug Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an Index Error. I created this dataset locally and then pushed to hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functions correctly, and there were no issues when I had the dataset locally. Don't think it's a sound file bug as the version matches what worked previously. Update: the bug appeared for me on a GPU, mysteriously on a TPU I can't repro and it downloads correctly... ## Steps to reproduce the bug ```python from datasets import load_dataset earnings22 = load_dataset("sanchit-gandhi/earnings22_split") ``` ## Expected results ``` >>> earnings22 DatasetDict({ validation: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2650 }) train: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 52006 }) test: Dataset({ features: ['source_id', 'audio', 'segment_id', 'sentence', 'start_ts', 'end_ts', 'id'], num_rows: 2735 }) }) ``` ## Actual results ``` Traceback (most recent call last): File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2764, in _map_single writer.write(example) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 451, in write self.write_examples_on_file() File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 409, in write_examples_on_file self.write_batch(batch_examples=batch_examples) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 508, in write_batch arrays.append(pa.array(typed_sequence)) File "pyarrow/array.pxi", line 231, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/arrow_writer.py", line 197, in __arrow_array__ out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/table.py", line 1795, in cast_array_to_feature return feature.cast_storage(array) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in cast_storage storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 190, in <listcomp> storage = pa.array([Audio().encode_example(x) if x is not None else None for x in storage.to_pylist()]) File "/opt/conda/envs/hf/lib/python3.8/site-packages/datasets/features/audio.py", line 92, in encode_example sf.write(buffer, value["array"], value["sampling_rate"], format="wav") File "/opt/conda/envs/hf/lib/python3.8/site-packages/soundfile.py", line 313, in write channels = data.shape[1] IndexError: tuple index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.3 Plus: - SoundFile version: 0.10.3.post1 cc @lhoestq @polinaeterna
false
1,382,308,799
https://api.github.com/repos/huggingface/datasets/issues/5010
https://github.com/huggingface/datasets/pull/5010
5,010
Add deprecation warning to multilingual_librispeech dataset card
closed
1
2022-09-22T11:41:59
2022-09-23T12:04:37
2022-09-23T12:02:45
albertvillanova
[ "dataset contribution" ]
Besides the current deprecation warning in the script of `multilingual_librispeech`, this PR adds a deprecation warning to its dataset card as well. The format of the deprecation warning is aligned with the one in the library documentation when docstrings contain the `<Deprecated/>` tag. Related to: - #4060
true
1,381,194,067
https://api.github.com/repos/huggingface/datasets/issues/5009
https://github.com/huggingface/datasets/issues/5009
5,009
Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly
closed
8
2022-09-21T16:23:06
2022-09-29T13:07:29
2022-09-29T13:07:29
ykl7
[ "bug" ]
## Describe the bug I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files using my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I get an error (pasted below). Additionally, `dataset = datasets.load_dataset("json", data_dir="data/")` throws the same error. ## Steps to reproduce the bug ```python dataset = datasets.load_dataset('StonyBrookNLP/tellmewhy') ``` ## Expected results Successfully load the `StonyBrookNLP/tellmewhy` dataset. ## Actual results ``` Using custom data configuration StonyBrookNLP--tellmewhy-82712924092694ff Downloading and preparing dataset json/StonyBrookNLP--tellmewhy to /home/yklal95/.cache/huggingface/datasets/StonyBrookNLP___json/StonyBrookNLP--tellmewhy-82712924092694ff/0.0.0/a3e658c4731e59120d44081ac10bf85dc7e1388126b92338344ce9661907f253... Downloading data files: 100%|██████████████████████████████| 3/3 [00:00<00:00, 957.46it/s] Extracting data files: 100%|███████████████████████████████| 3/3 [00:00<00:00, 299.14it/s] Traceback (most recent call last): File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 17, in <module> main(args) File "/home/yklal95/tmw-generalization/src/load_datasets.py", line 11, in main dataset = datasets.load_dataset(args.dataset_name) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/load.py", line 1746, in load_dataset builder_instance.download_and_prepare( File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/builder.py", line 1277, in _prepare_split writer.write_table(table) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/arrow_writer.py", line 524, in write_table pa_table = table_cast(pa_table, self._schema) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 2005, in table_cast return cast_table_to_schema(table, schema) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1969, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1681, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1822, in cast_array_to_feature casted_values = _c(array.values, feature.feature) File 
"/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1853, in cast_array_to_feature return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1683, in wrapper return func(array, *args, **kwargs) File "/home/yklal95/anaconda3/envs/tmw-generalization/lib/python3.9/site-packages/datasets/table.py", line 1761, in array_cast raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") TypeError: Couldn't cast array of type int64 to null ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.15.0-121-generic-x86_64-with-glibc2.27 - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.5.0
false
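The traceback in issue 5009 above ends in a cast from `int64` to `null`, which typically happens when a column is entirely null in the first file the loader inspects, so its type is inferred as `null`. A hedged sketch of one way to sidestep that by declaring the schema up front; the column names and types below are placeholders, not the actual tellmewhy schema.
```python
from datasets import Features, Sequence, Value, load_dataset

# Placeholder schema: declare the troublesome column's real type instead of
# letting it be inferred as `null` from an all-null first split.
features = Features(
    {
        "question": Value("string"),
        "answer": Value("string"),
        "span_indices": Sequence(Value("int64")),  # hypothetical column that is all-null in one split
    }
)

dataset = load_dataset("json", data_dir="data/", features=features)
```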
1,381,090,903
https://api.github.com/repos/huggingface/datasets/issues/5008
https://github.com/huggingface/datasets/pull/5008
5,008
Re-apply input columns change
closed
1
2022-09-21T15:09:01
2022-09-22T13:57:36
2022-09-22T13:55:23
mariosasko
[]
Fixes the `filter` + `input_columns` combination, which is used in the `transformers` examples for instance. Revert #5006 (which in turn reverts #4971) Fix https://github.com/huggingface/datasets/issues/4858
true
1,381,007,607
https://api.github.com/repos/huggingface/datasets/issues/5007
https://github.com/huggingface/datasets/pull/5007
5,007
Add some note about running the transformers ci before a release
closed
1
2022-09-21T14:14:25
2022-09-22T10:16:14
2022-09-22T10:14:06
lhoestq
[]
null
true
1,380,968,395
https://api.github.com/repos/huggingface/datasets/issues/5006
https://github.com/huggingface/datasets/pull/5006
5,006
Revert input_columns change
closed
2
2022-09-21T13:49:20
2022-09-21T14:14:33
2022-09-21T14:11:57
lhoestq
[]
Revert https://github.com/huggingface/datasets/pull/4971 Fix https://github.com/huggingface/datasets/issues/5005
true
1,380,952,960
https://api.github.com/repos/huggingface/datasets/issues/5005
https://github.com/huggingface/datasets/issues/5005
5,005
Release 2.5.0 breaks transformers CI
closed
1
2022-09-21T13:39:19
2022-09-21T14:11:57
2022-09-21T14:11:57
albertvillanova
[ "bug" ]
## Describe the bug As reported by @lhoestq: > see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563 this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py](https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55250e7da/examples/pytorch/speech-pretraining/run_wav2vec2_pretraining_no_trainer.py#L482-L488)
false
1,380,860,606
https://api.github.com/repos/huggingface/datasets/issues/5004
https://github.com/huggingface/datasets/pull/5004
5,004
Remove license tag file and validation
closed
1
2022-09-21T12:35:14
2022-09-22T11:47:41
2022-09-22T11:45:46
albertvillanova
[]
As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub. Fix #4994. Related to: - #4926, which is removing all the validation from `datasets`
true
1,380,617,353
https://api.github.com/repos/huggingface/datasets/issues/5003
https://github.com/huggingface/datasets/pull/5003
5,003
Fix missing use_auth_token in streaming docstrings
closed
1
2022-09-21T09:27:03
2022-09-21T16:24:01
2022-09-21T16:20:59
albertvillanova
[]
This PR fixes docstrings: - adds the missing `use_auth_token` param - updates syntax of param types - adds params to docstrings without them - fixes return/yield types - fixes syntax
true
1,380,589,402
https://api.github.com/repos/huggingface/datasets/issues/5002
https://github.com/huggingface/datasets/issues/5002
5,002
Dataset Viewer issue for loubnabnl/humaneval-x
closed
2
2022-09-21T09:06:17
2022-09-21T11:49:49
2022-09-21T11:49:49
loubnabnl
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/ ### Description The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine) ### Owner Yes
false
1,379,844,820
https://api.github.com/repos/huggingface/datasets/issues/5001
https://github.com/huggingface/datasets/pull/5001
5,001
Support loading XML datasets
open
3
2022-09-20T18:42:58
2024-05-22T22:13:25
null
albertvillanova
[]
CC: @davanstrien
true
1,379,709,398
https://api.github.com/repos/huggingface/datasets/issues/5000
https://github.com/huggingface/datasets/issues/5000
5,000
Dataset Viewer issue for asapp/slue
closed
9
2022-09-20T16:45:45
2022-09-27T07:04:03
2022-09-21T07:24:07
fwu-asapp
[]
### Link https://huggingface.co/datasets/asapp/slue/viewer/ ### Description Hi, I wonder how to get the dataset viewer of our slue dataset to work. Best, Felix ### Owner Yes
false
1,379,610,030
https://api.github.com/repos/huggingface/datasets/issues/4999
https://github.com/huggingface/datasets/pull/4999
4,999
Add EmptyDatasetError
closed
1
2022-09-20T15:28:05
2022-09-21T12:23:43
2022-09-21T12:21:24
lhoestq
[]
examples: from the hub: ```python Traceback (most recent call last): File "playground/ttest.py", line 3, in <module> print(load_dataset("lhoestq/empty")) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset **config_kwargs, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder data_files=data_files, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1171, in dataset_module_factory raise e1 from None File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1162, in dataset_module_factory download_mode=download_mode, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 760, in get_module else get_data_patterns_in_dataset_repository(hfh_dataset_info, self.data_dir) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 678, in get_data_patterns_in_dataset_repository ) from None datasets.data_files.EmptyDatasetError: The dataset repository at 'lhoestq/empty' doesn't contain any data file. ``` from local directory: ```python Traceback (most recent call last): File "playground/ttest.py", line 3, in <module> print(load_dataset("playground/empty")) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset **config_kwargs, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1458, in load_dataset_builder data_files=data_files, File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1107, in dataset_module_factory path, data_dir=data_dir, data_files=data_files, download_mode=download_mode File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 625, in get_module else get_data_patterns_locally(base_path) File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/data_files.py", line 460, in get_data_patterns_locally raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data file") from None datasets.data_files.EmptyDatasetError: The directory at playground/empty doesn't contain any data file ``` Close https://github.com/huggingface/datasets/issues/4995
true
1,379,466,717
https://api.github.com/repos/huggingface/datasets/issues/4998
https://github.com/huggingface/datasets/pull/4998
4,998
Don't add a tag on the Hub on release
closed
1
2022-09-20T13:54:57
2022-09-20T14:11:46
2022-09-20T14:08:54
lhoestq
[]
Datasets with no namespace on the Hub have tags to redirect to the version of datasets where they come from. I'm about to remove them all because I think it looks bad/unexpected in the UI and it's not actually useful. Therefore I'm also disabling tagging. Note that the CI job will be completely removed in https://github.com/huggingface/datasets/pull/4974 anyway.
true
1,379,430,711
https://api.github.com/repos/huggingface/datasets/issues/4997
https://github.com/huggingface/datasets/pull/4997
4,997
Add support for parsing JSON files in array form
closed
1
2022-09-20T13:31:26
2022-09-20T15:42:40
2022-09-20T15:40:06
mariosasko
[]
Support parsing JSON files in the array form (top-level object is an array). For simplicity, `json.load` is used for decoding. This means the entire file is loaded into memory. If requested, we can optimize this by introducing a param similar to `lines` in [`pandas.read_json`](https://pandas.pydata.org/docs/reference/api/pandas.read_json.html), which, if set to `True`, would allow us to read in chunks. Fixes https://github.com/huggingface/datasets/issues/4963
true
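For context on PR 4997 above, a small sketch of the two JSON layouts involved and of the `pandas.read_json` `lines` switch the PR description refers to; the file names are illustrative.
```python
import json

import pandas as pd

# Array form: the top-level object is a list of records.
with open("array_form.json", "w") as f:
    json.dump([{"a": 1, "b": "x"}, {"a": 2, "b": "y"}], f)

# JSON Lines form: one record per line.
with open("lines_form.jsonl", "w") as f:
    f.write('{"a": 1, "b": "x"}\n{"a": 2, "b": "y"}\n')

df_array = pd.read_json("array_form.json")               # whole file decoded at once
df_lines = pd.read_json("lines_form.jsonl", lines=True)   # can also be read in chunks via chunksize=
```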
1,379,345,161
https://api.github.com/repos/huggingface/datasets/issues/4996
https://github.com/huggingface/datasets/issues/4996
4,996
Dataset Viewer issue for Jean-Baptiste/wikiner_fr
closed
2
2022-09-20T12:32:07
2022-09-27T12:35:44
2022-09-27T12:35:44
severo
[]
### Link https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr ### Description ``` Error code: StreamingRowsError Exception: FileNotFoundError Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json' Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/responses/first_rows.py", line 337, in get_first_rows_response rows = get_rows(dataset, config, split, streaming=True, rows_max_number=rows_max_number, hf_token=hf_token) File "/src/services/worker/src/worker/utils.py", line 123, in decorator return func(*args, **kwargs) File "/src/services/worker/src/worker/responses/first_rows.py", line 77, in get_rows rows_plus_one = list(itertools.islice(ds, rows_max_number + 1)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 718, in __iter__ for key, example in self._iter(): File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 708, in _iter yield from ex_iterable File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 112, in __iter__ yield from self.generate_examples_fn(**self.kwargs) File "/tmp/modules-cache/datasets_modules/datasets/Jean-Baptiste--wikiner_fr/683a580ba6ec769d508f7dfc603a651667b0ed3817b1ae5bfd45f97cc024923f/wikiner_fr.py", line 165, in _generate_examples dataset = Dataset.load_from_disk(filepath) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1210, in load_from_disk with open(Path(dataset_path, config.DATASET_STATE_JSON_FILENAME).as_posix(), encoding="utf-8") as state_file: FileNotFoundError: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json' ``` Is it an error with the dataset script, or the data itself, @huggingface/datasets? https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr/tree/main ### Owner No
false
1,379,108,482
https://api.github.com/repos/huggingface/datasets/issues/4995
https://github.com/huggingface/datasets/issues/4995
4,995
Get a specific Exception when the dataset has no data
closed
0
2022-09-20T09:31:59
2022-09-21T12:21:25
2022-09-21T12:21:25
severo
[ "enhancement", "dataset-viewer" ]
In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files. In that case, instead of showing a complex traceback, we want to show a call to action to help the user upload data. To do that, it would be very helpful to know for sure that the repository is missing any (supported) data files. It could be done by raising a custom exception, for example, `NoDataError`.
false
1,379,084,015
https://api.github.com/repos/huggingface/datasets/issues/4994
https://github.com/huggingface/datasets/issues/4994
4,994
delete the hardcoded license list in `datasets`
closed
0
2022-09-20T09:14:41
2022-09-22T11:45:47
2022-09-22T11:45:47
julien-c
[]
> Feel free to delete the license list in `datasets` [...] > > Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.) _Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_ > [...], in my opinion we can just delete this file from `datasets`, the validation is happening hub-side anyways now? _Originally posted by @julien-c in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238390659_
false
1,379,044,435
https://api.github.com/repos/huggingface/datasets/issues/4993
https://github.com/huggingface/datasets/pull/4993
4,993
fix: avoid casting tuples after Dataset.map
closed
1
2022-09-20T08:45:16
2022-09-20T16:11:27
2022-09-20T13:08:29
szmoro
[]
This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
true
1,379,031,842
https://api.github.com/repos/huggingface/datasets/issues/4992
https://github.com/huggingface/datasets/pull/4992
4,992
Support streaming iwslt2017 dataset
closed
1
2022-09-20T08:35:41
2022-09-20T09:27:55
2022-09-20T09:15:24
albertvillanova
[]
Support streaming iwslt2017 dataset. Once this PR is merged: - [x] Remove old ".tgz" data files from the Hub.
true
1,378,898,752
https://api.github.com/repos/huggingface/datasets/issues/4991
https://github.com/huggingface/datasets/pull/4991
4,991
Fix missing tags in dataset cards
closed
1
2022-09-20T06:42:07
2022-09-22T12:25:32
2022-09-20T07:37:30
albertvillanova
[]
Fix missing tags in dataset cards: - aeslc - empathetic_dialogues - event2Mind - gap - iwslt2017 - newsgroup - qa4mre - scicite This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921 - #4931 - #4979
true
1,378,120,806
https://api.github.com/repos/huggingface/datasets/issues/4990
https://github.com/huggingface/datasets/issues/4990
4,990
"no-token" is passed to `huggingface_hub` when token is `None`
closed
6
2022-09-19T15:14:40
2022-09-30T09:16:00
2022-09-30T09:16:00
Wauplin
[ "bug" ]
## Describe the bug In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed. What is the purpose of it? If there is no real one, I would prefer the `None` value to be sent directly, to be handled by `huggingface_hub`. I feel that this works here only because we assume the token will never be validated. https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753 https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121 ## Expected results Pass `token=None` to `huggingface_hub`. ## Actual results `token="no-token"` is passed. ## Environment info `huggingface_hub v0.10.0dev`
false
1,376,832,233
https://api.github.com/repos/huggingface/datasets/issues/4989
https://github.com/huggingface/datasets/issues/4989
4,989
Running add_column() seems to corrupt existing sequence-type column info
closed
1
2022-09-17T17:42:05
2022-09-19T12:54:54
2022-09-19T12:54:54
derek-rocheleau
[ "bug" ]
I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like: ds = load_dataset(...) df = ds.to_pandas() df: foo_0 | foo_1 | foo_2 | foo_3 0.0 | 1.0 | 2.0 | 3.0 If I run .add_column("new_col", data) on the dataset, and then .to_pandas() on the resulting new dataset, the resulting dataframe contains only 2 columns - foo, new_col. The values in column foo are lists of length 4, the 4 elements that should have been split into separate columns. Dataframe 1st row would be: ds = load_dataset(...) new_ds = ds.add_column("new_col", data) df = new_ds.to_pandas() df: foo | new_col [0.0, 1.0, 2.0, 3.0] | new_val I've explored the 2 datasets in a debugger and haven't noticed any changes to any attributes related to the foo column, but I can't determine why the dataframes are so different.
false
1,376,096,584
https://api.github.com/repos/huggingface/datasets/issues/4988
https://github.com/huggingface/datasets/issues/4988
4,988
Add `IterableDataset.from_generator` to the API
closed
3
2022-09-16T15:19:41
2022-10-05T12:10:49
2022-10-05T12:10:49
mariosasko
[ "enhancement", "good first issue" ]
We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator. cc @lhoestq
false
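A sketch of how the API requested in issue 4988 above could be used, assuming it mirrors the `Dataset.from_generator` signature; this is the requested interface as described in the issue, not necessarily the final one.
```python
from datasets import IterableDataset

def gen(shards):
    # Lazily yield examples; nothing is written to disk.
    for shard in shards:
        for i in range(3):
            yield {"shard": shard, "i": i}

# Hypothetical mirror of Dataset.from_generator, including gen_kwargs.
ids = IterableDataset.from_generator(gen, gen_kwargs={"shards": ["a", "b"]})
for example in ids:
    print(example)
```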
1,376,006,477
https://api.github.com/repos/huggingface/datasets/issues/4987
https://github.com/huggingface/datasets/pull/4987
4,987
Embed image/audio data in dl_and_prepare parquet
closed
1
2022-09-16T14:09:27
2022-09-16T16:24:47
2022-09-16T16:22:35
lhoestq
[]
Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file. Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files.
true
1,375,895,035
https://api.github.com/repos/huggingface/datasets/issues/4986
https://github.com/huggingface/datasets/pull/4986
4,986
[doc] Fix broken snippet that had too many quotes
closed
2
2022-09-16T12:41:07
2022-09-16T22:12:21
2022-09-16T17:32:14
tomaarsen
[]
Hello! ### Pull request overview * Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes ### Details The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map This screenshot shows the issue, there is a quote too many, causing the snippet to be colored incorrectly: ![image](https://user-images.githubusercontent.com/37621491/190640627-f7587362-0e44-4464-a5d1-a0b98df6986f.png) The change speaks for itself. Thank you for the detailed documentation, by the way. - Tom Aarsen
true
1,375,807,768
https://api.github.com/repos/huggingface/datasets/issues/4985
https://github.com/huggingface/datasets/pull/4985
4,985
Prefer split patterns from directories over split patterns from filenames
closed
4
2022-09-16T11:20:40
2022-11-02T11:54:28
2022-09-29T08:07:49
polinaeterna
[]
related to https://github.com/huggingface/datasets/issues/4895
true
1,375,690,330
https://api.github.com/repos/huggingface/datasets/issues/4984
https://github.com/huggingface/datasets/pull/4984
4,984
docs: ✏️ add links to the Datasets API
closed
2
2022-09-16T09:34:12
2022-09-16T13:10:14
2022-09-16T13:07:33
severo
[]
I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs. I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without being too much. cc @lhoestq @julien-c @albertvillanova @stevhliu.
true
1,375,667,654
https://api.github.com/repos/huggingface/datasets/issues/4983
https://github.com/huggingface/datasets/issues/4983
4,983
How to convert torch.utils.data.Dataset to huggingface dataset?
closed
15
2022-09-16T09:15:10
2023-12-14T20:54:15
2022-09-20T11:23:43
DEROOCE
[ "enhancement" ]
I looked through the Hugging Face datasets docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to a Hugging Face dataset. However, there is a way to convert a Hugging Face dataset to `torch.utils.data.Dataset`, like below: ```python from datasets import Dataset data = [[1, 2],[3, 4]] ds = Dataset.from_dict({"data": data}) ds = ds.with_format("torch") ds[0] ds[:2] ``` So is there something I missed, or is there really no function to convert `torch.utils.data.Dataset` to a Hugging Face dataset? If so, is there any way to do this conversion? Thanks.
false
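A minimal sketch of one way to go in the direction asked about in issue 4983 above, assuming a `datasets` version that provides `Dataset.from_generator` and a torch dataset whose items are dicts (or easily converted to dicts); the toy torch dataset is illustrative.
```python
import torch
from datasets import Dataset

class ToyTorchDataset(torch.utils.data.Dataset):
    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return {"x": float(idx), "y": idx % 2}

torch_ds = ToyTorchDataset()

def gen():
    # Yield one dict per example so `datasets` can infer the features.
    for i in range(len(torch_ds)):
        yield torch_ds[i]

hf_ds = Dataset.from_generator(gen)
print(hf_ds[0])  # {'x': 0.0, 'y': 0}
```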
1,375,604,693
https://api.github.com/repos/huggingface/datasets/issues/4982
https://github.com/huggingface/datasets/issues/4982
4,982
Create dataset_infos.json with VALIDATION and TEST splits
closed
3
2022-09-16T08:21:19
2022-09-28T07:59:39
2022-09-28T07:59:39
skalinin
[ "bug" ]
The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569). > When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error: > ValueError: Unknown split "test". Should be one of ['train']. > > The data_infos.json is created perfectly fine when I use only one split - datasets.Split.TRAIN > > You can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch) I tried to clear the cache folder, than I got an another error. I run: ``` git clone https://huggingface.co/datasets/sberbank-ai/Peter cd Peter git checkout add_splits # switch to a add_splits branch rm dataset_infos.json # remove local dataset_infos.json rm -r ~/.cache/huggingface # remove cached dataset_infos.json datasets-cli test Peter.py --save_infos --all_configs # trying to create new dataset_infos.json ``` The error message: ``` Using custom data configuration default Testing builder 'default' (1/1) Downloading and preparing dataset peter/default to /Users/kalinin/.cache/huggingface/datasets/peter/default/0.0.0/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d... Downloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 5160.63it/s] Extracting data files: 0%| | 0/4 [00:00<?, ?it/s]Traceback (most recent call last): File "/usr/local/bin/datasets-cli", line 8, in <module> sys.exit(main()) File "/usr/local/lib/python3.9/site-packages/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/usr/local/lib/python3.9/site-packages/datasets/commands/test.py", line 137, in run builder.download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 1227, in _download_and_prepare super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) File "/usr/local/lib/python3.9/site-packages/datasets/builder.py", line 771, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/kalinin/.cache/huggingface/modules/datasets_modules/datasets/Peter/ef579519e140d6a40df2555996f26165f04c47557d7373709c8d7e7b4fd7465d/Peter.py", line 23, in _split_generators data_files = dl_manager.download_and_extract(_URLS) File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 431, in download_and_extract return self.extract(self.download(url_or_urls)) File "/usr/local/lib/python3.9/site-packages/datasets/download/download_manager.py", line 403, in extract extracted_paths = map_nested( File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 393, in map_nested mapped = [ File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 394, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "/usr/local/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 330, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 213, in cached_path output_path = ExtractManager(cache_dir=download_config.cache_dir).extract( File "/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 46, in extract self.extractor.extract(input_path, output_path, extractor_format) File 
"/usr/local/lib/python3.9/site-packages/datasets/utils/extract.py", line 263, in extract with FileLock(lock_path): File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 399, in __init__ max_filename_length = os.statvfs(os.path.dirname(lock_file)).f_namemax FileNotFoundError: [Errno 2] No such file or directory: '' Exception ignored in: <function BaseFileLock.__del__ at 0x11caeec10> Traceback (most recent call last): File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 328, in __del__ self.release(force=True) File "/usr/local/lib/python3.9/site-packages/datasets/utils/filelock.py", line 303, in release with self._thread_lock: AttributeError: 'UnixFileLock' object has no attribute '_thread_lock' Extracting data files: 0%| | 0/4 [00:00<?, ?it/s] ``` Can you help me please? ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 9.0.0 - Pandas version: 1.2.4
false
1,375,086,773
https://api.github.com/repos/huggingface/datasets/issues/4981
https://github.com/huggingface/datasets/issues/4981
4,981
Can't create a dataset with `float16` features
open
8
2022-09-15T21:03:24
2025-06-12T11:47:42
null
dconathan
[ "bug" ]
## Describe the bug I can't create a dataset with `float16` features. I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same exact error. The bug seems to arise from `datasets` casting the values to `double` and then `pyarrow` doesn't know how to convert those back to `float16`... does that sound right? Is there a way to bypass this since it's not necessary in the `numpy` and `torch` cases? Thanks! ## Steps to reproduce the bug All of the following raise the following error with the same exact (as far as I can tell) traceback: ```python ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ```python from datasets import Dataset, Features, Value Dataset.from_dict({"x": [0.0, 1.0, 2.0]}, features=Features(x=Value("float16"))) import numpy as np Dataset.from_dict({"x": np.arange(3, dtype=np.float16)}, features=Features(x=Value("float16"))) import torch Dataset.from_dict({"x": torch.arange(3).to(torch.float16)}, features=Features(x=Value("float16"))) ``` ## Expected results A dataset with `float16` features is successfully created. ## Actual results ```python --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) Cell In [14], line 1 ----> 1 Dataset.from_dict({"x": [1.0, 2.0, 3.0]}, features=Features(x=Value("float16"))) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_dataset.py:870, in Dataset.from_dict(cls, mapping, features, info, split) 865 mapping = features.encode_batch(mapping) 866 mapping = { 867 col: OptimizedTypedSequence(data, type=features[col] if features is not None else None, col=col) 868 for col, data in mapping.items() 869 } --> 870 pa_table = InMemoryTable.from_pydict(mapping=mapping) 871 if info.features is None: 872 info.features = Features({col: ts.get_inferred_type() for col, ts in mapping.items()}) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:750, in InMemoryTable.from_pydict(cls, *args, **kwargs) 734 @classmethod 735 def from_pydict(cls, *args, **kwargs): 736 """ 737 Construct a Table from Arrow arrays or columns 738 (...) 748 :class:`datasets.table.Table`: 749 """ --> 750 return cls(pa.Table.from_pydict(*args, **kwargs)) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:3648, in pyarrow.lib.Table.from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/table.pxi:5174, in pyarrow.lib._from_pydict() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:343, in pyarrow.lib.asarray() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:231, in pyarrow.lib.array() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:110, in pyarrow.lib._handle_arrow_array_protocol() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py:197, in TypedSequence.__arrow_array__(self, type) 192 # otherwise we can finally use the user's type 193 elif type is not None: 194 # We use cast_array_to_feature to support casting to custom types like Audio and Image 195 # Also, when trying type "string", we don't want to convert integers or floats to "string". 196 # We only do it if trying_type is False - since this is what the user asks for. 
--> 197 out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type) 198 return out 199 except (TypeError, pa.lib.ArrowInvalid) as e: # handle type errors and overflows File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1853, in cast_array_to_feature(array, feature, allow_number_to_str) 1851 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str) 1852 elif not isinstance(feature, (Sequence, dict, list, tuple)): -> 1853 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str) 1854 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1683, in _wrap_for_chunked_arrays.<locals>.wrapper(array, *args, **kwargs) 1681 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) 1682 else: -> 1683 return func(array, *args, **kwargs) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/datasets/table.py:1762, in array_cast(array, pa_type, allow_number_to_str) 1760 if pa.types.is_null(pa_type) and not pa.types.is_null(array.type): 1761 raise TypeError(f"Couldn't cast array of type {array.type} to {pa_type}") -> 1762 return array.cast(pa_type) 1763 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{pa_type}") File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/array.pxi:919, in pyarrow.lib.Array.cast() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/compute.py:389, in cast(arr, target_type, safe, options) 387 else: 388 options = CastOptions.safe(target_type) --> 389 return call_function("cast", [arr], options) File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:560, in pyarrow._compute.call_function() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/_compute.pyx:355, in pyarrow._compute.Function.call() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:144, in pyarrow.lib.pyarrow_internal_check_status() File ~/scratch/scratch-env-39/.venv/lib/python3.9/site-packages/pyarrow/error.pxi:121, in pyarrow.lib.check_status() ArrowNotImplementedError: Unsupported cast from double to halffloat using function cast_half_float ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.9.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
false
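As context for the failure above, a minimal pyarrow-level sketch (assuming pyarrow 9.x behaviour, not taken from the report): building a `halffloat` array directly from `float16` NumPy data works, while the `double` to `halffloat` cast that `datasets` triggers when encoding Python floats has no kernel.

```python
import numpy as np
import pyarrow as pa

# Building a halffloat array straight from float16 NumPy data works...
arr = pa.array(np.arange(3, dtype=np.float16))
print(arr.type)  # halffloat

# ...but the double -> halffloat cast that datasets performs is not implemented:
try:
    pa.array([0.0, 1.0, 2.0]).cast(pa.float16())
except pa.ArrowNotImplementedError as e:
    print(e)  # Unsupported cast from double to halffloat using function cast_half_float
```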
1,374,868,083
https://api.github.com/repos/huggingface/datasets/issues/4980
https://github.com/huggingface/datasets/issues/4980
4,980
Make `pyarrow` optional
closed
3
2022-09-15T17:38:03
2022-09-16T17:23:47
2022-09-16T17:23:47
KOLANICH
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Is `pyarrow` really needed for every dataset? **Describe the solution you'd like** It is made optional. **Describe alternatives you've considered** Likely, no.
false
1,374,820,758
https://api.github.com/repos/huggingface/datasets/issues/4979
https://github.com/huggingface/datasets/pull/4979
4,979
Fix missing tags in dataset cards
closed
1
2022-09-15T16:51:03
2022-09-22T12:37:55
2022-09-15T17:12:09
albertvillanova
[]
Fix missing tags in dataset cards: - amazon_us_reviews - art - discofuse - indic_glue - ubuntu_dialogs_corpus This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921 - #4931
true
1,374,271,504
https://api.github.com/repos/huggingface/datasets/issues/4978
https://github.com/huggingface/datasets/pull/4978
4,978
Update IndicGLUE download links
closed
1
2022-09-15T10:05:57
2022-09-15T22:00:20
2022-09-15T21:57:34
sumanthd17
[]
null
true
1,372,962,157
https://api.github.com/repos/huggingface/datasets/issues/4977
https://github.com/huggingface/datasets/issues/4977
4,977
Providing dataset size
open
3
2022-09-14T13:09:27
2022-09-15T16:03:58
null
sashavor
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded). **Describe the solution you'd like** Auto-populating the downloaded dataset size on the dataset page would be really useful, including that of each split (when there are some). **Describe alternatives you've considered** People should be adding this to dataset cards, but I don't think that is systematically the case :slightly_smiling_face: **Additional context** Mentioned to @lhoestq
false
1,372,322,382
https://api.github.com/repos/huggingface/datasets/issues/4976
https://github.com/huggingface/datasets/issues/4976
4,976
Hope to adapt Python3.9 as soon as possible
open
3
2022-09-14T04:42:22
2022-09-26T16:32:35
null
RedHeartSecretMan
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** A clear and concise description of what the problem is. **Describe the solution you'd like** A clear and concise description of what you want to happen. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. **Additional context** Add any other context about the feature request here.
false
1,371,703,691
https://api.github.com/repos/huggingface/datasets/issues/4975
https://github.com/huggingface/datasets/pull/4975
4,975
Add `fn_kwargs` param to `IterableDataset.map`
closed
4
2022-09-13T16:19:05
2023-05-05T16:53:43
2022-09-13T16:45:34
mariosasko
[]
Add the `fn_kwargs` parameter to `IterableDataset.map`. ("Resolves" https://discuss.huggingface.co/t/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free/22780/3)
true
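A minimal usage sketch of the new parameter; the dataset name, config and column here are illustrative and not part of the PR.

```python
from datasets import load_dataset

# Any streaming dataset with a text column would do; "oscar" is just an example.
ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

def add_prefix(example, prefix):
    return {"text": prefix + example["text"]}

# fn_kwargs forwards extra keyword arguments to the map function
ds = ds.map(add_prefix, fn_kwargs={"prefix": ">> "})
print(next(iter(ds))["text"][:20])
```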
1,371,682,020
https://api.github.com/repos/huggingface/datasets/issues/4974
https://github.com/huggingface/datasets/pull/4974
4,974
[GH->HF] Part 2: Remove all dataset scripts from github
closed
6
2022-09-13T16:01:12
2022-10-03T17:09:39
2022-10-03T17:07:32
lhoestq
[]
Now that all the datasets live on the Hub we can remove the /datasets directory that contains all the dataset scripts of this repository - [x] Needs https://github.com/huggingface/datasets/pull/4973 to be merged first - [x] and PR to be enabled on the Hub for non-namespaced datasets
true
1,371,600,074
https://api.github.com/repos/huggingface/datasets/issues/4973
https://github.com/huggingface/datasets/pull/4973
4,973
[GH->HF] Load datasets from the Hub
closed
2
2022-09-13T15:01:41
2023-09-24T10:06:02
2022-09-15T15:24:26
lhoestq
[]
Currently datasets with no namespace (e.g. squad, glue) are loaded from github. In this PR I changed this logic to use the Hugging Face Hub instead. This is the first step in removing all the dataset scripts in this repository related to discussions in https://github.com/huggingface/datasets/pull/4059 (I should have continued from this PR actually)
true
1,371,443,306
https://api.github.com/repos/huggingface/datasets/issues/4972
https://github.com/huggingface/datasets/pull/4972
4,972
Fix map batched with torch output
closed
1
2022-09-13T13:16:34
2022-09-20T09:42:02
2022-09-20T09:39:33
lhoestq
[]
Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2 Currently it fails if one uses batched `map` and the map function returns a torch tensor. I fixed it for torch, tf, jax and pandas series.
true
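A minimal sketch of the pattern this PR fixes, a batched `map` function returning a torch tensor; the toy data is illustrative.

```python
import torch
from datasets import Dataset

ds = Dataset.from_dict({"x": [1.0, 2.0, 3.0]})

# Before this fix, returning a torch tensor from a batched map function raised a TypeError
ds = ds.map(lambda batch: {"y": torch.tensor(batch["x"]) * 2}, batched=True)
print(ds["y"])  # [2.0, 4.0, 6.0]
```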
1,370,319,516
https://api.github.com/repos/huggingface/datasets/issues/4971
https://github.com/huggingface/datasets/pull/4971
4,971
Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified
closed
1
2022-09-12T18:08:24
2022-09-13T13:51:08
2022-09-13T13:48:45
mariosasko
[]
Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform. This makes the behavior inconsistent with `IterableDataset.map`. (It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246) Fix https://github.com/huggingface/datasets/issues/4858
true
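A minimal sketch of the intended behaviour after this change; the toy columns are illustrative.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [10, 20]})

# Only the values of column "a" are passed to the function...
ds2 = ds.map(lambda a: {"a_plus_one": a + 1}, input_columns=["a"])

# ...but column "b" should be preserved in the output, matching IterableDataset.map
print(ds2.column_names)  # ["a", "b", "a_plus_one"]
```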
1,369,433,074
https://api.github.com/repos/huggingface/datasets/issues/4970
https://github.com/huggingface/datasets/pull/4970
4,970
Support streaming nli_tr dataset
closed
1
2022-09-12T07:48:45
2022-09-12T08:45:04
2022-09-12T08:43:08
albertvillanova
[]
Support streaming nli_tr dataset. This PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding. Fix #3186.
true
1,369,334,740
https://api.github.com/repos/huggingface/datasets/issues/4969
https://github.com/huggingface/datasets/pull/4969
4,969
Fix data URL and metadata of vivos dataset
closed
1
2022-09-12T06:12:34
2022-09-12T07:16:15
2022-09-12T07:14:19
albertvillanova
[]
After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130 This PR updates their data URL and some metadata (homepage, citation and license). Fix #4936.
true
1,369,312,877
https://api.github.com/repos/huggingface/datasets/issues/4968
https://github.com/huggingface/datasets/pull/4968
4,968
Support streaming compguesswhat dataset
closed
1
2022-09-12T05:42:24
2022-09-12T08:00:06
2022-09-12T07:58:06
albertvillanova
[]
Support streaming `compguesswhat` dataset. Fix #3191.
true
1,369,092,452
https://api.github.com/repos/huggingface/datasets/issues/4967
https://github.com/huggingface/datasets/pull/4967
4,967
Strip "/" in local dataset path to avoid empty dataset name error
closed
2
2022-09-11T23:09:16
2022-09-29T10:46:21
2022-09-12T15:30:38
apohllo
[]
null
true
1,368,661,002
https://api.github.com/repos/huggingface/datasets/issues/4965
https://github.com/huggingface/datasets/issues/4965
4,965
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback()
closed
6
2022-09-10T15:55:49
2024-03-21T17:25:53
2023-07-21T14:45:50
hoangtnm
[ "bug" ]
## Describe the bug I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work. ## Steps to reproduce the bug ```python import datasets dataset = load_dataset("csv", data_files="./train.csv")["train"] dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / x["audio"])}) dataset = dataset.cast_column("audio", Audio()) dataset[0] ``` ## Expected results ``` {'audio': {'bytes': None, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav'}, 'english_transcription': 'I would like to set up a joint account with my partner', 'intent_class': 11, 'lang_id': 4, 'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~JOINT_ACCOUNT/602ba55abb1e6d0fbce92065.wav', 'transcription': 'I would like to set up a joint account with my partner'} ``` ## Actual results ````--------------------------------------------------------------------------- MemoryError Traceback (most recent call last) Input In [6], in <cell line: 1>() ----> 1 dataset[0] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2165, in Dataset.__getitem__(self, key) 2163 def __getitem__(self, key): # noqa: F811 2164 """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools).""" -> 2165 return self._getitem( 2166 key, 2167 ) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/arrow_dataset.py:2150, in Dataset._getitem(self, key, decoded, **kwargs) 2148 formatter = get_formatter(format_type, features=self.features, decoded=decoded, **format_kwargs) 2149 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None) -> 2150 formatted_output = format_table( 2151 pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns 2152 ) 2153 return formatted_output File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:532, in format_table(table, key, formatter, format_columns, output_all_columns) 530 python_formatter = PythonFormatter(features=None) 531 if format_columns is None: --> 532 return formatter(pa_table, query_type=query_type) 533 elif query_type == "column": 534 if key in format_columns: File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:281, in Formatter.__call__(self, pa_table, query_type) 279 def __call__(self, pa_table: pa.Table, query_type: str) -> Union[RowFormat, ColumnFormat, BatchFormat]: 280 if query_type == "row": --> 281 return self.format_row(pa_table) 282 elif query_type == "column": 283 return self.format_column(pa_table) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:312, in PythonFormatter.format_row(self, pa_table) 310 row = self.python_arrow_extractor().extract_row(pa_table) 311 if self.decoded: --> 312 row = self.python_features_decoder.decode_row(row) 313 return row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/formatting/formatting.py:221, in PythonFeaturesDecoder.decode_row(self, row) 220 def decode_row(self, row: dict) -> dict: --> 221 return self.features.decode_example(row) if self.features else row File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1647, in Features.decode_example(self, example, token_per_repo_id) 
1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ -> 1647 return { 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1648, in <dictcomp>(.0) 1634 def decode_example(self, example: dict, token_per_repo_id: Optional[Dict[str, Union[str, bool, None]]] = None): 1635 """Decode example with custom feature decoding. 1636 1637 Args: (...) 1644 :obj:`dict[str, Any]` 1645 """ 1647 return { -> 1648 column_name: decode_nested_example(feature, value, token_per_repo_id=token_per_repo_id) 1649 if self._column_requires_decoding[column_name] 1650 else value 1651 for column_name, (feature, value) in zip_dict( 1652 {key: value for key, value in self.items() if key in example}, example 1653 ) 1654 } File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/features.py:1260, in decode_nested_example(schema, obj, token_per_repo_id) 1257 # Object with special decoding: 1258 elif isinstance(schema, (Audio, Image)): 1259 # we pass the token to read and decode files from private repositories in streaming mode -> 1260 return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None 1261 return obj File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:156, in Audio.decode_example(self, value, token_per_repo_id) 154 array, sampling_rate = self._decode_non_mp3_file_like(file) 155 else: --> 156 array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id) 157 return {"path": path, "array": array, "sampling_rate": sampling_rate} File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/datasets/features/audio.py:257, in Audio._decode_non_mp3_path_like(self, path, format, token_per_repo_id) 254 use_auth_token = None 256 with xopen(path, "rb", use_auth_token=use_auth_token) as f: --> 257 array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono) 258 return array, sampling_rate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/util/decorators.py:88, in deprecate_positional_args.<locals>._inner_deprecate_positional_args.<locals>.inner_f(*args, **kwargs) 86 extra_args = len(args) - len(all_args) 87 if extra_args <= 0: ---> 88 return f(*args, **kwargs) 90 # extra_args > 0 91 args_msg = [ 92 "{}={}".format(name, arg) 93 for name, arg in zip(kwonly_args[:extra_args], args[-extra_args:]) 94 ] File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:164, in load(path, sr, mono, offset, duration, dtype, res_type) 161 else: 162 # Otherwise try soundfile first, and then fall back if necessary 163 try: --> 164 y, sr_native = __soundfile_load(path, offset, duration, dtype) 166 except RuntimeError as exc: 167 # If soundfile failed, try audioread instead 168 if isinstance(path, (str, pathlib.PurePath)): File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/librosa/core/audio.py:195, in __soundfile_load(path, offset, duration, dtype) 192 context = path 193 else: 194 # Otherwise, create the soundfile object --> 195 context = sf.SoundFile(path) 197 with context 
as sf_desc: 198 sr_native = sf_desc.samplerate File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:629, in SoundFile.__init__(self, file, mode, samplerate, channels, subtype, endian, format, closefd) 626 self._mode = mode 627 self._info = _create_info_struct(file, mode, samplerate, channels, 628 format, subtype, endian) --> 629 self._file = self._open(file, mode_int, closefd) 630 if set(mode).issuperset('r+') and self.seekable(): 631 # Move write position to 0 (like in Python file objects) 632 self.seek(0) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1179, in SoundFile._open(self, file, mode_int, closefd) 1177 file_ptr = _snd.sf_open_fd(file, mode_int, self._info, closefd) 1178 elif _has_virtual_io_attrs(file, mode_int): -> 1179 file_ptr = _snd.sf_open_virtual(self._init_virtual_io(file), 1180 mode_int, self._info, _ffi.NULL) 1181 else: 1182 raise TypeError("Invalid file: {0!r}".format(self.name)) File ~/miniconda3/envs/rodan/lib/python3.8/site-packages/soundfile.py:1197, in SoundFile._init_virtual_io(self, file) 1194 def _init_virtual_io(self, file): 1195 """Initialize callback functions for sf_open_virtual().""" 1196 @_ffi.callback("sf_vio_get_filelen") -> 1197 def vio_get_filelen(user_data): 1198 curr = file.tell() 1199 file.seek(0, SEEK_END) MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks ``` ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
false
1,368,617,322
https://api.github.com/repos/huggingface/datasets/issues/4964
https://github.com/huggingface/datasets/issues/4964
4,964
Column of arrays (2D+) are using unreasonably high memory
open
10
2022-09-10T13:07:22
2022-09-22T18:29:22
null
vigsterkr
[ "bug" ]
## Describe the bug When trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating depending on how you create it, see code below) will cause more than 10 fold of memory usage. ## Steps to reproduce the bug ```python from datasets import Dataset, Features, Array2D, Array3D import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}, features=Features({column_name: Array3D(shape=array_shape, dtype="float64")})) ``` the code above will use about 10Gb of RAM while constructing the `dataset` object. The code below will use roughly the same amount of memory (and time) when trying to actually access the data itself of that column. ```python from datasets import Dataset import numpy as np column_name = "a" array_shape = (64, 64, 3) data = np.random.random((10000,) + array_shape) dataset = Dataset.from_dict({column_name: data}) dataset[column_name] ``` ## Expected results Some memory overhead, but not like as it is now and certainly not an overhead of such runtime that is currently happening. ## Actual results Enormous memory- and runtime overhead. ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.5.1-arm64-arm-64bit - Python version: 3.8.13 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
false
1,368,201,188
https://api.github.com/repos/huggingface/datasets/issues/4963
https://github.com/huggingface/datasets/issues/4963
4,963
Dataset without script does not support regular JSON data file
closed
1
2022-09-09T18:45:33
2022-09-20T15:40:07
2022-09-20T15:40:07
julien-c
[]
### Link https://huggingface.co/datasets/julien-c/label-studio-my-dogs ### Description <img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png"> ### Owner Yes
false
1,368,155,365
https://api.github.com/repos/huggingface/datasets/issues/4962
https://github.com/huggingface/datasets/pull/4962
4,962
Update setup.py
closed
2
2022-09-09T17:57:56
2022-09-12T14:33:04
2022-09-12T14:33:04
DCNemesis
[]
exclude broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961)
true
1,368,124,033
https://api.github.com/repos/huggingface/datasets/issues/4961
https://github.com/huggingface/datasets/issues/4961
4,961
fsspec 2022.8.2 breaks xopen in streaming mode
closed
6
2022-09-09T17:26:55
2022-09-12T17:45:50
2022-09-12T14:32:05
DCNemesis
[ "bug" ]
## Describe the bug When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable. ## Steps to reproduce the bug ```python import datasets data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True) ``` ## Expected results Dataset should load as iterator. ## Actual results ``` [/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1737 # Return iterable dataset in case of streaming 1738 if streaming: -> 1739 return builder_instance.as_streaming_dataset(split=split) 1740 1741 # Some datasets are already processed on the HF google storage [/usr/local/lib/python3.7/dist-packages/datasets/builder.py](https://localhost:8080/#) in as_streaming_dataset(self, split, base_path) 1023 ) 1024 self._check_manual_download(dl_manager) -> 1025 splits_generators = {sg.name: sg for sg in self._split_generators(dl_manager)} 1026 # By default, return all splits 1027 if split is None: [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _split_generators(self, dl_manager) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in <listcomp>(.0) 182 name=datasets.Split.TRAIN, 183 gen_kwargs={ --> 184 "audio_archives": [download_audio(split="train", lang=lang) for lang in self.config.languages], 185 "local_audio_archives_paths": [download_extract_audio(split="train", lang=lang) for lang in 186 self.config.languages] if not dl_manager.is_streaming else None, [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives(dl_manager, lang, format, split) 267 # for streaming case 268 def _download_audio_archives(dl_manager, lang, format, split): --> 269 archives_paths = _download_audio_archives_paths(dl_manager, lang, format, split) 270 return [dl_manager.iter_archive(archive_path) for archive_path in archives_paths] [~/.cache/huggingface/modules/datasets_modules/datasets/MLCommons--ml_spoken_words/321ea853cf0a05abb7a2d7efea900692a3d8622af65a2f3ce98adb7800a5d57b/ml_spoken_words.py](https://localhost:8080/#) in _download_audio_archives_paths(dl_manager, lang, format, split) 251 n_files_path = dl_manager.download(n_files_url) 252 --> 253 with open(n_files_path, "r", encoding="utf-8") as file: 254 n_files = int(file.read().strip()) # the file contains a number of archives 255 ValueError: I/O operation on closed file. ``` ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
false
1,368,035,159
https://api.github.com/repos/huggingface/datasets/issues/4960
https://github.com/huggingface/datasets/issues/4960
4,960
BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema'
open
2
2022-09-09T16:06:43
2022-09-13T08:51:03
null
DSLituiev
[ "dataset bug" ]
## Describe the bug I am trying to load a dataset from drive and running into an error. ## Steps to reproduce the bug ```python data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) ``` ## Actual results `AttributeError: 'BuilderConfig' object has no attribute 'schema'` <details> ``` Using custom data configuration default-a1ca3e05be5abf2f --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Input In [8], in <cell line: 2>() 1 data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b" ----> 2 bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir) File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1723, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1720 ignore_verifications = ignore_verifications or save_infos 1722 # Create a dataset builder -> 1723 builder_instance = load_dataset_builder( 1724 path=path, 1725 name=name, 1726 data_dir=data_dir, 1727 data_files=data_files, 1728 cache_dir=cache_dir, 1729 features=features, 1730 download_config=download_config, 1731 download_mode=download_mode, 1732 revision=revision, 1733 use_auth_token=use_auth_token, 1734 **config_kwargs, 1735 ) 1737 # Return iterable dataset in case of streaming 1738 if streaming: File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/load.py:1526, in load_dataset_builder(path, name, data_dir, data_files, cache_dir, features, download_config, download_mode, revision, use_auth_token, **config_kwargs) 1523 raise ValueError(error_msg) 1525 # Instantiate the dataset builder -> 1526 builder_instance: DatasetBuilder = builder_cls( 1527 cache_dir=cache_dir, 1528 config_name=config_name, 1529 data_dir=data_dir, 1530 data_files=data_files, 1531 hash=hash, 1532 features=features, 1533 use_auth_token=use_auth_token, 1534 **builder_kwargs, 1535 **config_kwargs, 1536 ) 1538 return builder_instance File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:1154, in GeneratorBasedBuilder.__init__(self, writer_batch_size, *args, **kwargs) 1153 def __init__(self, *args, writer_batch_size=None, **kwargs): -> 1154 super().__init__(*args, **kwargs) 1155 # Batch size used by the ArrowWriter 1156 # It defines the number of samples that are kept in memory before writing them 1157 # and also the length of the arrow chunks 1158 # None means that the ArrowWriter will use its default value 1159 self._writer_batch_size = writer_batch_size or self.DEFAULT_WRITER_BATCH_SIZE File ~/opt/anaconda3/envs/spacy3/lib/python3.10/site-packages/datasets/builder.py:307, in DatasetBuilder.__init__(self, cache_dir, config_name, hash, base_path, info, features, use_auth_token, repo_id, data_files, data_dir, name, **config_kwargs) 305 if info is None: 306 info = self.get_exported_dataset_info() --> 307 info.update(self._info()) 308 info.builder_name = self.name 309 info.config_name = self.config.name File ~/.cache/huggingface/modules/datasets_modules/datasets/aps--bioasq_task_b/3d54b1213f7e8001eef755af92877f9efa44161ee83c2a70d5d649defa95759e/bioasq_task_b.py:477, in BioasqTaskBDataset._info(self) 474 def _info(self): 475 476 # BioASQ Task B source schema --> 477 if self.config.schema == "source": 478 features = datasets.Features( 479 { 480 "id": 
datasets.Value("string"), (...) 504 } 505 ) 506 # simplified schema for QA tasks AttributeError: 'BuilderConfig' object has no attribute 'schema' ``` </details> ## Environment info - `datasets` version: 2.4.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.3
false
1,367,924,429
https://api.github.com/repos/huggingface/datasets/issues/4959
https://github.com/huggingface/datasets/pull/4959
4,959
Fix data URLs of compguesswhat dataset
closed
1
2022-09-09T14:36:10
2022-09-09T16:01:34
2022-09-09T15:59:04
albertvillanova
[]
After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them: - https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1 This PR updates their data URLs in our loading script. Related to: - #3191
true
1,367,695,376
https://api.github.com/repos/huggingface/datasets/issues/4958
https://github.com/huggingface/datasets/issues/4958
4,958
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py
closed
1
2022-09-09T11:29:55
2022-09-09T11:38:44
2022-09-09T11:38:44
hasakikiki
[ "bug" ]
Hi, when I use load_dataset from local jsonl files, the error below happens, and typing the link into the browser returns `404: Not Found`. I downloaded the other `.py` files using the same method and they work. It seems that the server is missing the appropriate file, or it is a problem with the code version. ``` ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/2.3.0/datasets/jsonl/jsonl.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x2b08342004c0>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"))) ```
false
1,366,532,849
https://api.github.com/repos/huggingface/datasets/issues/4957
https://github.com/huggingface/datasets/pull/4957
4,957
Add `Dataset.from_generator`
closed
3
2022-09-08T15:08:25
2022-09-16T14:46:35
2022-09-16T14:44:18
mariosasko
[]
Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism. Closes https://github.com/huggingface/datasets/issues/4417
true
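A minimal usage sketch of the new method; the generator here is a toy example.

```python
from datasets import Dataset

def gen():
    # yields examples one by one, so data larger than RAM can be written progressively
    for i in range(5):
        yield {"id": i, "text": f"example {i}"}

# from_generator takes the callable, not an instantiated generator
ds = Dataset.from_generator(gen)
print(ds[0])  # {'id': 0, 'text': 'example 0'}
```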
1,366,475,160
https://api.github.com/repos/huggingface/datasets/issues/4956
https://github.com/huggingface/datasets/pull/4956
4,956
Fix TF tests for 2.10
closed
1
2022-09-08T14:39:10
2022-09-08T15:16:51
2022-09-08T15:14:44
Rocketknight1
[]
Fixes #4953
true
1,366,382,314
https://api.github.com/repos/huggingface/datasets/issues/4955
https://github.com/huggingface/datasets/issues/4955
4,955
Raise a more precise error when the URL is unreachable in streaming mode
open
0
2022-09-08T13:52:37
2022-09-08T13:53:36
null
severo
[ "enhancement" ]
See for example: - https://github.com/huggingface/datasets/issues/3191 - https://github.com/huggingface/datasets/issues/3186 It would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. Currently: - https://huggingface.co/datasets/compguesswhat <img width="1029" alt="Capture d’écran 2022-09-08 à 15 51 37" src="https://user-images.githubusercontent.com/1676121/189139946-6deffb91-f21b-4281-8825-a98026c69740.png"> - https://huggingface.co/datasets/nli_tr <img width="1032" alt="Capture d’écran 2022-09-08 à 15 51 44" src="https://user-images.githubusercontent.com/1676121/189139963-d26490ed-ad23-48ea-9cfc-1ab9c4d08d0c.png"> cc @albertvillanova
false
1,366,369,682
https://api.github.com/repos/huggingface/datasets/issues/4954
https://github.com/huggingface/datasets/pull/4954
4,954
Pin TensorFlow temporarily
closed
1
2022-09-08T13:46:15
2022-09-08T14:12:33
2022-09-08T14:10:03
albertvillanova
[]
Temporarily fix TensorFlow until a permanent solution is found. Related to: - #4953
true
1,366,356,514
https://api.github.com/repos/huggingface/datasets/issues/4953
https://github.com/huggingface/datasets/issues/4953
4,953
CI test of TensorFlow is failing
closed
0
2022-09-08T13:39:29
2022-09-08T15:14:45
2022-09-08T15:14:45
albertvillanova
[ "bug" ]
## Describe the bug The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true ``` FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError: ``` Details: ``` _________________________ TempSeedTest.test_tensorflow _________________________ [gw0] linux -- Python 3.7.13 /opt/hostedtoolcache/Python/3.7.13/x64/bin/python self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow> @require_tf def test_tensorflow(self): import tensorflow as tf from tensorflow.keras import layers def gen_random_output(): model = layers.Dense(2) x = tf.random.uniform((1, 3)) return model(x).numpy() with temp_seed(42, set_tensorflow=True): out1 = gen_random_output() with temp_seed(42, set_tensorflow=True): out2 = gen_random_output() out3 = gen_random_output() > np.testing.assert_equal(out1, out2) E AssertionError: E Arrays are not equal E E Mismatched elements: 2 / 2 (100%) E Max absolute difference: 0.84619296 E Max relative difference: 16.083529 E x: array([[-0.793581, 0.333286]], dtype=float32) E y: array([[0.052612, 0.539708]], dtype=float32) tests/test_py_utils.py:149: AssertionError ```
false
1,366,354,604
https://api.github.com/repos/huggingface/datasets/issues/4952
https://github.com/huggingface/datasets/pull/4952
4,952
Add test-datasets CI job
closed
2
2022-09-08T13:38:30
2023-09-24T10:05:57
2022-09-16T13:25:48
lhoestq
[]
To avoid having too many conflicts in the datasets and metrics dependencies, I split the CI into test and test-catalog. test covers the core of the `datasets` lib, while test-catalog tests the dataset scripts and metrics scripts. This also makes `pip install -e .[dev]` much smaller for developers. WDYT @albertvillanova ?
true
1,365,954,814
https://api.github.com/repos/huggingface/datasets/issues/4951
https://github.com/huggingface/datasets/pull/4951
4,951
Fix license information in qasc dataset card
closed
1
2022-09-08T10:04:39
2022-09-08T14:54:47
2022-09-08T14:52:05
albertvillanova
[]
This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0: - https://github.com/allenai/qasc/issues/5
true
1,365,458,633
https://api.github.com/repos/huggingface/datasets/issues/4950
https://github.com/huggingface/datasets/pull/4950
4,950
Update Enwik8 broken link and information
closed
1
2022-09-08T03:15:00
2022-09-24T22:14:35
2022-09-08T14:51:00
mtanghu
[]
The current enwik8 dataset link gives a 502 bad gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, as well as adding a little more information about enwik8.
true
1,365,251,916
https://api.github.com/repos/huggingface/datasets/issues/4949
https://github.com/huggingface/datasets/pull/4949
4,949
Update enwik8 fixing the broken link
closed
1
2022-09-07T22:17:14
2022-09-08T03:14:04
2022-09-08T03:14:04
mtanghu
[]
The current enwik8 dataset link gives a 502 bad gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, as well as adding a little more information about enwik8.
true
1,364,973,778
https://api.github.com/repos/huggingface/datasets/issues/4948
https://github.com/huggingface/datasets/pull/4948
4,948
Fix minor typo in error message for missing imports
closed
1
2022-09-07T17:20:51
2022-09-08T14:59:31
2022-09-08T14:57:15
mariosasko
[]
null
true
1,364,967,957
https://api.github.com/repos/huggingface/datasets/issues/4947
https://github.com/huggingface/datasets/pull/4947
4,947
Try to fix the Windows CI after TF update 2.10
closed
1
2022-09-07T17:14:49
2023-09-24T10:05:38
2022-09-08T09:13:10
lhoestq
[]
null
true
1,364,692,069
https://api.github.com/repos/huggingface/datasets/issues/4946
https://github.com/huggingface/datasets/pull/4946
4,946
Introduce regex check when pushing as well
closed
2
2022-09-07T13:45:58
2022-09-13T10:19:01
2022-09-13T10:16:34
LysandreJik
[]
Closes https://github.com/huggingface/datasets/issues/4945 by adding a regex check when pushing to the Hub. Let me know if this is helpful and if it's the fix you would have in mind for the issue; I'm happy to contribute tests.
true
1,364,691,096
https://api.github.com/repos/huggingface/datasets/issues/4945
https://github.com/huggingface/datasets/issues/4945
4,945
Push to hub can push splits that do not respect the regex
closed
0
2022-09-07T13:45:17
2022-09-13T10:16:35
2022-09-13T10:16:35
LysandreJik
[ "bug" ]
## Describe the bug The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing. ## Steps to reproduce the bug ```python >>> from datasets import Dataset, DatasetDict, load_dataset >>> d = Dataset.from_dict({'x': [1,2,3], 'y': [1,2,3]}) >>> di = DatasetDict() >>> di['identifier-with-column'] = d >>> di.push_to_hub('open-source-metrics/test') Pushing split identifier-with-column to the Hub. Pushing dataset shards to the dataset hub: 100%|██████████| 1/1 [00:04<00:00, 4.40s/it] ``` Loading it afterwards: ```python >>> load_dataset('open-source-metrics/test') Downloading: 100%|██████████| 610/610 [00:00<00:00, 432kB/s] Using custom data configuration open-source-metrics--test-28b63ec7cde80488 Downloading and preparing dataset None/None (download: 950 bytes, generated: 48 bytes, post-processed: Unknown size, total: 998 bytes) to /home/lysandre/.cache/huggingface/datasets/open-source-metrics___parquet/open-source-metrics--test-28b63ec7cde80488/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec... Downloading data files: 0%| | 0/1 [00:00<?, ?it/s] Downloading data: 100%|██████████| 950/950 [00:00<00:00, 1.01MB/s] Downloading data files: 100%|██████████| 1/1 [00:01<00:00, 1.48s/it] Extracting data files: 100%|██████████| 1/1 [00:00<00:00, 2291.97it/s] Traceback (most recent call last): File "/home/lysandre/.pyenv/versions/3.10.6/lib/python3.10/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/load.py", line 1746, in load_dataset builder_instance.download_and_prepare( File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/builder.py", line 771, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 48, in _split_generators splits.append(datasets.SplitGenerator(name=split_name, gen_kwargs={"files": files})) File "<string>", line 5, in __init__ File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 599, in __post_init__ NamedSplit(self.name) # check that it's a valid split name File "/home/lysandre/Workspaces/python/Metrics/GitHub-Metrics/.env/lib/python3.10/site-packages/datasets/splits.py", line 346, in __init__ raise ValueError(f"Split name should match '{_split_re}' but got '{split_name}'.") ValueError: Split name should match '^\w+(\.\w+)*$' but got 'identifier-with-column'. ``` ## Expected results I would expect `push_to_hub` to stop me in my tracks if trying to upload a split that will not be working afterwards. ## Actual results See above ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.15.64-1-lts-x86_64-with-glibc2.36 - Python version: 3.10.6 - PyArrow version: 9.0.0 - Pandas version: 1.4.4
false
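For reference, a hedged sketch of a push that satisfies the split-name check (`^\w+(\.\w+)*$`); the repo id is the one from the report and would require write access, and the renamed split is only a suggested workaround.

```python
from datasets import Dataset, DatasetDict

d = Dataset.from_dict({"x": [1, 2, 3], "y": [1, 2, 3]})

di = DatasetDict()
# "identifier-with-column" fails the '^\w+(\.\w+)*$' check at load time,
# so use a name made of word characters (and optional dots) instead:
di["identifier_with_column"] = d
di.push_to_hub("open-source-metrics/test")
```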
1,364,313,569
https://api.github.com/repos/huggingface/datasets/issues/4944
https://github.com/huggingface/datasets/issues/4944
4,944
larger dataset, larger GPU memory in the training phase? Is that correct?
closed
2
2022-09-07T08:46:30
2022-09-07T12:34:58
2022-09-07T12:34:58
debby1103
[ "bug" ]
from datasets import set_caching_enabled set_caching_enabled(False) for ds_name in ["squad","newsqa","nqopen","narrativeqa"]: train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name)) break train_ds = concatenate_datasets([train_ds,train_ds,train_ds,train_ds]) #operation 1 trainer = QuestionAnsweringTrainer( #huggingface trainer model=model, args=training_args, train_dataset=train_ds, eval_dataset= None, eval_examples=None, answer_column_name=answer_column, dataset_name="squad", tokenizer=tokenizer, data_collator=data_collator, compute_metrics=compute_metrics if training_args.predict_with_generate else None, ) with operation 1, the GPU memory increases from 16G to 23G
false
1,363,967,650
https://api.github.com/repos/huggingface/datasets/issues/4943
https://github.com/huggingface/datasets/pull/4943
4,943
Add splits to MBPP dataset
closed
4
2022-09-07T01:18:31
2022-09-13T12:29:19
2022-09-13T12:27:21
cwarny
[]
This PR addresses https://github.com/huggingface/datasets/issues/4795
true
1,363,869,421
https://api.github.com/repos/huggingface/datasets/issues/4942
https://github.com/huggingface/datasets/issues/4942
4,942
Trec Dataset has incorrect labels
closed
1
2022-09-06T22:13:40
2022-09-08T11:12:03
2022-09-08T11:12:03
wmpauli
[ "bug" ]
## Describe the bug Both coarse and fine labels seem to be out of line. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = "trec" raw_datasets = load_dataset(dataset) df = pd.DataFrame(raw_datasets["test"]) df.head() ``` ## Expected results text (string) | coarse_label (class label) | fine_label (class label) -- | -- | -- How far is it from Denver to Aspen ? | 5 (NUM) | 40 (NUM:dist) What county is Modesto , California in ? | 4 (LOC) | 32 (LOC:city) Who was Galileo ? | 3 (HUM) | 31 (HUM:desc) What is an atom ? | 2 (DESC) | 24 (DESC:def) When did Hawaii become a state ? | 5 (NUM) | 39 (NUM:date) ## Actual results index | label-coarse |label-fine | text -- |-- | -- | -- 0 | 4 | 40 | How far is it from Denver to Aspen ? 1 | 5 | 21 | What county is Modesto , California in ? 2 | 3 | 12 | Who was Galileo ? 3 | 0 | 7 | What is an atom ? 4 | 4 | 8 | When did Hawaii become a state ? ## Environment info - `datasets` version: 2.4.0 - Platform: Linux-5.4.0-1086-azure-x86_64-with-glibc2.27 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,363,622,861
https://api.github.com/repos/huggingface/datasets/issues/4941
https://github.com/huggingface/datasets/pull/4941
4,941
Add Papers with Code ID to scifact dataset
closed
1
2022-09-06T17:46:37
2022-09-06T18:28:17
2022-09-06T18:26:01
albertvillanova
[]
This PR: - adds Papers with Code ID - forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true
true
1,363,513,058
https://api.github.com/repos/huggingface/datasets/issues/4940
https://github.com/huggingface/datasets/pull/4940
4,940
Fix multilinguality tag and missing sections in xquad_r dataset card
closed
1
2022-09-06T16:05:35
2022-09-12T10:11:07
2022-09-12T10:08:48
albertvillanova
[]
This PR fixes issue reported on the Hub: - Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1
true
1,363,468,679
https://api.github.com/repos/huggingface/datasets/issues/4939
https://github.com/huggingface/datasets/pull/4939
4,939
Fix NonMatchingChecksumError in adv_glue dataset
closed
1
2022-09-06T15:31:16
2022-09-06T17:42:10
2022-09-06T17:39:16
albertvillanova
[]
Fix issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1
true
1,363,429,228
https://api.github.com/repos/huggingface/datasets/issues/4938
https://github.com/huggingface/datasets/pull/4938
4,938
Remove main branch rename notice
closed
1
2022-09-06T15:03:05
2022-09-06T16:46:11
2022-09-06T16:43:53
lhoestq
[]
We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months). I also unpinned the GitHub issue about the branch renaming.
true
1,363,426,946
https://api.github.com/repos/huggingface/datasets/issues/4937
https://github.com/huggingface/datasets/pull/4937
4,937
Remove deprecated identical_ok
closed
1
2022-09-06T15:01:24
2022-09-06T22:24:09
2022-09-06T22:21:57
lhoestq
[]
`huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated and will be removed soon. It even has no effect at the moment when it's passed: ```python Args: ... identical_ok (`bool`, *optional*, defaults to `True`): Deprecated: will be removed in 0.11.0. Changing this value has no effect. ... ``` There was only one occurrence of `identical_ok=False`, but it's maybe not worth adding a check to verify that the files were the same. cc @mariosasko
true
1,363,274,907
https://api.github.com/repos/huggingface/datasets/issues/4936
https://github.com/huggingface/datasets/issues/4936
4,936
vivos (Vietnamese speech corpus) dataset not accessible
closed
3
2022-09-06T13:17:55
2022-09-21T06:06:02
2022-09-12T07:14:20
polinaeterna
[ "dataset bug" ]
## Describe the bug VIVOS data is not accessible anymore, neither of these links work (at least from France): * https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data) * https://ailab.hcmus.edu.vn/vivos (dataset page) Therefore `load_dataset` doesn't work. ## Steps to reproduce the bug ```python ds = load_dataset("vivos") ``` ## Expected results dataset loaded ## Actual results ``` ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))"))) ``` Will try to contact the authors, as we wanted to use Vivos as an example in documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small and straightforward and uses tar archives.
false
1,363,226,736
https://api.github.com/repos/huggingface/datasets/issues/4935
https://github.com/huggingface/datasets/issues/4935
4,935
Dataset Viewer issue for ubuntu_dialogs_corpus
closed
1
2022-09-06T12:41:50
2022-09-06T12:51:25
2022-09-06T12:51:25
CibinQuadance
[ "dataset-viewer" ]
### Link _No response_ ### Description _No response_ ### Owner _No response_
false
1,363,034,253
https://api.github.com/repos/huggingface/datasets/issues/4934
https://github.com/huggingface/datasets/issues/4934
4,934
Dataset Viewer issue for indonesian-nlp/librivox-indonesia
closed
6
2022-09-06T10:03:23
2022-09-06T12:46:40
2022-09-06T12:46:40
cahya-wirawan
[]
### Link https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia ### Description I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work, with the following error message: ``` Server error Status code: 400 Exception: TypeError Message: unsupported operand type(s) for +: 'NoneType' and 'str' ``` Please help, I am not sure what the problem is here. Thanks a lot. ### Owner Yes
false
1,363,013,023
https://api.github.com/repos/huggingface/datasets/issues/4933
https://github.com/huggingface/datasets/issues/4933
4,933
Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
closed
2
2022-09-06T09:47:48
2022-09-06T11:44:27
2022-09-06T11:44:27
tianjianjiang
[ "bug" ]
## Describe the bug `Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. ## Steps to reproduce the bug (In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.) ```python from datasets import load_dataset ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead? ds_mc4_ja_2020 = ds_mc4_ja.filter( lambda example: example["timestamp"][:4] == "2020", batched=True, ) ``` ## Expected results No error ## Actual results ```python --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker result = (True, func(*args, **kwds)) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single offset=offset, File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated result = f(decorated_item, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function indices_array = [i for i, to_keep in zip(indices, mask) if to_keep] TypeError: zip argument #2 must support iteration """ The above exception was the direct cause of the following exception: TypeError Traceback (most recent call last) /tmp/ipykernel_51348/2345782281.py in <module> 7 batched=True, 8 # batch_size=10_000, ----> 9 num_proc=111, 10 ) 11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter( /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 878 desc=desc, 879 ) --> 880 for k, dataset in self.items() 881 } 882 ) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 522 } 523 # apply actual function --> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 526 # re-apply format to the output /opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 478 # Call actual function 479 --> 480 out = func(self, *args, **kwargs) 481 482 # Update fingerprint of in-place transforms + update in-place history of transforms /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, 
fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2920 new_fingerprint=new_fingerprint, 2921 input_columns=input_columns, -> 2922 desc=desc, 2923 ) 2924 new_dataset = copy.deepcopy(self) /opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 2498 2499 for index, async_result in results.items(): -> 2500 transformed_shards[index] = async_result.get() 2501 2502 assert ( /opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout) 655 return self._value 656 else: --> 657 raise self._value 658 659 def _set(self, i, obj): TypeError: zip argument #2 must support iteration ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12 - Python version: 3.7.12 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 (I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.)
false
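For reference, a sketch of a batched filter function that returns one boolean per example, which is what `filter(..., batched=True)` expects; this is a suggested usage pattern, not a confirmed fix from the maintainers.

```python
from datasets import load_dataset

# Same dataset as in the report; note it is very large and slow to download.
ds_mc4_ja = load_dataset("mc4", "ja")

# In batched mode, batch["timestamp"] is a list, so return one boolean per example:
ds_mc4_ja_2020 = ds_mc4_ja.filter(
    lambda batch: [ts[:4] == "2020" for ts in batch["timestamp"]],
    batched=True,
)
```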
1,362,522,423
https://api.github.com/repos/huggingface/datasets/issues/4932
https://github.com/huggingface/datasets/issues/4932
4,932
Dataset Viewer issue for bigscience-biomedical/biosses
closed
4
2022-09-05T22:40:32
2022-09-06T14:24:56
2022-09-06T14:24:56
galtay
[]
### Link https://huggingface.co/datasets/bigscience-biomedical/biosses ### Description I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be). ``` Status code: 400 Exception: ModuleNotFoundError Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub' ``` ### Owner Yes
false
1,362,298,764
https://api.github.com/repos/huggingface/datasets/issues/4931
https://github.com/huggingface/datasets/pull/4931
4,931
Fix missing tags in dataset cards
closed
1
2022-09-05T17:03:04
2022-09-22T12:40:15
2022-09-06T05:39:29
albertvillanova
[]
Fix missing tags in dataset cards: - coqa - hyperpartisan_news_detection - opinosis - scientific_papers - scifact - search_qa - wiki_qa - wiki_split - wikisql This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task. Related to: - #4833 - #4891 - #4896 - #4908 - #4921
true