Dataset schema (column: type, observed range):

id: int64 (599M to 3.29B)
url: string (length 58 to 61)
html_url: string (length 46 to 51)
number: int64 (1 to 7.72k)
title: string (length 1 to 290)
state: string (2 classes)
comments: int64 (0 to 70)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-08-05 09:28:51)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-08-05 11:39:56)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-08-01 05:15:45)
user_login: string (length 3 to 26)
labels: list (length 0 to 4)
body: string (length 0 to 228k)
is_pull_request: bool (2 classes)
1,274,010,628
https://api.github.com/repos/huggingface/datasets/issues/4518
https://github.com/huggingface/datasets/pull/4518
4,518
Patch tests for hfh v0.8.0
closed
1
2022-06-16T19:45:32
2022-06-17T16:15:57
2022-06-17T16:06:07
LysandreJik
[]
This PR patches testing utilities that would otherwise fail with hfh v0.8.0.
true
1,273,960,476
https://api.github.com/repos/huggingface/datasets/issues/4517
https://github.com/huggingface/datasets/pull/4517
4,517
Add tags for task_ids:summarization-* and task_categories:summarization*
closed
2
2022-06-16T18:52:25
2022-07-08T15:14:23
2022-07-08T15:02:31
hobson
[]
The YAML header at the top of the README.md file was edited to add task tags, because I couldn't find the existing tags in the JSON. A separate pull request will modify dataset_infos.json to add these tags. The Enron dataset (dataset id aeslc) is currently only tagged with: arxiv:1906.03497, languages:en, pretty_name:AESLC. Using the email subject_line field as a label or target variable, it is possible to create models for the following task_ids (in order of relevance): 'task_ids:summarization', 'task_ids:summarization-other-conversations-summarization', "task_ids:other-other-query-based-multi-document-summarization", 'task_ids:summarization-other-aspect-based-summarization', 'task_ids:summarization--other-headline-generation'. The subject might also be used for the task_category "task_categories:summarization". E-mail chains might be used for the task category "task_categories:dialogue-system".
true
1,273,825,640
https://api.github.com/repos/huggingface/datasets/issues/4516
https://github.com/huggingface/datasets/pull/4516
4,516
Fix hashing for python 3.9
closed
2
2022-06-16T16:42:31
2022-06-28T13:33:46
2022-06-28T13:23:06
lhoestq
[]
In python 3.9, pickle hashes the `glob_ids` dictionary in addition to the `globs` of a function. Therefore the test at `tests/test_fingerprint.py::RecurseDumpTest::test_recurse_dump_for_function_with_shuffled_globals` is currently failing for python 3.9. To make hashing deterministic when the globals are not in the same order, we also need to make the order of `glob_ids` deterministic. Right now we don't have a CI to test python 3.9, but we should definitely have one. For this PR in particular, I ran the tests locally using python 3.9 and they're passing now. Fix https://github.com/huggingface/datasets/issues/4506
true
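The determinism problem fixed by this PR can be illustrated with plain `pickle`; the following is a minimal sketch of the idea (serializing a mapping's items in sorted order), not the actual `datasets` implementation.

```python
import pickle

# Two dicts with the same items inserted in different orders.
a = {"x": 1, "y": 2}
b = {"y": 2, "x": 1}

# pickle preserves dict insertion order, so the serialized bytes differ,
# and any hash derived from those bytes differs too.
assert pickle.dumps(a) != pickle.dumps(b)

def canonical_dump(d: dict) -> bytes:
    # Sorting the items first makes the serialization order-independent.
    return pickle.dumps(sorted(d.items()))

assert canonical_dump(a) == canonical_dump(b)
```

The same principle applies to `glob_ids`: once its entries are serialized in a deterministic order, shuffled globals no longer change the fingerprint.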
1,273,626,131
https://api.github.com/repos/huggingface/datasets/issues/4515
https://github.com/huggingface/datasets/pull/4515
4,515
Add uppercased versions of image file extensions for automatic module inference
closed
1
2022-06-16T14:14:49
2022-06-16T17:21:53
2022-06-16T17:11:41
mariosasko
[]
Adds the uppercased versions of the image file extensions to the supported extensions. Another approach would be to call `.lower()` on extensions while resolving data files, but uppercased extensions are not something we want to encourage out of the box IMO, unless they are commonly used (as they are in the vision domain). Note that there is a slight discrepancy between the image file resolution and `imagefolder`, as the latter calls `.lower()` on file extensions, leading to some image file extensions being ignored by the resolution but not by the loader (e.g. `pNg`). Such extensions should also be discouraged, so I'm ignoring that case too. Fix #4514.
true
1,273,505,230
https://api.github.com/repos/huggingface/datasets/issues/4514
https://github.com/huggingface/datasets/issues/4514
4,514
Allow .JPEG as a file extension
closed
2
2022-06-16T12:36:20
2022-06-20T08:18:46
2022-06-16T17:11:40
DiGyt
[ "bug" ]
## Describe the bug

When loading image data, HF datasets recognizes the `.jpg` and `.jpeg` file extensions, but not e.g. `.JPEG`. As the naming convention `.JPEG` is used in important datasets such as ImageNet, it would be welcome if the corresponding uppercased extensions like `.JPEG` or `.JPG` were allowed.

## Steps to reproduce the bug

```python
# use bash to create 2 sham datasets with jpeg and JPEG ext
!mkdir dataset_a
!mkdir dataset_b
!wget https://upload.wikimedia.org/wikipedia/commons/7/71/Dsc_%28179253513%29.jpeg -O example_img.jpeg
!cp example_img.jpeg ./dataset_a/
!mv example_img.jpeg ./dataset_b/example_img.JPEG

from datasets import load_dataset

# working
df1 = load_dataset("./dataset_a", ignore_verifications=True)

# not working
df2 = load_dataset("./dataset_b", ignore_verifications=True)

# show
print(df1, df2)
```

## Expected results

```
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 1
    })
})
DatasetDict({
    train: Dataset({
        features: ['image', 'label'],
        num_rows: 1
    })
})
```

## Actual results

```
FileNotFoundError: Unable to resolve any data file that matches '['**']' at /..PATH../dataset_b with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
```

I know that it can be annoying to allow seemingly arbitrary numbers of file extensions, but I think this one would be really welcome.
false
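Until uppercased extensions are supported, one workaround is to normalize filenames before loading. The helper below is a hypothetical sketch (not part of `datasets`) that renames files so loaders matching only lowercase extensions can see them:

```python
from pathlib import Path

def lowercase_extensions(root: str) -> None:
    """Rename files so their suffix is lowercase (e.g. .JPEG -> .jpeg),
    making them visible to loaders that only match lowercase extensions."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix != path.suffix.lower():
            path.rename(path.with_suffix(path.suffix.lower()))
```

Running `lowercase_extensions("./dataset_b")` before `load_dataset` would sidestep the issue in the reproduction above.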
1,273,450,338
https://api.github.com/repos/huggingface/datasets/issues/4513
https://github.com/huggingface/datasets/pull/4513
4,513
Update Google Cloud Storage documentation and add Azure Blob Storage example
closed
5
2022-06-16T11:46:09
2022-06-23T17:05:11
2022-06-23T16:54:59
alvarobartt
[ "documentation" ]
While I was going through the 🤗 Datasets documentation of the cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved: e.g. a bullet point says "Load your dataset" when the actual call is to "Save your dataset", an in-line code comment mentions an "s3 bucket" instead of a "gcs bucket", and some more in-line comments could be included. Also, I think that mixing the Google Cloud Storage documentation with AWS S3's was a little bit confusing, so I moved all of it to the end of the document under an h2 tab named "Other filesystems", with an h3 for "Google Cloud Storage". Besides that, I was currently working with Azure Blob Storage and found out that the URL to [adlfs](https://github.com/fsspec/adlfs) is common to both Azure Blob Storage and Azure DataLake Storage, so I grouped those under the same row in the column of supported filesystems (and updated the URL, even though the redirect was working fine). I also took the chance to add a small documentation entry for Azure Blob Storage, like the one for Google Cloud Storage, as I assume that AWS S3, GCP Cloud Storage, and Azure Blob Storage are the most used cloud storage providers. Let me know if you're OK with these changes, or whether you want me to roll back some of them! :hugs:
true
1,273,378,129
https://api.github.com/repos/huggingface/datasets/issues/4512
https://github.com/huggingface/datasets/pull/4512
4,512
Add links to vision tasks scripts in ADD_NEW_DATASET template
closed
2
2022-06-16T10:35:35
2022-07-08T14:07:50
2022-07-08T13:56:23
mariosasko
[]
Add links to vision dataset scripts in the ADD_NEW_DATASET template.
true
1,273,336,874
https://api.github.com/repos/huggingface/datasets/issues/4511
https://github.com/huggingface/datasets/pull/4511
4,511
Support all negative values in ClassLabel
closed
4
2022-06-16T09:59:39
2025-07-23T18:38:15
2022-06-16T13:54:07
lhoestq
[]
We usually use -1 to represent a missing label, but we should also support any negative values (some users use -100 for example). This is a regression from `datasets` 2.3 Fix https://github.com/huggingface/datasets/issues/4508
true
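The intended behavior can be sketched as a simple validation rule (a hypothetical helper, not the actual `ClassLabel` code): any negative value passes through as a missing-label sentinel, while non-negative values must be valid class ids.

```python
def validate_label(label: int, num_classes: int) -> int:
    # Any negative value (-1, -100, ...) is accepted as "missing".
    if label < 0:
        return label
    if label >= num_classes:
        raise ValueError(
            f"Class label {label} greater than configured num_classes {num_classes}"
        )
    return label

assert validate_label(-1, 3) == -1
assert validate_label(-100, 3) == -100  # the case this PR restores
assert validate_label(2, 3) == 2
```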
1,273,260,396
https://api.github.com/repos/huggingface/datasets/issues/4510
https://github.com/huggingface/datasets/pull/4510
4,510
Add regression test for `ArrowWriter.write_batch` when batch is empty
closed
2
2022-06-16T08:53:51
2022-06-16T12:38:02
2022-06-16T12:28:19
alvarobartt
[]
As spotted by @cccntu in #4502, there's a logic bug in `ArrowWriter.write_batch`: the if-statement meant to handle empty batches (per the function's docstring, "Ignores the batch if it appears to be empty, preventing a potential schema update of unknown types.") does not properly handle `writer.write_batch({})`, which triggers an error instead. If we add a regression test in `test_arrow_writer.py::test_write_batch` before applying the fix, the test fails when trying to write an empty batch:

```
=================================================================================== short test summary info ===================================================================================
FAILED tests/test_arrow_writer.py::test_write_batch[None-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[None-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[None-10] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields1-10] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-None] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-1] - ValueError: Schema and number of arrays unequal
FAILED tests/test_arrow_writer.py::test_write_batch[fields2-10] - ValueError: Schema and number of arrays unequal
======================================================================== 9 failed, 73 deselected, 7 warnings in 0.81s =========================================================================
```

So the batch is not ignored when empty, as `batch_examples={}` doesn't match the condition `if batch_examples: ...`.
true
1,273,227,760
https://api.github.com/repos/huggingface/datasets/issues/4509
https://github.com/huggingface/datasets/pull/4509
4,509
Support skipping Parquet to Arrow conversion when using Beam
closed
3
2022-06-16T08:25:38
2022-11-07T16:22:41
2022-11-07T16:22:41
albertvillanova
[]
null
true
1,272,718,921
https://api.github.com/repos/huggingface/datasets/issues/4508
https://github.com/huggingface/datasets/issues/4508
4,508
cast_storage method from datasets.features
closed
2
2022-06-15T20:47:22
2022-06-16T13:54:07
2022-06-16T13:54:07
romainremyb
[ "bug" ]
## Describe the bug

A bug occurs when mapping a function to a dataset object. I ran the same code with the same data yesterday and it worked just fine. It works when I run it locally on an old version of datasets.

## Steps to reproduce the bug

Steps are:
- load whatever dataset
- write a preprocessing function such as the "tokenize_and_align_labels" written in https://huggingface.co/docs/transformers/tasks/token_classification
- map the function on the dataset and get "ValueError: Class label -100 less than -1" from the cast_storage method from datasets.features

Sample code to reproduce the bug:

```python
def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True, max_length=38, padding="max_length")
    labels = []
    for i, label in enumerate(examples[f"labels"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)  # Map tokens to their respective word.
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            if word_idx is None:
                # Set the special tokens to -100.
                label_ids.append(-100)
            elif word_idx != previous_word_idx:
                # Only label the first token of a given word.
                label_ids.append(label[word_idx])
            else:
                label_ids.append(-100)
            previous_word_idx = word_idx
        labels.append(label_ids)
    tokenized_inputs["labels"] = labels
    return tokenized_inputs

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dt = dataset.map(tokenize_and_align_labels, batched=True)
```

## Expected results

New dataset objects should load, as they do on older versions.

## Actual results

"ValueError: Class label -100 less than -1" from the cast_storage method from datasets.features

## Environment info

Everything works fine on older installations of datasets/transformers. The issue arises when installing datasets on Google Colab under Python 3.7. I can't manage to find the exact output you're requiring, but the version printed is datasets-2.3.2.
false
1,272,615,932
https://api.github.com/repos/huggingface/datasets/issues/4507
https://github.com/huggingface/datasets/issues/4507
4,507
How to let `load_dataset` return a `Dataset` instead of `DatasetDict` in customized loading script
closed
2
2022-06-15T18:56:34
2022-06-16T10:40:08
2022-06-16T10:40:08
liyucheng09
[ "enhancement" ]
If the dataset does not need splits (i.e., no training and validation split; more like a table), how can I let the `load_dataset` function return a `Dataset` object directly, rather than a `DatasetDict` object with only one key-value pair? Or I can paraphrase the question in the following way: how can I skip the `_split_generators` step in `DatasetBuilder` so that `as_dataset` gives a single `Dataset` rather than a `list[Dataset]`? Many thanks for any help.
false
1,272,516,895
https://api.github.com/repos/huggingface/datasets/issues/4506
https://github.com/huggingface/datasets/issues/4506
4,506
Failure to hash (and cache) a `.map(...)` (almost always) - using this method can produce incorrect results
closed
5
2022-06-15T17:11:31
2023-02-16T03:14:32
2022-06-28T13:23:05
DrMatters
[ "bug" ]
## Describe the bug

Sometimes I get messages about not being able to hash a method:

`Parameter 'function'=<function StupidDataModule._separate_speaker_id_from_dialogue at 0x7f1b27180d30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`

Whilst the function looks like this:

```python
@staticmethod
def _separate_speaker_id_from_dialogue(example: arrow_dataset.Example):
    speaker_id, dialogue = tuple(zip(*(example["dialogue"])))
    example["speaker_id"] = speaker_id
    example["dialogue"] = dialogue
    return example
```

This is the first step in my preprocessing pipeline, but sometimes the message about failure to hash does not appear on the first step and then appears on a later step. This error sometimes causes a failure to use cached data, instead re-running all steps again.

## Steps to reproduce the bug

```python
import copy
import datasets
from datasets import arrow_dataset


def main():
    dataset = datasets.load_dataset("blended_skill_talk")
    res = dataset.map(method)
    print(res)


def method(example: arrow_dataset.Example):
    example['previous_utterance_copy'] = copy.deepcopy(example['previous_utterance'])
    return example


if __name__ == '__main__':
    main()
```

Run with:

```
python -m reproduce_error
```

## Expected results

Dataset is mapped and cached correctly.

## Actual results

The code outputs this at some point:

`Parameter 'function'=<function method at 0x7faa83d2a160> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.`

## Environment info

- `datasets` version: 2.3.1
- Platform: Ubuntu 20.04.3
- Python version: 3.9.12
- PyArrow version: 8.0.0
false
1,272,477,226
https://api.github.com/repos/huggingface/datasets/issues/4505
https://github.com/huggingface/datasets/pull/4505
4,505
Fix double dots in data files
closed
2
2022-06-15T16:31:04
2022-06-15T17:15:58
2022-06-15T17:05:53
lhoestq
[]
As mentioned in https://github.com/huggingface/transformers/pull/17715, `data_files` can't find a file if the path contains double dots `/../`. This was introduced in https://github.com/huggingface/datasets/pull/4412 by trying to ignore hidden files and directories (i.e. those that start with a dot). I fixed this and added a test. cc @sgugger @ydshieh
true
1,272,418,480
https://api.github.com/repos/huggingface/datasets/issues/4504
https://github.com/huggingface/datasets/issues/4504
4,504
Can you please add the Stanford dog dataset?
closed
16
2022-06-15T15:39:35
2024-12-09T15:44:11
2023-10-18T18:55:30
dgrnd4
[ "good first issue", "dataset request" ]
## Adding a Dataset

- **Name:** *Stanford dog dataset*
- **Description:** *The dataset has 120 classes for a total of 20,580 images. You can find the dataset here: http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Paper:** *http://vision.stanford.edu/aditya86/ImageNetDogs/*
- **Data:** *[link to the Github repository or current dataset location](http://vision.stanford.edu/aditya86/ImageNetDogs/)*
- **Motivation:** *The dataset has been built using images and annotations from ImageNet for the task of fine-grained image categorization. It is useful for fine-grained purposes.*

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,272,367,055
https://api.github.com/repos/huggingface/datasets/issues/4503
https://github.com/huggingface/datasets/pull/4503
4,503
Refactor and add metadata to fever dataset
closed
5
2022-06-15T14:59:47
2022-07-06T11:54:15
2022-07-06T11:41:30
albertvillanova
[]
Related to: #4452 and #3792.
true
1,272,353,700
https://api.github.com/repos/huggingface/datasets/issues/4502
https://github.com/huggingface/datasets/issues/4502
4,502
Logic bug in arrow_writer?
closed
10
2022-06-15T14:50:00
2022-06-18T15:15:51
2022-06-18T15:15:51
changjonathanc
[]
https://github.com/huggingface/datasets/blob/88a902d6474fae8d793542d57a4f3b0d187f3c5b/src/datasets/arrow_writer.py#L475-L488

I got an error, and I found it's caused by `batch_examples` being `{}`. I wonder if the code should be as follows:

```
- if batch_examples and len(next(iter(batch_examples.values()))) == 0:
+ if not batch_examples or len(next(iter(batch_examples.values()))) == 0:
      return
```

@lhoestq
false
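The truthiness problem is easy to demonstrate in isolation. Below is a minimal sketch of the guard before and after the proposed change (standalone functions, not the actual `ArrowWriter` code):

```python
def is_empty_batch_buggy(batch_examples: dict) -> bool:
    # Original guard: `{}` is falsy, so an empty dict is NOT flagged
    # as empty and falls through to the writing path, raising later.
    return bool(batch_examples) and len(next(iter(batch_examples.values()))) == 0

def is_empty_batch_fixed(batch_examples: dict) -> bool:
    # Proposed guard: treat `{}` as an empty batch too.
    return not batch_examples or len(next(iter(batch_examples.values()))) == 0

assert is_empty_batch_buggy({}) is False        # the bug: {} slips through
assert is_empty_batch_fixed({}) is True         # the fix: {} is skipped
assert is_empty_batch_fixed({"col": []}) is True
assert is_empty_batch_fixed({"col": [1]}) is False
```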
1,272,300,646
https://api.github.com/repos/huggingface/datasets/issues/4501
https://github.com/huggingface/datasets/pull/4501
4,501
Corrected broken links in doc
closed
1
2022-06-15T14:12:17
2022-06-15T15:11:05
2022-06-15T15:00:56
clefourrier
[]
null
true
1,272,281,992
https://api.github.com/repos/huggingface/datasets/issues/4500
https://github.com/huggingface/datasets/pull/4500
4,500
Add `concatenate_datasets` for iterable datasets
closed
3
2022-06-15T13:58:50
2022-06-28T21:25:39
2022-06-28T21:15:04
lhoestq
[]
`concatenate_datasets` currently only supports lists of `datasets.Dataset`, not lists of `datasets.IterableDataset` as `interleave_datasets` does. Fix https://github.com/huggingface/datasets/issues/2564. I also moved `_interleave_map_style_datasets` from combine.py to arrow_dataset.py, since the logic depends a lot on the `Dataset` object internals. And I moved `concatenate_datasets` from arrow_dataset.py to combine.py to have it next to `interleave_datasets` (though it's also kept in the arrow_dataset module for backward compatibility for now).
true
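Conceptually, concatenating iterable datasets amounts to chaining their example streams in order, unlike interleaving, which alternates between them. A stdlib-only sketch of the idea (plain iterables standing in for `IterableDataset`, not the actual `datasets` implementation):

```python
from itertools import chain

def concatenate_iterables(*iterables):
    # Yield every example from the first iterable, then the second, etc.,
    # analogous to concatenate_datasets for map-style datasets.
    yield from chain(*iterables)

ds1 = [{"text": "a"}, {"text": "b"}]
ds2 = [{"text": "c"}]
assert list(concatenate_iterables(ds1, ds2)) == [
    {"text": "a"}, {"text": "b"}, {"text": "c"}
]
```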
1,272,118,162
https://api.github.com/repos/huggingface/datasets/issues/4499
https://github.com/huggingface/datasets/pull/4499
4,499
fix ETT m1/m2 test/val dataset
closed
3
2022-06-15T11:51:02
2022-06-15T14:55:56
2022-06-15T14:45:13
kashif
[]
https://huggingface.co/datasets/ett/discussions/1
true
1,272,100,549
https://api.github.com/repos/huggingface/datasets/issues/4498
https://github.com/huggingface/datasets/issues/4498
4,498
WER and CER > 1
closed
1
2022-06-15T11:35:12
2022-06-15T16:38:05
2022-06-15T16:38:05
sadrasabouri
[ "bug" ]
## Describe the bug

It seems that in some cases in which the `prediction` is longer than the `reference` we may have a word/character error rate higher than 1, which is a bit odd. If it's a real bug, I think I can solve it with a PR changing [this](https://github.com/huggingface/datasets/blob/master/metrics/wer/wer.py#L105) line to:

```python
return min(incorrect / total, 1.0)
```

## Steps to reproduce the bug

```python
from datasets import load_metric

wer = load_metric("wer")
wer_value = wer.compute(predictions=["Hi World vka"], references=["Hello"])
print(wer_value)
```

## Expected results

```
1.0
```

## Actual results

```
3.0
```

## Environment info

- `datasets` version: 2.3.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
false
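The proposed clamp can be sketched outside the metric code as follows. Note that whether clamping is actually desirable is open to discussion, since an error rate above 1 is the standard behavior when insertions dominate; this only illustrates the one-line change from the issue:

```python
def clamped_error_rate(incorrect: int, total: int) -> float:
    # Cap at 1.0 so a prediction much longer than the reference
    # cannot yield an error rate above 100%.
    return min(incorrect / total, 1.0)

# "Hi World vka" vs "Hello": 3 word errors against 1 reference word.
assert clamped_error_rate(3, 1) == 1.0
assert clamped_error_rate(1, 2) == 0.5
```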
1,271,964,338
https://api.github.com/repos/huggingface/datasets/issues/4497
https://github.com/huggingface/datasets/pull/4497
4,497
Re-add download_manager module in utils
closed
5
2022-06-15T09:44:33
2022-06-15T10:33:28
2022-06-15T10:23:44
lhoestq
[]
https://github.com/huggingface/datasets/pull/4384 moved `datasets.utils.download_manager` to `datasets.download.download_manager`. This breaks `evaluate`, which imports `DownloadMode` from `datasets.utils.download_manager`. This PR re-adds `datasets.utils.download_manager` without circular imports. We could also show a message that says that accessing it is deprecated, but I think we can do this in a subsequent PR, and just focus on doing a patch release for now.
true
1,271,945,704
https://api.github.com/repos/huggingface/datasets/issues/4496
https://github.com/huggingface/datasets/pull/4496
4,496
Replace `assertEqual` with `assertTupleEqual` in unit tests for verbosity
closed
2
2022-06-15T09:29:16
2022-07-07T17:06:51
2022-07-07T16:55:48
alvarobartt
[]
As detailed in #4419 and as suggested by @mariosasko, we could replace the `assertEqual` assertions with `assertTupleEqual` when the assertion is between Tuples, in order to make the tests more verbose.
true
1,271,851,025
https://api.github.com/repos/huggingface/datasets/issues/4495
https://github.com/huggingface/datasets/pull/4495
4,495
Fix patching module that doesn't exist
closed
1
2022-06-15T08:17:50
2022-06-15T16:40:49
2022-06-15T08:54:09
lhoestq
[]
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true

When trying to patch `scipy.io.loadmat`:

```python
ModuleNotFoundError: No module named 'scipy'
```

Instead, it shouldn't raise an error; it should just do nothing. Bug introduced by #4375. Fix https://github.com/huggingface/datasets/issues/4494
true
1,271,850,599
https://api.github.com/repos/huggingface/datasets/issues/4494
https://github.com/huggingface/datasets/issues/4494
4,494
Patching fails for modules that are not installed or don't exist
closed
0
2022-06-15T08:17:29
2022-06-15T08:54:09
2022-06-15T08:54:09
lhoestq
[]
Reported in https://github.com/huggingface/huggingface_hub/runs/6894703718?check_suite_focus=true

When trying to patch `scipy.io.loadmat`:

```python
ModuleNotFoundError: No module named 'scipy'
```

Instead, it shouldn't raise an error; it should just do nothing. We use patching to extend such functions to support remote URLs and work in streaming mode.
false
1,271,306,385
https://api.github.com/repos/huggingface/datasets/issues/4493
https://github.com/huggingface/datasets/pull/4493
4,493
Add `@transmit_format` in `flatten`
closed
4
2022-06-14T20:09:09
2022-09-27T11:37:25
2022-09-27T10:48:54
alvarobartt
[]
As suggested by @mariosasko in https://github.com/huggingface/datasets/pull/4411, we should include the `@transmit_format` decorator to `flatten`, `rename_column`, and `rename_columns` so as to ensure that the value of `_format_columns` in an `ArrowDataset` is properly updated. **Edit**: according to @mariosasko comment below, the decorator `@transmit_format` doesn't handle column renaming, so it's done manually for those instead.
true
1,271,112,497
https://api.github.com/repos/huggingface/datasets/issues/4492
https://github.com/huggingface/datasets/pull/4492
4,492
Pin the revision in imagenet download links
closed
1
2022-06-14T17:15:17
2022-06-14T17:35:13
2022-06-14T17:25:45
lhoestq
[]
Use the commit sha in the data files URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example we may split it into many more shards for better parallelism. cc @mariosasko
true
1,270,803,822
https://api.github.com/repos/huggingface/datasets/issues/4491
https://github.com/huggingface/datasets/issues/4491
4,491
Dataset Viewer issue for Pavithree/test
closed
1
2022-06-14T13:23:10
2022-06-14T14:37:21
2022-06-14T14:34:33
Pavithree
[ "dataset-viewer" ]
### Link

https://huggingface.co/datasets/Pavithree/test

### Description

I have extracted a subset of the original eli5 dataset found on Hugging Face. However, while loading the dataset, it throws an "ArrowNotImplementedError: Unsupported cast from string to null using function cast_null" error. Is there anything missing from my end? Kindly help.

### Owner

_No response_
false
1,270,719,074
https://api.github.com/repos/huggingface/datasets/issues/4490
https://github.com/huggingface/datasets/issues/4490
4,490
Use `torch.nested_tensor` for arrays of varying length in torch formatter
open
2
2022-06-14T12:19:40
2023-07-07T13:02:58
null
mariosasko
[ "enhancement" ]
Use `torch.nested_tensor` for arrays of varying length in `TorchFormatter`. The PyTorch API of nested tensors is in the prototype stage, so wait for it to become more mature.
false
1,270,706,195
https://api.github.com/repos/huggingface/datasets/issues/4489
https://github.com/huggingface/datasets/pull/4489
4,489
Add SV-Ident dataset
closed
5
2022-06-14T12:09:00
2022-06-20T08:48:26
2022-06-20T08:37:27
e-tornike
[]
null
true
1,270,613,857
https://api.github.com/repos/huggingface/datasets/issues/4488
https://github.com/huggingface/datasets/pull/4488
4,488
Update PASS dataset version
closed
1
2022-06-14T10:47:14
2022-06-14T16:41:55
2022-06-14T16:32:28
mariosasko
[]
Update the PASS dataset to version v3 (the newest one) from the [version history](https://github.com/yukimasano/PASS/blob/main/version_history.txt). PS: The older versions are not exposed as configs in the script because v1 was removed from Zenodo, and the same thing will probably happen to v2.
true
1,270,525,163
https://api.github.com/repos/huggingface/datasets/issues/4487
https://github.com/huggingface/datasets/pull/4487
4,487
Support streaming UDHR dataset
closed
1
2022-06-14T09:33:33
2022-06-15T05:09:22
2022-06-15T04:59:49
albertvillanova
[]
This PR: - Adds support for streaming UDHR dataset - Adds the BCP 47 language code as feature
true
1,269,518,084
https://api.github.com/repos/huggingface/datasets/issues/4486
https://github.com/huggingface/datasets/pull/4486
4,486
Add CCAgT dataset
closed
8
2022-06-13T14:20:19
2022-07-04T14:37:03
2022-07-04T14:25:45
johnnv1
[]
As described in #4075, I could not generate the dummy data. Also, the data repository doesn't provide the split IDs, but I copied the functions that produce the correct data split. In summary, to have a better distribution, the data in this dataset should be separated based on the amount of NORs in each image.
true
1,269,463,054
https://api.github.com/repos/huggingface/datasets/issues/4485
https://github.com/huggingface/datasets/pull/4485
4,485
Fix cast to null
closed
1
2022-06-13T13:44:32
2022-06-14T13:43:54
2022-06-14T13:34:14
lhoestq
[]
It currently fails with `ArrowNotImplementedError` instead of `TypeError` when one tries to cast integer to the null type. Because of this, type inference breaks when one replaces null values with integers in `map` (it first tries to cast to the previous type before inferring the new type). Fix https://github.com/huggingface/datasets/issues/4483
true
1,269,383,811
https://api.github.com/repos/huggingface/datasets/issues/4484
https://github.com/huggingface/datasets/pull/4484
4,484
Better ImportError message when a dataset script dependency is missing
closed
4
2022-06-13T12:44:37
2022-07-08T14:30:44
2022-06-13T13:50:47
lhoestq
[]
When a dependency is missing for a dataset script, an ImportError message is shown, with a tip to install the missing dependencies. This message is not ideal at the moment: it may show duplicate dependencies, and is not very readable. I improved it from

```
ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance'
```

to

```
ImportError: To be able to use bigbench, you need to install the following dependency: bigbench. Please install it using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' for instance'
```
true
1,269,253,840
https://api.github.com/repos/huggingface/datasets/issues/4483
https://github.com/huggingface/datasets/issues/4483
4,483
Dataset.map throws pyarrow.lib.ArrowNotImplementedError when converting from list of empty lists
closed
1
2022-06-13T10:47:52
2022-06-14T13:34:14
2022-06-14T13:34:14
sanderland
[ "bug" ]
## Describe the bug

`Dataset.map` throws `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null` when converting from a type of 'empty lists' to 'lists with some type'. This appears to be due to the interaction of arrow internals and some assumptions made by datasets. The bug appeared when binarizing some labels, and then adding a dataset which had all these labels absent (to force the model to not label empty strings with anything). Particularly, the fact that this only happens in batched mode is strange.

## Steps to reproduce the bug

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict(
    {
        "text": ["the lazy dog jumps over the quick fox", "another sentence"],
        "label": [[], []],
    }
)

def mapper(features):
    features['label'] = [
        [0, 0, 0]
        for l in features['label']
    ]
    return features

ds_mapped = ds.map(mapper, batched=True)
```

## Expected results

Not crashing

## Actual results

```
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2346: in map
    return self._map_single(
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:532: in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:499: in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/fingerprint.py:458: in wrapper
    out = func(self, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/arrow_dataset.py:2751: in _map_single
    writer.write_batch(batch)
../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:503: in write_batch
    arrays.append(pa.array(typed_sequence))
pyarrow/array.pxi:230: in pyarrow.lib.array
    ???
pyarrow/array.pxi:110: in pyarrow.lib._handle_arrow_array_protocol
    ???
../.venv/lib/python3.8/site-packages/datasets/arrow_writer.py:198: in __arrow_array__
    out = cast_array_to_feature(out, type, allow_number_to_str=not self.trying_type)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
    return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1812: in cast_array_to_feature
    casted_values = _c(array.values, feature.feature)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
    return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1843: in cast_array_to_feature
    return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
../.venv/lib/python3.8/site-packages/datasets/table.py:1675: in wrapper
    return func(array, *args, **kwargs)
../.venv/lib/python3.8/site-packages/datasets/table.py:1752: in array_cast
    return array.cast(pa_type)
pyarrow/array.pxi:915: in pyarrow.lib.Array.cast
    ???
../.venv/lib/python3.8/site-packages/pyarrow/compute.py:376: in cast
    return call_function("cast", [arr], options)
pyarrow/_compute.pyx:542: in pyarrow._compute.call_function
    ???
pyarrow/_compute.pyx:341: in pyarrow._compute.Function.call
    ???
pyarrow/error.pxi:144: in pyarrow.lib.pyarrow_internal_check_status
    ???
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
>   ???
E   pyarrow.lib.ArrowNotImplementedError: Unsupported cast from int64 to null using function cast_null

pyarrow/error.pxi:121: ArrowNotImplementedError
```

## Workarounds

* Not using batched=True
* Using an np.array([], dtype=float) or similar instead of [] in the input
* Naming the output column differently from the input column

## Environment info

- `datasets` version: 2.2.2
- Platform: Ubuntu
- Python version: 3.8
- PyArrow version: 8.0.0
false
1,269,237,447
https://api.github.com/repos/huggingface/datasets/issues/4482
https://github.com/huggingface/datasets/pull/4482
4,482
Test that TensorFlow is not imported on startup
closed
3
2022-06-13T10:33:49
2023-10-12T06:31:39
2023-10-11T09:11:56
lhoestq
[]
TF takes some time to be imported, and also uses some GPU memory. I just added a test to make sure that in the future it's never imported by default when ```python import datasets ``` is called. Right now this fails because `huggingface_hub` does import tensorflow (though this is fixed now on their `main` branch). I'll mark this PR as ready for review once `huggingface_hub` has a new release.
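Such a startup check can be sketched as follows; this is illustrative only, not the actual test added in this PR. The idea is to spawn a fresh interpreter, import the package, and assert the heavy dependency never landed in `sys.modules`. Here `json` stands in for `datasets` so the sketch runs anywhere.

```python
# Hedged sketch of a "not imported on startup" test (illustrative; the real
# test in this PR differs). `json` is a stand-in for `datasets`.
import subprocess
import sys

probe = "import json, sys; assert 'tensorflow' not in sys.modules"
result = subprocess.run([sys.executable, "-c", probe])
print(result.returncode)  # 0 means the assertion held in the fresh interpreter
```

A fresh subprocess is essential here: in a long-running test session, some earlier test may already have imported TensorFlow, which would mask the regression.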
true
1,269,187,792
https://api.github.com/repos/huggingface/datasets/issues/4481
https://github.com/huggingface/datasets/pull/4481
4,481
Fix iwslt2017
closed
4
2022-06-13T09:51:21
2022-10-26T09:09:31
2022-06-13T10:40:18
lhoestq
[]
The files were moved to Google Drive, so I hosted them on the Hub instead (OK according to the license). I also updated the `dataset_infos.json`.
true
1,268,921,567
https://api.github.com/repos/huggingface/datasets/issues/4480
https://github.com/huggingface/datasets/issues/4480
4,480
Bigbench tensorflow GPU dependency
closed
3
2022-06-13T05:24:06
2022-06-14T19:45:24
2022-06-14T19:45:23
cceyda
[ "bug" ]
## Describe the bug Loading bigbench ```py from datasets import load_dataset dataset = load_dataset("bigbench","swedish_to_german_proverbs") ``` tries to use the GPU and fails with OOM with the following error ``` Downloading and preparing dataset bigbench/swedish_to_german_proverbs (download: Unknown size, generated: 68.92 KiB, post-processed: Unknown size, total: 68.92 KiB) to /home/ceyda/.cache/huggingface/datasets/bigbench/swedish_to_german_proverbs/1.0.0/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0... Generating default split: 0%| | 0/72 [00:00<?, ? examples/s]2022-06-13 14:11:04.154469: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-06-13 14:11:05.133600: F tensorflow/core/platform/statusor.cc:33] Attempting to fetch value instead of handling error INTERNAL: failed initializing StreamExecutor for CUDA device ordinal 3: INTERNAL: failed call to cuDevicePrimaryCtxRetain: CUDA_ERROR_OUT_OF_MEMORY: out of memory; total memory reported: 25396838400 Aborted (core dumped) ``` I think this is because the bigbench dependency (below) installs tensorflow (GPU version) and data loading tries to use the GPU by default. 
`pip install bigbench@https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz` while just doing 'pip install bigbench' results in following error ``` File "/home/ceyda/.local/lib/python3.7/site-packages/datasets/load.py", line 109, in import_main_class module = importlib.import_module(module_path) File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1006, in _gcd_import File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 677, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 728, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 118, in <module> class Bigbench(datasets.GeneratorBasedBuilder): File "/home/ceyda/.cache/huggingface/modules/datasets_modules/datasets/bigbench/7d2f6e537fa937dfaac8b1c1df782f2055071d3fd8e4f4ae93d28012a354ced0/bigbench.py", line 127, in Bigbench BigBenchConfig(name=name, version=datasets.Version("1.0.0")) for name in bb_utils.get_all_json_task_names() AttributeError: module 'bigbench.api.util' has no attribute 'get_all_json_task_names' ``` ## Steps to avoid the bug Not ideal but can solve with (since I don't really use tensorflow elsewhere) `pip uninstall tensorflow` `pip install tensorflow-cpu` ## Environment info - datasets @ master - Python version: 3.7
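A further workaround, not mentioned in the report but standard TensorFlow practice, is to hide the GPUs from TF via an environment variable set before TensorFlow is first imported; this avoids swapping packages entirely.

```python
# Standard TensorFlow practice (an alternative to reinstalling tensorflow-cpu):
# hide all CUDA devices before TensorFlow initialises, so dataset loading
# scripts that import TF cannot grab GPU memory.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # must run before `import tensorflow`

# from datasets import load_dataset
# dataset = load_dataset("bigbench", "swedish_to_german_proverbs")  # CPU-only now
```

The ordering matters: TensorFlow reads `CUDA_VISIBLE_DEVICES` when it initialises CUDA, so setting it after the first `import tensorflow` has no effect.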
false
1,268,558,237
https://api.github.com/repos/huggingface/datasets/issues/4479
https://github.com/huggingface/datasets/pull/4479
4,479
Include entity positions as feature in ReCoRD
closed
3
2022-06-12T11:56:28
2022-08-19T23:23:02
2022-08-19T13:23:48
richarddwang
[]
https://huggingface.co/datasets/super_glue/viewer/record/validation TLDR: We need to record entity positions, which are included in the source data but excluded by the loading script, to enable efficient and effective training for ReCoRD. Currently, the loading script ignores the entity positions ("entity_start", "entity_end") and only records entity text. This might be because the training method of the official baseline is to make n training instances from a data point by replacing "@placeholder" in the query with each entity individually. But it increases the already heavy computation severalfold. So DeBERTa uses a method that takes entity embeddings by their positions in the passage, and thus makes one training instance from one data point. It is way more efficient and proved effective for the ReCoRD task. Can anybody help me with the dataset card rendering error? Maybe @lhoestq?
true
1,268,358,213
https://api.github.com/repos/huggingface/datasets/issues/4478
https://github.com/huggingface/datasets/issues/4478
4,478
Dataset slow during model training
open
5
2022-06-11T19:40:19
2022-06-14T12:04:31
null
lehrig
[ "bug" ]
## Describe the bug While migrating towards 🤗 Datasets, I encountered an odd performance degradation: training suddenly slows down dramatically. I train with an image dataset using Keras and execute a `to_tf_dataset` just before training. First, I have optimized my dataset following https://discuss.huggingface.co/t/solved-image-dataset-seems-slow-for-larger-image-size/10960/6, which actually improved the situation from what I had before but did not completely solve it. Second, I saved and loaded my dataset using `tf.data.experimental.save` and `tf.data.experimental.load` before training (for which I would have expected no performance change). However, I ended up with the performance I had before tinkering with 🤗 Datasets. Any idea what's the reason for this and how to speed-up training with 🤗 Datasets? ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset import os dataset_dir = "./dataset" prep_dataset_dir = "./prepdataset" model_dir = "./model" # Load Data dataset = load_dataset("Lehrig/Monkey-Species-Collection", "downsized") def read_image_file(example): with open(example["image"].filename, "rb") as f: example["image"] = {"bytes": f.read()} return example dataset = dataset.map(read_image_file) dataset.save_to_disk(dataset_dir) # Preprocess from datasets import ( Array3D, DatasetDict, Features, load_from_disk, Sequence, Value ) import numpy as np from transformers import ImageFeatureExtractionMixin dataset = load_from_disk(dataset_dir) num_classes = dataset["train"].features["label"].num_classes one_hot_matrix = np.eye(num_classes) feature_extractor = ImageFeatureExtractionMixin() def to_pixels(image): image = feature_extractor.resize(image, size=size) image = feature_extractor.to_numpy_array(image, channel_first=False) image = image / 255.0 return image def process(examples): examples["pixel_values"] = [ to_pixels(image) for image in examples["image"] ] examples["label"] = [ one_hot_matrix[label] for label 
in examples["label"] ] return examples features = Features({ "pixel_values": Array3D(dtype="float32", shape=(size, size, 3)), "label": Sequence(feature=Value(dtype="int32"), length=num_classes) }) prep_dataset = dataset.map( process, remove_columns=["image"], batched=True, batch_size=batch_size, num_proc=2, features=features, ) prep_dataset = prep_dataset.with_format("numpy") # Split train_dev_dataset = prep_dataset['test'].train_test_split( test_size=test_size, shuffle=True, seed=seed ) train_dev_test_dataset = DatasetDict({ 'train': train_dev_dataset['train'], 'dev': train_dev_dataset['test'], 'test': prep_dataset['test'], }) train_dev_test_dataset.save_to_disk(prep_dataset_dir) # Train Model import datetime import tensorflow as tf from tensorflow.keras import Sequential from tensorflow.keras.applications import InceptionV3 from tensorflow.keras.layers import Dense, Dropout, GlobalAveragePooling2D, BatchNormalization from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping from transformers import DefaultDataCollator dataset = load_from_disk(prep_dataset_dir) data_collator = DefaultDataCollator(return_tensors="tf") train_dataset = dataset["train"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=True, batch_size=batch_size, collate_fn=data_collator ) validation_dataset = dataset["dev"].to_tf_dataset( columns=['pixel_values'], label_cols=['label'], shuffle=False, batch_size=batch_size, collate_fn=data_collator ) print(f'{datetime.datetime.now()} - Saving Data') tf.data.experimental.save(train_dataset, model_dir+"/train") tf.data.experimental.save(validation_dataset, model_dir+"/val") print(f'{datetime.datetime.now()} - Loading Data') train_dataset = tf.data.experimental.load(model_dir+"/train") validation_dataset = tf.data.experimental.load(model_dir+"/val") shape = np.shape(dataset["train"][0]["pixel_values"]) backbone = InceptionV3( include_top=False, weights='imagenet', input_shape=shape ) for layer in 
backbone.layers: layer.trainable = False model = Sequential() model.add(backbone) model.add(GlobalAveragePooling2D()) model.add(Dense(128, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(64, activation='relu')) model.add(BatchNormalization()) model.add(Dropout(0.3)) model.add(Dense(10, activation='softmax')) model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'] ) print(model.summary()) earlyStopping = EarlyStopping( monitor='val_loss', patience=10, verbose=0, mode='min' ) mcp_save = ModelCheckpoint( f'{model_dir}/best_model.hdf5', save_best_only=True, monitor='val_loss', mode='min' ) reduce_lr_loss = ReduceLROnPlateau( monitor='val_loss', factor=0.1, patience=7, verbose=1, min_delta=0.0001, mode='min' ) hist = model.fit( train_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=[earlyStopping, mcp_save, reduce_lr_loss] ) ``` ## Expected results Same performance when training without my "save/load hack" or a good explanation/recommendation about the issue. ## Actual results Performance slower without my "save/load hack". 
**Epoch Breakdown (without my "save/load hack"):** - Epoch 1/10 41s 2s/step - loss: 1.6302 - accuracy: 0.5048 - val_loss: 1.4713 - val_accuracy: 0.3273 - lr: 0.0010 - Epoch 2/10 32s 2s/step - loss: 0.5357 - accuracy: 0.8510 - val_loss: 1.0447 - val_accuracy: 0.5818 - lr: 0.0010 - Epoch 3/10 36s 3s/step - loss: 0.3547 - accuracy: 0.9231 - val_loss: 0.6245 - val_accuracy: 0.7091 - lr: 0.0010 - Epoch 4/10 36s 3s/step - loss: 0.2721 - accuracy: 0.9231 - val_loss: 0.3395 - val_accuracy: 0.9091 - lr: 0.0010 - Epoch 5/10 32s 2s/step - loss: 0.1676 - accuracy: 0.9856 - val_loss: 0.2187 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 6/10 42s 3s/step - loss: 0.2066 - accuracy: 0.9615 - val_loss: 0.1635 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 7/10 32s 2s/step - loss: 0.1814 - accuracy: 0.9423 - val_loss: 0.1418 - val_accuracy: 0.9636 - lr: 0.0010 - Epoch 8/10 32s 2s/step - loss: 0.1301 - accuracy: 0.9856 - val_loss: 0.1388 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 9/10 loss: 0.1102 - accuracy: 0.9856 - val_loss: 0.1185 - val_accuracy: 0.9818 - lr: 0.0010 - Epoch 10/10 32s 2s/step - loss: 0.1013 - accuracy: 0.9808 - val_loss: 0.0978 - val_accuracy: 0.9818 - lr: 0.0010 **Epoch Breakdown (with my "save/load hack"):** - Epoch 1/10 13s 625ms/step - loss: 3.0478 - accuracy: 0.1146 - val_loss: 2.3061 - val_accuracy: 0.0727 - lr: 0.0010 - Epoch 2/10 0s 80ms/step - loss: 2.3105 - accuracy: 0.2656 - val_loss: 2.3085 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 3/10 0s 77ms/step - loss: 1.8608 - accuracy: 0.3542 - val_loss: 2.3130 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 4/10 1s 98ms/step - loss: 1.8677 - accuracy: 0.3750 - val_loss: 2.3157 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 5/10 1s 204ms/step - loss: 1.5561 - accuracy: 0.4583 - val_loss: 2.3049 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 6/10 1s 210ms/step - loss: 1.4657 - accuracy: 0.4896 - val_loss: 2.2944 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 7/10 1s 205ms/step - loss: 1.4018 - accuracy: 0.5312 - val_loss: 2.2917 
- val_accuracy: 0.0909 - lr: 0.0010 - Epoch 8/10 1s 207ms/step - loss: 1.2370 - accuracy: 0.5729 - val_loss: 2.2814 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 9/10 1s 214ms/step - loss: 1.1190 - accuracy: 0.6250 - val_loss: 2.2733 - val_accuracy: 0.0909 - lr: 0.0010 - Epoch 10/10 1s 207ms/step - loss: 1.1484 - accuracy: 0.6302 - val_loss: 2.2624 - val_accuracy: 0.0909 - lr: 0.0010 ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-4.18.0-305.45.1.el8_4.ppc64le-ppc64le-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 7.0.0 - Pandas version: 1.4.2 - TensorFlow: 2.8.0 - GPU (used during training): Tesla V100-SXM2-32GB
false
1,268,308,986
https://api.github.com/repos/huggingface/datasets/issues/4477
https://github.com/huggingface/datasets/issues/4477
4,477
Dataset Viewer issue for fgrezes/WIESP2022-NER
closed
2
2022-06-11T15:49:17
2022-07-18T13:07:33
2022-07-18T13:07:33
AshTayade
[]
### Link _No response_ ### Description _No response_ ### Owner _No response_
false
1,267,987,499
https://api.github.com/repos/huggingface/datasets/issues/4476
https://github.com/huggingface/datasets/issues/4476
4,476
`to_pandas` doesn't take into account format.
closed
4
2022-06-10T20:25:31
2022-06-15T17:41:41
2022-06-15T17:41:41
Dref360
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I have a large dataset that I need to convert part of to pandas to do some further analysis. Calling `to_pandas` directly on it is expensive. So I thought I could simply select the columns that I want and then call `to_pandas`. **Describe the solution you'd like** ```python from datasets import Dataset ds = Dataset.from_dict({'a': [1,2,3], 'b': [5,6,7], 'c': [8,9,10]}) pandas_df = ds.with_format(columns=['a', 'b']).to_pandas() # I would expect `pandas_df` to only include a,b as column. ``` **Describe alternatives you've considered** I could remove all columns that I don't want? But I don't know all of them in advance. **Additional context** I can probably make a PR with some pointers.
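The considered workaround can be sketched in a couple of lines: you only need to know the wanted columns, not the unwanted ones, because the full column list is available at runtime (e.g. via `ds.column_names` in `datasets`).

```python
# Hedged sketch of the considered workaround: derive the columns to drop
# from the full column list. `all_columns` stands in for ds.column_names.
all_columns = ["a", "b", "c"]
wanted = ["a", "b"]

to_remove = [col for col in all_columns if col not in wanted]
print(to_remove)  # ['c'] -> pass to ds.remove_columns(to_remove) before to_pandas()
```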
false
1,267,798,451
https://api.github.com/repos/huggingface/datasets/issues/4475
https://github.com/huggingface/datasets/pull/4475
4,475
Improve error message for missing packages from inside dataset script
closed
3
2022-06-10T16:59:36
2022-10-06T13:46:26
2022-06-13T13:16:43
mariosasko
[]
Improve the error message for missing packages from inside a dataset script: With this change, the error message for missing packages for `bigbench` looks as follows: ``` ImportError: To be able to use bigbench, you need to install the following dependencies: - 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"' ``` And this is how it looked before: ``` ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance' ```
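The gist of the change can be sketched as follows; the helper and its name are illustrative, not the actual `datasets` internals. The old message interpolated the raw (duplicated) dependency list, while the new one dedupes and renders one bullet per dependency.

```python
# Illustrative sketch of the message formatting (hypothetical helper, not
# the real datasets function): dedupe packages, one bullet per dependency.
def format_import_error(dataset_name, deps):
    seen = dict.fromkeys(deps)  # dedupes (name, spec) pairs, preserving order
    bullets = "\n".join(
        f"- '{name}' using 'pip install {spec}'" for name, spec in seen
    )
    return (
        f"To be able to use {dataset_name}, you need to install "
        f"the following dependencies:\n{bullets}"
    )

url = "https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"
# bigbench appears 4 times in the raw dependency list, as in the old message
msg = format_import_error("bigbench", [("bigbench", f'"bigbench @ {url}"')] * 4)
print(msg)
```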
true
1,267,767,541
https://api.github.com/repos/huggingface/datasets/issues/4474
https://github.com/huggingface/datasets/pull/4474
4,474
[Docs] How to use with PyTorch page
closed
1
2022-06-10T16:25:49
2022-06-14T14:40:32
2022-06-14T14:04:33
lhoestq
[ "documentation" ]
Currently the docs about PyTorch are scattered around different pages, and we were missing a place to explain more in depth how to use and optimize a dataset for PyTorch. This PR is related to #4457 which is the TF counterpart :) cc @Rocketknight1 we can try to align both documentations contents now I think cc @stevhliu let me know what you think !
true
1,267,555,994
https://api.github.com/repos/huggingface/datasets/issues/4473
https://github.com/huggingface/datasets/pull/4473
4,473
Add SST-2 dataset
closed
5
2022-06-10T13:37:26
2022-06-13T14:11:34
2022-06-13T14:01:09
albertvillanova
[]
Add SST-2 dataset. Currently it is part of GLUE benchmark. This PR adds it as a standalone dataset. CC: @julien-c
true
1,267,488,523
https://api.github.com/repos/huggingface/datasets/issues/4472
https://github.com/huggingface/datasets/pull/4472
4,472
Fix 401 error for unauthenticated requests to non-existing repos
closed
1
2022-06-10T12:38:11
2022-06-10T13:05:11
2022-06-10T12:55:57
lhoestq
[]
The Hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos. This PR adds support for the 401 error and fixes the CI failures on `master`
true
1,267,475,268
https://api.github.com/repos/huggingface/datasets/issues/4471
https://github.com/huggingface/datasets/issues/4471
4,471
CI error with repo lhoestq/_dummy
closed
1
2022-06-10T12:26:06
2022-06-10T13:24:53
2022-06-10T13:24:53
albertvillanova
[ "bug" ]
## Describe the bug CI is failing because of repo "lhoestq/_dummy". See: https://app.circleci.com/pipelines/github/huggingface/datasets/12461/workflows/1b040b45-9578-4ab9-8c44-c643c4eb8691/jobs/74269 ``` requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/api/datasets/lhoestq/_dummy?full=true ``` The repo seems to no longer exist: https://huggingface.co/api/datasets/lhoestq/_dummy ``` error: "Repository not found" ``` CC: @lhoestq
false
1,267,470,051
https://api.github.com/repos/huggingface/datasets/issues/4470
https://github.com/huggingface/datasets/pull/4470
4,470
Reorder returned validation/test splits in script template
closed
1
2022-06-10T12:21:13
2022-06-10T18:04:10
2022-06-10T17:54:50
albertvillanova
[]
null
true
1,267,213,849
https://api.github.com/repos/huggingface/datasets/issues/4469
https://github.com/huggingface/datasets/pull/4469
4,469
Replace data URLs in wider_face dataset once hosted on the Hub
closed
1
2022-06-10T08:13:25
2022-06-10T16:42:08
2022-06-10T16:32:46
albertvillanova
[]
This PR replaces the URLs of data files in Google Drive with our Hub ones, once the data owners have approved hosting their data on the Hub. They also informed us that their dataset is licensed under CC BY-NC-ND.
true
1,266,715,742
https://api.github.com/repos/huggingface/datasets/issues/4468
https://github.com/huggingface/datasets/pull/4468
4,468
Generalize tutorials for audio and vision
closed
1
2022-06-09T22:00:44
2022-06-14T16:22:02
2022-06-14T16:12:00
stevhliu
[ "documentation" ]
This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset. Other changes include: - Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder. - Separate the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library. - Renamed some tutorials in the TOC to be more clear and specific. - Added more text to nudge users towards joining the community and asking questions on the forums. - If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
true
1,266,218,358
https://api.github.com/repos/huggingface/datasets/issues/4467
https://github.com/huggingface/datasets/issues/4467
4,467
Transcript string 'null' converted to [None] by load_dataset()
closed
3
2022-06-09T14:26:00
2023-07-04T02:18:39
2022-06-09T16:29:02
mbarnig
[ "bug" ]
## Issue I am training a Luxembourgish speech-recognition model in Colab with a custom dataset, including a dictionary of Luxembourgish words, for example the spoken numbers 0 to 9. When preparing the dataset with the script `ds_train1 = mydataset.map(prepare_dataset)` the following error was issued: ``` ValueError Traceback (most recent call last) <ipython-input-69-1e8f2b37f5bc> in <module>() ----> 1 ds_train = mydataset_train.map(prepare_dataset) 11 frames /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2450 if not _is_valid_text_input(text): 2451 raise ValueError( -> 2452 "text input must of type str (single example), List[str] (batch or single pretokenized example) " 2453 "or List[List[str]] (batch of pretokenized examples)." 2454 ) ValueError: text input must of type str (single example), List[str] (batch or single pretokenized example) or List[List[str]] (batch of pretokenized examples). ``` Debugging this problem was not easy: all transcriptions in the dataset are correct strings. Finally I discovered that the transcription string 'null' is interpreted as [None] by the `load_dataset()` script. By deleting this row in the dataset the training worked fine. ## Expected result: transcription 'null' interpreted as 'str' instead of 'None'. ## Reproduction Here is the code to reproduce the error with a one-row dataset. 
``` with open("null-test.csv") as f: reader = csv.reader(f) for row in reader: print(row) ``` ['wav_filename', 'wav_filesize', 'transcript'] ['wavs/female/NULL1.wav', '17530', 'null'] ``` dataset = load_dataset('csv', data_files={'train': 'null-test.csv'}) ``` Using custom data configuration default-81ac0c0e27af3514 Downloading and preparing dataset csv/default to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519... Downloading data files: 100% 1/1 [00:00<00:00, 29.55it/s] Extracting data files: 100% 1/1 [00:00<00:00, 23.66it/s] Dataset csv downloaded and prepared to /root/.cache/huggingface/datasets/csv/default-81ac0c0e27af3514/0.0.0/433e0ccc46f9880962cc2b12065189766fbb2bee57a221866138fb9203c83519. Subsequent calls will reuse this data. 100% 1/1 [00:00<00:00, 25.84it/s] ``` print(dataset['train']['transcript']) ``` [None] ## Environment info ``` !pip install datasets==2.2.2 !pip install transformers==4.19.2 ```
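The likely mechanism (an assumption, not confirmed in the thread) is that the csv builder parses files with pandas, and pandas treats the bare token `null` as a missing value by default; `keep_default_na=False` disables that behaviour.

```python
# Likely root cause, reproduced with pandas alone (an assumption about the
# csv builder's internals): "null" is in pandas' default NA-token list.
import io
import pandas as pd

csv_text = (
    "wav_filename,wav_filesize,transcript\n"
    "wavs/female/NULL1.wav,17530,null\n"
)

default = pd.read_csv(io.StringIO(csv_text))
print(default["transcript"].tolist())  # [nan] - 'null' parsed as missing

fixed = pd.read_csv(io.StringIO(csv_text), keep_default_na=False)
print(fixed["transcript"].tolist())    # ['null'] - kept as a string
```

If the csv loader forwards pandas reader keyword arguments (which would need checking), passing `keep_default_na=False` to `load_dataset('csv', ...)` should keep the transcript as a string.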
false
1,266,159,920
https://api.github.com/repos/huggingface/datasets/issues/4466
https://github.com/huggingface/datasets/pull/4466
4,466
Optimize contiguous shard and select
closed
3
2022-06-09T13:45:39
2022-06-14T16:04:30
2022-06-14T15:54:45
lhoestq
[]
Currently `.shard()` and `.select()` always create an indices mapping. However if the requested data are contiguous, it's much more optimized to simply slice the Arrow table instead of building an indices mapping. In particular: - the shard/select operation will be much faster - reading speed will be much faster in the resulting dataset, since it won't have to do a lookup step in the indices mapping Since `.shard()` is also used for `.map()` with `num_proc>1`, it will also significantly improve the reading speed of multiprocessed `.map()` operations Here is an example of speed-up: ```python >>> import io >>> import numpy as np >>> from datasets import Dataset >>> ds = Dataset.from_dict({"a": np.random.rand(10_000_000)}) >>> shard = ds.shard(num_shards=4, index=0, contiguous=True) # this calls `.select(range(2_500_000))` >>> buf = io.BytesIO() >>> %time shard.to_json(buf) Creating json from Arrow format: 100%|██████████████████| 100/100 [00:00<00:00, 376.17ba/s] CPU times: user 258 ms, sys: 9.06 ms, total: 267 ms Wall time: 266 ms ``` while previously it was ```python Creating json from Arrow format: 100%|███████████████████| 100/100 [00:03<00:00, 29.41ba/s] CPU times: user 3.33 s, sys: 69.1 ms, total: 3.39 s Wall time: 3.4 s ``` In this simple case the speed-up is x10, but @sayakpaul experienced a x100 speed-up on their data when exporting to JSON. ## Implementation details I mostly improved `.select()`: it now checks if the input corresponds to a contiguous chunk of data and then it slices the main Arrow table (or the indices mapping table if it exists). To check if the input indices are contiguous it checks two possibilities: - if the input is of type `range`, it checks that start >= 0 and step = 1 - otherwise in the general case, it iterates over the indices. If all the indices are contiguous then we're good, otherwise we have to build an indices mapping. 
Having to iterate over the indices doesn't cause performance issues IMO because: - either they are contiguous, and in this case the cost of iterating over the indices is much less than the cost of creating an indices mapping - or they are not contiguous, and then iterating generally stops quickly when it encounters the first index that is not contiguous.
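The contiguity check described above can be sketched in a few lines; the helper name is illustrative, not the actual `datasets` code.

```python
# Illustrative sketch of the contiguity check described in the PR
# (hypothetical helper, not the real datasets internals).
def is_contiguous(indices):
    if isinstance(indices, range):
        return indices.start >= 0 and indices.step == 1
    previous = None
    for index in indices:
        if previous is not None and index != previous + 1:
            return False  # bails out at the first gap, as described above
        previous = index
    return True

print(is_contiguous(range(2_500_000)))  # True  -> slice the Arrow table
print(is_contiguous([3, 4, 5]))         # True  -> also a plain slice
print(is_contiguous([0, 1, 5]))         # False -> build an indices mapping
```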
true
1,265,754,479
https://api.github.com/repos/huggingface/datasets/issues/4465
https://github.com/huggingface/datasets/pull/4465
4,465
Fix bigbench config names
closed
1
2022-06-09T08:06:19
2022-06-09T14:38:36
2022-06-09T14:29:19
lhoestq
[]
Fix https://github.com/huggingface/datasets/issues/4462 in the case of bigbench
true
1,265,682,931
https://api.github.com/repos/huggingface/datasets/issues/4464
https://github.com/huggingface/datasets/pull/4464
4,464
Extend support for streaming datasets that use xml.dom.minidom.parse
closed
1
2022-06-09T06:58:25
2022-06-09T08:43:24
2022-06-09T08:34:16
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `xml.dom.minidom.parse`, by patching that function. This PR adds support for streaming datasets like "Yaxin/SemEval2015". Fix #4453.
true
1,265,093,211
https://api.github.com/repos/huggingface/datasets/issues/4463
https://github.com/huggingface/datasets/pull/4463
4,463
Use config_id to check split sizes instead of config name
closed
2
2022-06-08T17:45:24
2023-09-24T10:03:00
2022-06-09T08:06:37
lhoestq
[]
Fix https://github.com/huggingface/datasets/issues/4462
true
1,265,079,347
https://api.github.com/repos/huggingface/datasets/issues/4462
https://github.com/huggingface/datasets/issues/4462
4,462
BigBench: NonMatchingSplitsSizesError when passing a dataset configuration parameter
open
3
2022-06-08T17:31:24
2022-07-05T07:39:55
null
lhoestq
[ "bug" ]
As noticed in https://github.com/huggingface/datasets/pull/4125 when a dataset config class has a parameter that reduces the number of examples (e.g. named `max_examples`), then loading the dataset and passing `max_examples` raises `NonMatchingSplitsSizesError`. This is because it checks the expected number of examples against the config with the same name, without taking into account the `max_examples` parameter. This can be fixed by checking the expected number of examples using the **config id** instead of the name. Indeed the config id corresponds to the config name plus an optional suffix that depends on the config parameters.
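The config-id idea can be sketched as follows; this is illustrative, not the real `datasets` implementation (which derives the suffix differently).

```python
# Illustrative sketch of the config-id idea (hypothetical helper, not the
# real datasets internals): the id is the config name plus a suffix derived
# from the non-default parameters, so split-size checks keyed by id stay
# distinct from the plain named config.
import hashlib

def config_id(name, **params):
    if not params:
        return name  # no custom parameters -> id == name
    digest = hashlib.sha256(repr(sorted(params.items())).encode()).hexdigest()
    return f"{name}-{digest[:8]}"

print(config_id("simple_arithmetic"))                    # plain name
print(config_id("simple_arithmetic", max_examples=100))  # name + suffix
```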
false
1,264,800,451
https://api.github.com/repos/huggingface/datasets/issues/4461
https://github.com/huggingface/datasets/issues/4461
4,461
AttributeError: module 'datasets' has no attribute 'load_dataset'
closed
4
2022-06-08T13:59:20
2024-03-25T12:58:29
2022-06-08T14:41:00
AlexNLP
[ "bug" ]
## Describe the bug I have pip installed datasets, but this package doesn't have these attributes: load_dataset, load_metric. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.13.0-44-generic-x86_64-with-debian-bullseye-sid - Python version: 3.6.13 - PyArrow version: 6.0.1
false
1,264,644,205
https://api.github.com/repos/huggingface/datasets/issues/4460
https://github.com/huggingface/datasets/pull/4460
4,460
Drop Python 3.6 support
closed
5
2022-06-08T12:10:18
2022-07-26T19:16:39
2022-07-26T19:04:21
mariosasko
[]
Remove the fallback imports/checks in the code needed for Python 3.6 and update the requirements/CI files. Also, use Python types for the NumPy dtype wherever possible to avoid deprecation warnings in newer NumPy versions.
true
1,264,636,481
https://api.github.com/repos/huggingface/datasets/issues/4459
https://github.com/huggingface/datasets/pull/4459
4,459
Add and fix language tags for udhr dataset
closed
1
2022-06-08T12:03:42
2022-06-08T12:36:24
2022-06-08T12:27:13
albertvillanova
[]
Related to #4362.
true
1,263,531,911
https://api.github.com/repos/huggingface/datasets/issues/4457
https://github.com/huggingface/datasets/pull/4457
4,457
First draft of the docs for TF + Datasets
closed
4
2022-06-07T16:06:48
2022-06-14T16:08:41
2022-06-14T15:59:08
Rocketknight1
[ "documentation" ]
I might cc a few of the other TF people to take a look when this is closer to being finished, but it's still a draft for now.
true
1,263,241,449
https://api.github.com/repos/huggingface/datasets/issues/4456
https://github.com/huggingface/datasets/issues/4456
4,456
Workflow for Tabular data
open
8
2022-06-07T12:48:22
2023-03-06T08:53:55
null
lhoestq
[ "enhancement", "generic discussion" ]
Tabular data are treated very differently than data for NLP, audio, vision, etc. and therefore the workflow for tabular data in `datasets` is not ideal. For example for tabular data, it is common to use pandas/spark/dask to process the data, and then load the data into X and y (X is an array of features and y an array of labels), then train_test_split and finally feed the data to a machine learning model. In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split) and we end up with columnar dataset splits, not formatted as X and y. Right now, it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data: - be able to load the data into X and y - be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.) - support "unsplit" datasets explicitly, instead of putting everything in "train" by default cc @adrinjalali @merveenoyan feel free to complete/correct this :) Feel free to also share ideas of APIs that would be super intuitive in your opinion !
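For reference, the X/y loading step mentioned above typically looks like this in the pandas world the issue compares against; these are the target ergonomics, not an existing `datasets` API.

```python
# The "load into X and y" step as commonly done with pandas/NumPy
# (the workflow the issue compares against, not an existing datasets API).
import pandas as pd

df = pd.DataFrame(
    {"f1": [1, 2, 3], "f2": [4.0, 5.0, 6.0], "label": [0, 1, 0]}
)
X = df.drop(columns=["label"]).to_numpy()  # features, shape (n_rows, n_features)
y = df["label"].to_numpy()                 # labels, shape (n_rows,)
print(X.shape, y.shape)  # (3, 2) (3,)
```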
false
1,263,089,067
https://api.github.com/repos/huggingface/datasets/issues/4455
https://github.com/huggingface/datasets/pull/4455
4,455
Update data URLs in fever dataset
closed
1
2022-06-07T10:40:54
2022-06-08T07:24:54
2022-06-08T07:16:17
albertvillanova
[]
As stated on their website, the data owners updated their URLs on 28/04/2022. This PR updates the data URLs. Fix #4452.
true
1,262,674,973
https://api.github.com/repos/huggingface/datasets/issues/4454
https://github.com/huggingface/datasets/issues/4454
4,454
Dataset Viewer issue for Yaxin/SemEval2015
closed
1
2022-06-07T03:31:46
2022-06-07T11:53:11
2022-06-07T11:53:11
WithYouTo
[ "duplicate", "dataset-viewer" ]
### Link _No response_ ### Description the link could not be visited ### Owner _No response_
false
1,262,674,105
https://api.github.com/repos/huggingface/datasets/issues/4453
https://github.com/huggingface/datasets/issues/4453
4,453
Dataset Viewer issue for Yaxin/SemEval2015
closed
3
2022-06-07T03:30:08
2022-06-09T08:34:16
2022-06-09T08:34:16
WithYouTo
[]
### Link _No response_ ### Description _No response_ ### Owner _No response_
false
1,262,529,654
https://api.github.com/repos/huggingface/datasets/issues/4452
https://github.com/huggingface/datasets/issues/4452
4,452
Trying to load FEVER dataset results in NonMatchingChecksumError
closed
2
2022-06-06T23:13:15
2022-12-15T13:36:40
2022-06-08T07:16:16
santhnm2
[ "bug" ]
## Describe the bug Trying to load the `fever` dataset fails with `datasets.utils.info_utils.NonMatchingChecksumError`. I tried with `download_mode="force_redownload"` but that did not fix the error. I also tried with `ignore_verification=True` but then that raised a `json.decoder.JSONDecodeError`. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('fever', 'v1.0') # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', download_mode="force_redownload") # Fails with NonMatchingChecksumError dataset = load_dataset('fever', 'v1.0', ignore_verification=True) # Fails with JSONDecodeError ``` ## Expected results I expect this call to return with no error raised. ## Actual results With `ignore_verification=False`: ``` *** datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://s3-eu-west-1.amazonaws.com/fever.public/train.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_dev_public.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/shared_task_test.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_dev.jsonl', 'https://s3-eu-west-1.amazonaws.com/fever.public/paper_test.jsonl'] ``` With `ignore_verification=True`: ``` *** json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.2.3.dev0 - Platform: Linux-4.15.0-50-generic-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,262,103,323
https://api.github.com/repos/huggingface/datasets/issues/4451
https://github.com/huggingface/datasets/pull/4451
4,451
Use newer version of multi-news with fixes
closed
2
2022-06-06T16:57:08
2022-06-07T17:40:01
2022-06-07T17:14:44
JohnGiorgi
[]
Closes #4430.
true
1,261,878,324
https://api.github.com/repos/huggingface/datasets/issues/4450
https://github.com/huggingface/datasets/pull/4450
4,450
Update README.md of fquad
closed
1
2022-06-06T13:52:41
2022-06-06T14:51:49
2022-06-06T14:43:03
lhoestq
[]
null
true
1,261,262,326
https://api.github.com/repos/huggingface/datasets/issues/4449
https://github.com/huggingface/datasets/issues/4449
4,449
Rj
closed
0
2022-06-06T02:24:32
2022-06-06T15:44:50
2022-06-06T15:44:50
Aeckard45
[]
import android.content.DialogInterface; import android.database.Cursor; import android.os.Bundle; import android.view.View; import android.widget.ArrayAdapter; import android.widget.Button; import android.widget.EditText; import android.widget.Toast; import androidx.appcompat.app.AlertDialog; import androidx.appcompat.app.AppCompatActivity; public class MainActivity extends AppCompatActivity { private EditText editTextID; private EditText editTextName; private EditText editTextNum; private String name; private int number; private String ID; private dbHelper db; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); db = new dbHelper(this); editTextID = findViewById(R.id.editText1); editTextName = findViewById(R.id.editText2); editTextNum = findViewById(R.id.editText3); Button buttonSave = findViewById(R.id.button); Button buttonRead = findViewById(R.id.button2); Button buttonUpdate = findViewById(R.id.button3); Button buttonDelete = findViewById(R.id.button4); Button buttonSearch = findViewById(R.id.button5); Button buttonDeleteAll = findViewById(R.id.button6); buttonSave.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); if (name.isEmpty() || num.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Insert Data db.insertData(name, number); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonRead.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { final ArrayAdapter<String> adapter = new ArrayAdapter<>(MainActivity.this, android.R.layout.simple_list_item_1); String name; String num; String id; try { 
Cursor cursor = db.readData(); if (cursor != null && cursor.getCount() > 0) { while (cursor.moveToNext()) { id = cursor.getString(0); // get data in column index 0 name = cursor.getString(1); // get data in column index 1 num = cursor.getString(2); // get data in column index 2 // Add SQLite data to listView adapter.add("ID :- " + id + "\n" + "Name :- " + name + "\n" + "Number :- " + num + "\n\n"); } } else { adapter.add("No Data"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } // show the saved data in alertDialog AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setTitle("SQLite saved data"); builder.setIcon(R.mipmap.app_icon_foreground); builder.setAdapter(adapter, new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { } }); builder.setPositiveButton("OK", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonUpdate.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { name = editTextName.getText().toString(); String num = editTextNum.getText().toString(); ID = editTextID.getText().toString(); if (name.isEmpty() || num.isEmpty() || ID.isEmpty()) { Toast.makeText(MainActivity.this, "Cannot Submit Empty Fields", Toast.LENGTH_SHORT).show(); } else { number = Integer.parseInt(num); try { // Update Data db.updateData(ID, name, number); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDelete.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Delete Data db.deleteData(ID); // 
Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } catch (Exception e) { e.printStackTrace(); } } } }); buttonDeleteAll.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Delete all data // You can simply delete all the data by calling this method --> db.deleteAllData(); // You can try this also AlertDialog.Builder builder = new AlertDialog.Builder(MainActivity.this); builder.setIcon(R.mipmap.app_icon_foreground); builder.setTitle("Delete All Data"); builder.setCancelable(false); builder.setMessage("Do you really need to delete your all data ?"); builder.setPositiveButton("Yes", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // User confirmed , now you can delete the data db.deleteAllData(); // Clear the fields editTextID.getText().clear(); editTextName.getText().clear(); editTextNum.getText().clear(); } }); builder.setNegativeButton("No", new DialogInterface.OnClickListener() { @Override public void onClick(DialogInterface dialog, int which) { // user not confirmed dialog.cancel(); } }); AlertDialog dialog = builder.create(); dialog.show(); } }); buttonSearch.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { ID = editTextID.getText().toString(); if (ID.isEmpty()) { Toast.makeText(MainActivity.this, "Please enter the ID", Toast.LENGTH_SHORT).show(); } else { try { // Search data Cursor cursor = db.searchData(ID); if (cursor.moveToFirst()) { editTextName.setText(cursor.getString(1)); editTextNum.setText(cursor.getString(2)); Toast.makeText(MainActivity.this, "Data successfully searched", Toast.LENGTH_SHORT).show(); } else { Toast.makeText(MainActivity.this, "ID not found", Toast.LENGTH_SHORT).show(); editTextNum.setText("ID Not found"); editTextName.setText("ID not found"); } cursor.close(); } catch (Exception e) { e.printStackTrace(); } } } }); } }
false
1,260,966,129
https://api.github.com/repos/huggingface/datasets/issues/4448
https://github.com/huggingface/datasets/issues/4448
4,448
New Preprocessing Feature - Deduplication [Request]
open
2
2022-06-05T05:32:56
2023-12-12T07:52:40
null
yuvalkirstain
[ "duplicate", "enhancement" ]
**Is your feature request related to a problem? Please describe.** Many large datasets are full of duplicates and it has been shown that deduplicating datasets can lead to better performance while training, and more truthful evaluation at test-time. A feature that allows one to easily deduplicate a dataset can be cool! **Describe the solution you'd like** We can define a function and keep only the first/last data-point that yields the value according to this function. **Describe alternatives you've considered** The clear alternative is to repeat the same boilerplate every time someone wants to deduplicate a dataset.
false
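The keep-first behavior described in this request can be sketched in plain Python; the helper name `deduplicate` and its signature are illustrative only, not an existing `datasets` API:

```python
def deduplicate(records, key):
    """Keep only the first record that yields each value of `key`."""
    seen = set()
    kept = []
    for record in records:
        k = key(record)
        if k not in seen:  # first occurrence wins
            seen.add(k)
            kept.append(record)
    return kept


rows = [{"text": "a"}, {"text": "b"}, {"text": "a"}]
print(deduplicate(rows, key=lambda r: r["text"]))  # keeps the first "a" only
```

A `datasets`-native version could wrap the same idea in a `Dataset.filter` call that shares a `seen` set across examples.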
1,260,041,805
https://api.github.com/repos/huggingface/datasets/issues/4447
https://github.com/huggingface/datasets/pull/4447
4,447
Minor fixes/improvements in `scene_parse_150` card
closed
1
2022-06-03T15:22:34
2022-06-06T15:50:25
2022-06-06T15:41:37
mariosasko
[]
Add `paperswithcode_id` and fix some links in the `scene_parse_150` card.
true
1,260,028,995
https://api.github.com/repos/huggingface/datasets/issues/4446
https://github.com/huggingface/datasets/pull/4446
4,446
Add missing kwargs to docstrings
closed
1
2022-06-03T15:10:27
2022-06-03T16:10:09
2022-06-03T16:01:29
albertvillanova
[]
null
true
1,259,947,568
https://api.github.com/repos/huggingface/datasets/issues/4445
https://github.com/huggingface/datasets/pull/4445
4,445
Fix missing args in docstring of load_dataset_builder
closed
1
2022-06-03T13:55:50
2022-06-03T14:35:32
2022-06-03T14:27:09
albertvillanova
[]
Currently, the docstring of `load_dataset_builder` only contains the first parameter `path` (no other): - https://huggingface.co/docs/datasets/v2.2.1/en/package_reference/loading_methods#datasets.load_dataset_builder.path
true
1,259,738,209
https://api.github.com/repos/huggingface/datasets/issues/4444
https://github.com/huggingface/datasets/pull/4444
4,444
Fix kwargs in docstrings
closed
1
2022-06-03T10:29:02
2022-06-03T11:01:28
2022-06-03T10:52:46
albertvillanova
[]
To fix the rendering of `**kwargs` in docstrings, a parentheses must be added afterwards. See: - huggingface/doc-builder/issues/235
true
1,259,606,334
https://api.github.com/repos/huggingface/datasets/issues/4443
https://github.com/huggingface/datasets/issues/4443
4,443
Dataset Viewer issue for openclimatefix/nimrod-uk-1km
open
7
2022-06-03T08:17:16
2023-09-25T12:15:08
null
ZYMXIXI
[]
### Link _No response_ ### Description _No response_ ### Owner _No response_
false
1,258,589,276
https://api.github.com/repos/huggingface/datasets/issues/4442
https://github.com/huggingface/datasets/issues/4442
4,442
Dataset Viewer issue for amazon_polarity
closed
2
2022-06-02T19:18:38
2022-06-07T18:50:37
2022-06-07T18:50:37
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/amazon_polarity/viewer/amazon_polarity/test ### Description For some reason the train split is OK but the test split is not for this dataset: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/amazon_polarity/__init__.py' ``` ### Owner No
false
1,258,568,656
https://api.github.com/repos/huggingface/datasets/issues/4441
https://github.com/huggingface/datasets/issues/4441
4,441
Dataset Viewer issue for aeslc
closed
1
2022-06-02T18:57:12
2022-06-07T18:50:55
2022-06-07T18:50:55
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/aeslc ### Description The dataset viewer can't find `dataset_infos.json` in its cache: ``` Server error Status code: 400 Exception: FileNotFoundError Message: [Errno 2] No such file or directory: '/cache/modules/datasets_modules/datasets/aeslc/eb8e30234cf984a58ebe9f205674597ac1db2ec91e7321cd7f36864f7e3671b8/dataset_infos.json' ``` ### Owner No
false
1,258,494,469
https://api.github.com/repos/huggingface/datasets/issues/4440
https://github.com/huggingface/datasets/pull/4440
4,440
Update docs around audio and vision
closed
2
2022-06-02T17:42:03
2022-06-23T16:33:19
2022-06-23T16:23:02
stevhliu
[ "documentation" ]
As part of the strategy to center the docs around the different modalities, this PR updates the quickstart to include audio and vision examples. This improves the developer experience by making audio and vision content more discoverable, enabling users working in these modalities to also quickly get started without digging too deeply into the docs. Other changes include: - Moved the installation guide to the Get Started section because it should be part of a user's onboarding to the library before exploring tutorials or how-to's. - Updated the native TF code for creating a `tf.data.Dataset` because it was throwing an error. The `to_tensor()` bit was redundant and removing it fixed the error (please double-check me here!). - Added some UI components to the quickstart so it's easier for users to navigate directly to the relevant section with context about what to expect. - Reverted to the code tabs for content that doesn't have any framework-specific text. I think this saves space compared to the code blocks. We'll still use the code blocks if the `torch` text is different from the `tf` text. Let me know what you think, especially if we should include some code samples for training a model in the audio/vision sections. I left this out since we already showed it in the NLP section. I want to keep the focus on using Datasets to load and process a dataset, and not so much the training part. Maybe we can add links to the Transformers docs instead?
true
1,258,434,111
https://api.github.com/repos/huggingface/datasets/issues/4439
https://github.com/huggingface/datasets/issues/4439
4,439
TIMIT won't load after manual download: Errors about files that don't exist
closed
3
2022-06-02T16:35:56
2022-06-03T08:44:17
2022-06-03T08:44:16
drscotthawley
[ "bug" ]
## Describe the bug I get the message from HuggingFace that it must be downloaded manually. From the URL provided in the message, I got to UPenn page for manual download. (UPenn apparently want $250? for the dataset??) ...So, ok, I obtained a copy from a friend and also a smaller version from Kaggle. But in both cases the HF dataloader fails; it is looking for files that don't exist anywhere in the dataset: it is looking for files with lower-case letters like "**test*" (all the filenames in both my copies are uppercase) and certain file extensions that exclude the .DOC which is provided in TIMIT: ## Steps to reproduce the bug ```python data = load_dataset('timit_asr', 'clean')['train'] ``` ## Expected results The dataset should load with no errors. ## Actual results This error message: ``` File "/home/ubuntu/envs/data2vec/lib/python3.9/site-packages/datasets/data_files.py", line 201, in resolve_patterns_locally_or_by_urls raise FileNotFoundError(error_msg) FileNotFoundError: Unable to resolve any data file that matches '['**test*', '**eval*']' at /home/ubuntu/datasets/timit with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` But this is a strange sort of error: why is it looking for lower-case file names when all the TIMIT dataset filenames are uppercase? Why does it exclude .DOC files when the only parts of the TIMIT data set with "TEST" in them have ".DOC" extensions? ...I wonder, how was anyone able to get this to work in the first place? 
The files in the dataset look like the following: ``` ³ PHONCODE.DOC ³ PROMPTS.TXT ³ SPKRINFO.TXT ³ SPKRSENT.TXT ³ TESTSET.DOC ``` ...so why are these being excluded by the dataset loader? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-1060-aws-x86_64-with-glibc2.27 - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
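The case mismatch reported above (lower-case patterns such as `**test*` against upper-case file names like `TESTSET.DOC`) can be illustrated with a small sketch; the file list is taken from the report, and the matching code is a hypothetical illustration, not the loader's actual implementation:

```python
import fnmatch

files = ["PHONCODE.DOC", "PROMPTS.TXT", "TESTSET.DOC"]

# Strict case-sensitive matching (the behavior the report describes) finds nothing.
strict = [f for f in files if fnmatch.fnmatchcase(f, "*test*")]

# Lower-casing the file name before matching recovers the expected file.
relaxed = [f for f in files if fnmatch.fnmatchcase(f.lower(), "*test*")]

print(strict)   # []
print(relaxed)  # ['TESTSET.DOC']
```

This is the kind of case-insensitive matching the fix in the linked PRs (#4424/#4425) is aimed at.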
1,258,255,394
https://api.github.com/repos/huggingface/datasets/issues/4438
https://github.com/huggingface/datasets/pull/4438
4,438
Fix docstring of inspect_dataset
closed
1
2022-06-02T14:21:10
2022-06-02T16:40:55
2022-06-02T16:32:27
albertvillanova
[]
As pointed out by @sgugger: - huggingface/doc-builder/issues/235
true
1,258,249,582
https://api.github.com/repos/huggingface/datasets/issues/4437
https://github.com/huggingface/datasets/pull/4437
4,437
Add missing columns to `blended_skill_talk`
closed
1
2022-06-02T14:16:26
2022-06-06T15:49:56
2022-06-06T15:41:25
mariosasko
[]
Adds the missing columns to `blended_skill_talk` to align the loading logic with [ParlAI](https://github.com/facebookresearch/ParlAI/blob/main/parlai/tasks/blended_skill_talk/build.py). Fix #4426
true
1,257,758,834
https://api.github.com/repos/huggingface/datasets/issues/4436
https://github.com/huggingface/datasets/pull/4436
4,436
Fix directory names for LDC data in timit_asr dataset
closed
1
2022-06-02T06:45:04
2022-06-02T09:32:56
2022-06-02T09:24:27
albertvillanova
[]
Related to: - #4422
true
1,257,496,552
https://api.github.com/repos/huggingface/datasets/issues/4435
https://github.com/huggingface/datasets/issues/4435
4,435
Load a local cached dataset that has been modified
closed
2
2022-06-02T01:51:49
2022-06-02T23:59:26
2022-06-02T23:59:18
mihail911
[ "bug" ]
## Describe the bug I have loaded a dataset as follows: ``` d = load_dataset("emotion", split="validation") ``` Afterwards I make some modifications to the dataset via a `map` call: ``` d.map(some_update_func, cache_file_name=modified_dataset) ``` This generates a cached version of the dataset on my local system in the same directory as the original download of the data (/path/to/cache). Running an `ls` returns: ``` modified_dataset dataset_info.json emotion-test.arrow emotion-train.arrow emotion-validation.arrow ``` as expected. However, when I try to load up the modified cached dataset via a call to ``` modified = load_dataset("emotion", split="validation", data_files="/path/to/cache/modified_dataset") ``` it simply redownloads a new version of the dataset and dumps to a new cache rather than loading up the original modified dataset: ``` Using custom data configuration validation-cdbf51685638421b Downloading and preparing dataset emotion/validation to ... ``` How am I supposed to load the original modified local cache copy of the dataset? ## Environment info - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,256,207,321
https://api.github.com/repos/huggingface/datasets/issues/4434
https://github.com/huggingface/datasets/pull/4434
4,434
Fix dummy dataset generation script for handling nested types of _URLs
closed
0
2022-06-01T14:53:15
2022-06-07T12:08:28
2022-06-07T09:24:09
silverriver
[]
It seems that when users specify nested _URLs structures in their dataset script, an error will be raised when generating the dummy dataset. I think the types of all elements in `dummy_data_dict.values()` should be checked because they may have different types. Linked to issue #4428 PS: I am not sure whether my code fixes this issue in a proper way.
true
1,255,830,758
https://api.github.com/repos/huggingface/datasets/issues/4433
https://github.com/huggingface/datasets/pull/4433
4,433
Fix script fetching and local path handling in `inspect_dataset` and `inspect_metric`
closed
2
2022-06-01T12:09:56
2022-06-09T10:34:54
2022-06-09T10:26:07
mariosasko
[]
Fix #4348
true
1,255,523,720
https://api.github.com/repos/huggingface/datasets/issues/4432
https://github.com/huggingface/datasets/pull/4432
4,432
Fix builder docstring
closed
1
2022-06-01T09:45:30
2022-06-02T17:43:47
2022-06-02T17:35:15
albertvillanova
[]
Currently, the args of `DatasetBuilder` do not appear in the docs: https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/builder_classes#datasets.DatasetBuilder
true
1,254,618,948
https://api.github.com/repos/huggingface/datasets/issues/4431
https://github.com/huggingface/datasets/pull/4431
4,431
Add personaldialog datasets
closed
5
2022-06-01T01:20:40
2022-06-11T12:40:23
2022-06-11T12:31:16
silverriver
[]
It seems that all tests have passed
true
1,254,412,591
https://api.github.com/repos/huggingface/datasets/issues/4430
https://github.com/huggingface/datasets/issues/4430
4,430
Add ability to load newer, cleaner version of Multi-News
closed
6
2022-05-31T21:00:44
2022-06-07T17:14:44
2022-06-07T17:14:44
JohnGiorgi
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** The [Multi-News dataloader points to the original version of the Multi-News dataset](https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/datasets/multi_news/multi_news.py#L47), but this has [known errors in it](https://github.com/Alex-Fabbri/Multi-News/issues/11). There exists a [newer version which fixes some of these issues](https://drive.google.com/open?id=1jwBzXBVv8sfnFrlzPnSUBHEEAbpIUnFq). Unfortunately I don't think you can just replace this old URL with the new one, otherwise this could lead to issues with reproducibility. **Describe the solution you'd like** Add a new version to the Multi-News dataloader that points to the updated dataset which has fixes for some known issues. **Describe alternatives you've considered** Replace the current URL to the original version to the dataset with the URL to the version with fixes. **Additional context** Would be happy to make a PR for this, could someone maybe point me to another dataloader that has multiple versions so I can see how this is handled in `datasets`?
false
1,254,184,358
https://api.github.com/repos/huggingface/datasets/issues/4429
https://github.com/huggingface/datasets/pull/4429
4,429
Update builder docstring for deprecated/added arguments
closed
5
2022-05-31T17:37:25
2022-06-08T11:40:18
2022-06-08T11:31:45
albertvillanova
[]
This PR updates the builder docstring with deprecated/added directives for arguments name/config_name. Follow up of: - #4414 - huggingface/doc-builder#233 First merge: - #4432
true
1,254,092,818
https://api.github.com/repos/huggingface/datasets/issues/4428
https://github.com/huggingface/datasets/issues/4428
4,428
Errors when building dummy data if you use nested _URLS
closed
0
2022-05-31T16:10:57
2022-06-07T09:24:09
2022-06-07T09:24:09
silverriver
[ "bug" ]
## Describe the bug When making dummy data with the `datasets-cli dummy_data` tool, an error will be raised if you use a nested _URLS in your dataset script. Traceback (most recent call last): File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module> main() File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main service.run() File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run self._autogenerate_dummy_data( File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data dataset_builder._split_generators(dl_manager) File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators data_dir = dl_manager.download_and_extract(urls) File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract dummy_output = self.mock_download_manager.download(url_or_urls) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download return self.download_and_extract(data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract return self.create_dummy_data_dict(dummy_file, data_url) File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): TypeError: unhashable type: 'list' ## Steps to reproduce the bug You can use my dataset script implemented here: https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py ```python datasets_cli dummy_data datasets/personal_dialog --auto_generate ``` You can change 
https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54 to ``` "train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz" ``` before running the above script to avoid downloading the large training data. ## Expected results The dummy data should be generated ## Actual results An error is raised. It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 we only check if the first item of dummy_data_dict.values() is str. However, dummy_data_dict.values() may have mixed types such as [str, list, list]. A simple fix would be changing https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 to ```python if all([isinstance(value, str) for value in dummy_data_dict.values()]) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()): ``` But I don't know if this kind of change may bring any side effect since I am not sure about the detailed logic here. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Linux - Python version: Python 3.9.10 - PyArrow version: 7.0.0
false
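The unhashable-values failure described in this issue can be reproduced in isolation; the file names below are made up for the sake of the sketch:

```python
# One value is a list, so building a set of the values (the deduplication
# step in the dummy-data manager) raises TypeError.
dummy_data_dict = {"train": ["a.jsonl.gz", "b.jsonl.gz"], "dev": "c.jsonl.gz"}

try:
    set(dummy_data_dict.values())
except TypeError as err:
    print(err)  # unhashable type: 'list'

# The guard proposed in the issue only attempts deduplication when every
# value is a str, sidestepping the unhashable list.
all_str = all(isinstance(value, str) for value in dummy_data_dict.values())
print(all_str)  # False
```

With the guard in place, dicts containing nested list values skip the `set()`-based duplicate check entirely.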
1,253,959,313
https://api.github.com/repos/huggingface/datasets/issues/4427
https://github.com/huggingface/datasets/pull/4427
4,427
Add HF.co for PRs/Issues for specific datasets
closed
1
2022-05-31T14:31:21
2022-06-01T12:37:42
2022-06-01T12:29:02
lhoestq
[]
As in https://github.com/huggingface/transformers/pull/17485, issues and PR for datasets under a namespace have to be on the HF Hub
true
1,253,887,311
https://api.github.com/repos/huggingface/datasets/issues/4426
https://github.com/huggingface/datasets/issues/4426
4,426
Add loading variable number of columns for different splits
closed
1
2022-05-31T13:40:16
2022-06-03T16:25:25
2022-06-03T16:25:25
DrMatters
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** The original dataset `blended_skill_talk` consists of different sets of columns for the different splits: the (test/valid) splits have an additional data column `label_candidates` that the (train) split doesn't have. When loading such data, an exception occurs at table.py:cast_table_to_schema, because of mismatched columns.
false
1,253,641,604
https://api.github.com/repos/huggingface/datasets/issues/4425
https://github.com/huggingface/datasets/pull/4425
4,425
Make extensions case-insensitive in timit_asr dataset
closed
1
2022-05-31T10:10:04
2022-06-01T14:15:30
2022-06-01T14:06:51
albertvillanova
[]
Related to #4422.
true
1,253,542,488
https://api.github.com/repos/huggingface/datasets/issues/4424
https://github.com/huggingface/datasets/pull/4424
4,424
Fix DuplicatedKeysError in timit_asr dataset
closed
1
2022-05-31T08:47:45
2022-05-31T13:50:50
2022-05-31T13:42:31
albertvillanova
[]
Fix #4422.
true
1,253,326,023
https://api.github.com/repos/huggingface/datasets/issues/4423
https://github.com/huggingface/datasets/pull/4423
4,423
Add new dataset MMChat
closed
2
2022-05-31T04:45:07
2022-06-11T12:40:52
2022-06-11T12:31:42
silverriver
[]
Hi, I am adding a new dataset MMChat. It seems that all tests have passed
true
1,253,146,511
https://api.github.com/repos/huggingface/datasets/issues/4422
https://github.com/huggingface/datasets/issues/4422
4,422
Cannot load timit_asr data set
closed
6
2022-05-30T22:00:22
2022-06-02T06:34:05
2022-05-31T13:42:31
bhaddow
[ "bug" ]
## Describe the bug I am trying to load the timit_asr data set. I have tried with a copy from the LDC, and a copy from deepai. In both cases they fail with a "duplicate key" error. With the LDC version I have to convert the file extensions all to upper-case before I can load it at all. ## Steps to reproduce the bug ```python timit = datasets.load_dataset("timit_asr", data_dir = "/path/to/dataset") # Sample code to reproduce the bug ``` ## Expected results The data set should load without error. It worked for me before the LDC url change. ## Actual results ``` datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: SA1 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - `datasets` version: 2.2.2 - Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 8.0.0 - Pandas version: 1.4.2
false
1,253,059,467
https://api.github.com/repos/huggingface/datasets/issues/4421
https://github.com/huggingface/datasets/pull/4421
4,421
Add extractor for bzip2-compressed files
closed
0
2022-05-30T19:19:40
2022-06-06T15:22:50
2022-06-06T15:22:50
osyvokon
[]
This change enables loading bzipped datasets, just like any other compressed dataset.
true
1,252,739,239
https://api.github.com/repos/huggingface/datasets/issues/4420
https://github.com/huggingface/datasets/issues/4420
4,420
Metric evaluation problems in multi-node, shared file system
closed
6
2022-05-30T13:24:05
2023-07-11T09:33:18
2023-07-11T09:33:17
gullabi
[ "bug" ]
## Describe the bug
Metric evaluation fails in multi-node setups with a shared file system, because the master process cannot find the lock files from the other nodes. (This issue was originally mentioned in the transformers repo: https://github.com/huggingface/transformers/issues/17412)

## Steps to reproduce the bug
1. Clone [this huggingface model](https://huggingface.co/PereLluis13/wav2vec2-xls-r-300m-ca-lm) and replace the `run_speech_recognition_ctc.py` script with the version in the gist [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71#file-run_speech_recognition_ctc-py).
2. Set up the `venv` according to the requirements of the model file, plus `datasets==2.0.0`, `transformers==4.18.0` and `torch==1.9.0`.
3. Launch the runner in a distributed environment which has a shared file system for two nodes, preferably with SLURM. Example [here](https://gist.github.com/gullabi/3f66094caa8db1c1e615dd35bd67ec71).

Specifically for the datasets, in the distributed setup `load_metric` is called as:
```python
process_id = int(os.environ["RANK"])
num_process = int(os.environ["WORLD_SIZE"])
eval_metrics = {metric: load_metric(metric,
                                    process_id=process_id,
                                    num_process=num_process,
                                    experiment_id="slurm")
                for metric in data_args.eval_metrics}
```

## Expected results
The training should not fail due to the failure of the `Metric.compute()` step.

## Actual results
For the test I am executing, the world size is 4, with 2 GPUs in 2 nodes. However, the process does not find the necessary lock files:
```
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 841, in <module>
    main()
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 792, in main
    train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1497, in train
    self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 1624, in _maybe_log_save_evaluate
    metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2291, in evaluate
    metric_key_prefix=metric_key_prefix,
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/transformers/trainer.py", line 2535, in evaluation_loop
    metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in compute_metrics
    metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/speech/asr/wav2vec2-xls-r-300m-ca-lm/run_speech_recognition_ctc.py", line 742, in <dictcomp>
    metrics = {k: v.compute(predictions=pred_str, references=label_str) for k, v in eval_metrics.items()}
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 419, in compute
    self.add_batch(**inputs)
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 465, in add_batch
    self._init_writer()
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 552, in _init_writer
    self._check_rendez_vous()  # wait for master to be ready and to let everyone go
File "/gpfs/projects/bsc88/projects/speech-tech-resources/venv_amd_speech/lib/python3.7/site-packages/datasets/metric.py", line 342, in _check_rendez_vous
    ) from None
ValueError: Expected to find locked file /home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock from process 3 but it doesn't exist.
```
When I look at the cache directory, I can see all the lock files in principle:
```
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-0.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-1.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-2.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-3.arrow.lock
/home/bsc88/bsc88474/.cache/huggingface/metrics/wer/default/slurm-4-rdv.lock
```
I see that there was another related issue here https://github.com/huggingface/datasets/issues/1942, but it seems to have been resolved via https://github.com/huggingface/datasets/pull/1966. Let me know if there is a problem with how I am calling `load_metric` or whether I need to make changes to the `.compute()` steps.

## Environment info
- `datasets` version: 2.0.0
- Platform: Linux-4.18.0-147.8.1.el8_1.x86_64-x86_64-with-centos-8.1.1911-Core
- Python version: 3.7.4
- PyArrow version: 7.0.0
- Pandas version: 1.3.0
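For illustration, here is a toy single-machine sketch of the per-rank lock-file rendezvous that the error above is about: every rank drops a lock file in a shared cache directory, and the master checks that all of them are visible before proceeding. The `slurm-4-*.arrow.lock` names mimic the listing above; this is not the actual `datasets` implementation, just a model of the pattern.

```python
import os
import tempfile

# Shared cache directory stand-in (on the cluster this would be the
# shared-file-system metrics cache).
cache_dir = tempfile.mkdtemp()
num_process = 4

def lock_path(rank):
    return os.path.join(cache_dir, f"slurm-{num_process}-{rank}.arrow.lock")

# Each rank creates its lock file (here sequentially, in one process).
for rank in range(num_process):
    open(lock_path(rank), "w").close()

# The master's check: with a truly coherent shared file system, every
# rank's lock file is visible and nothing is missing.
missing = [r for r in range(num_process) if not os.path.exists(lock_path(r))]
assert missing == []
```

In the bug report this check fails even though `ls` shows the files, which suggests the problem lies in file-system visibility or path resolution across nodes rather than in the rendezvous logic itself.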
false
1,252,652,896
https://api.github.com/repos/huggingface/datasets/issues/4419
https://github.com/huggingface/datasets/issues/4419
4,419
Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual`
closed
3
2022-05-30T12:13:18
2022-09-30T16:01:37
2022-09-30T16:01:37
alvarobartt
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.**
This is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` over tuples rather than `assertEqual`? `unittest` added that function in v3.1, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.

Find an example of an `assertEqual` over a tuple in 🤗 `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570

**Describe the solution you'd like**
Start slowly replacing the `assertEqual` statements with `assertTupleEqual` wherever the assertion is done over a Python tuple, as is already done for Python lists with `assertListEqual` rather than `assertEqual`.

**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable; otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual`, feel free to close this issue! Thanks 🤗
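A minimal sketch of the proposed change (the test case and the `(2, 3)` shape tuple are illustrative, not from the `datasets` test suite). `assertTupleEqual` additionally verifies that both operands really are tuples, whereas `assertEqual` only compares values:

```python
import unittest

class ShapeAssertions(unittest.TestCase):
    def test_shape_with_assert_equal(self):
        # Current style: works, but carries no type information.
        self.assertEqual((2, 3), (2, 3))

    def test_shape_with_assert_tuple_equal(self):
        # Proposed style: fails early if either operand is not a tuple.
        self.assertTupleEqual((2, 3), (2, 3))

suite = unittest.TestLoader().loadTestsFromTestCase(ShapeAssertions)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Note that `assertEqual` already dispatches to the type-specific assertion when both operands share the same registered type, so the change is mostly about making the intent explicit at the call site.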
false
1,252,506,268
https://api.github.com/repos/huggingface/datasets/issues/4418
https://github.com/huggingface/datasets/pull/4418
4,418
Add dataset MMChat
closed
0
2022-05-30T10:10:40
2022-05-30T14:58:18
2022-05-30T14:58:18
silverriver
[]
null
true