| Column | Type | Range / values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | length 58 to 61 |
| html_url | string | length 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | length 1 to 290 |
| state | string | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | length 3 to 26 |
| labels | list | length 0 to 4 |
| body | string | length 0 to 228k |
| is_pull_request | bool | 2 classes |
1,310,253,552
https://api.github.com/repos/huggingface/datasets/issues/4721
https://github.com/huggingface/datasets/issues/4721
4,721
PyArrow Dataset error when calling `load_dataset`
open
3
2022-07-20T01:16:03
2022-07-22T14:11:47
null
piraka9011
[ "bug" ]
## Describe the bug I am fine tuning a wav2vec2 model following the script here using my own dataset: https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py Loading my Audio dataset from the hub which was originally generated from disk results in the following PyArrow error: ```sh File "/home/ubuntu/w2v2/run_speech_recognition_ctc.py", line 227, in main raw_datasets = load_dataset( File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/load.py", line 1679, in load_dataset builder_instance.download_and_prepare( File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 704, in download_and_prepare self._download_and_prepare( File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 793, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/builder.py", line 1268, in _prepare_split for key, table in logging.tqdm( File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/tqdm/std.py", line 1195, in __iter__ for obj in iterable: File "/home/ubuntu/.virtualenvs/meval/lib/python3.8/site-packages/datasets/packaged_modules/parquet/parquet.py", line 68, in _generate_tables for batch_idx, record_batch in enumerate( File "pyarrow/_parquet.pyx", line 1309, in iter_batches File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs ``` ## Steps to reproduce the bug I created a dataset from a JSON lines manifest of `audio_filepath`, `text`, and `duration`. When creating the dataset, I do something like this: ```python import json from datasets import Dataset, Audio # manifest_lines is a list of dicts w/ "audio_filepath", "duration", and "text for line in manifest_lines: line = line.strip() if line: line_dict = json.loads(line) manifest_dict["audio"].append(f"{root_path}/{line_dict['audio_filepath']}") manifest_dict["duration"].append(line_dict["duration"]) manifest_dict["transcription"].append(line_dict["text"]) # Create a HF dataset dataset = Dataset.from_dict(manifest_dict).cast_column( "audio", Audio(sampling_rate=16_000), ) # From the docs for saving to disk # https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Dataset.save_to_disk def read_audio_file(example): with open(example["audio"]["path"], "rb") as f: return {"audio": {"bytes": f.read()}} dataset = dataset.map(read_audio_file, num_proc=70) dataset.save_to_disk(f"/audio-data/hf/{artifact_name}") dataset.push_to_hub(f"{org-name}/{artifact_name}", max_shard_size="5GB", private=True) ``` Then when I call `load_dataset()` in my training script, with the same dataset I generated above, and download from the huggingface hub I get the above stack trace. I am able to load the dataset fine if I use `load_from_disk()`. ## Expected results `load_dataset()` should behave just like `load_from_disk()` and not cause any errors. ## Actual results See above ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> I am using the `huggingface/transformers-pytorch-gpu:latest` image - `datasets` version: 2.3.0 - Platform: Docker/Ubuntu 20.04 - Python version: 3.8 - PyArrow version: 8.0.0
false
1,309,980,195
https://api.github.com/repos/huggingface/datasets/issues/4720
https://github.com/huggingface/datasets/issues/4720
4,720
Dataset Viewer issue for shamikbose89/lancaster_newsbooks
closed
4
2022-07-19T20:00:07
2022-09-08T16:47:21
2022-09-08T16:47:21
shamikbose
[]
### Link https://huggingface.co/datasets/shamikbose89/lancaster_newsbooks ### Description Status code: 400 Exception: ValueError Message: Cannot seek streaming HTTP file. I am able to use the dataset loading script locally, and it also runs when I'm using the one from the hub, but the viewer still doesn't load. ### Owner Yes
false
1,309,854,492
https://api.github.com/repos/huggingface/datasets/issues/4719
https://github.com/huggingface/datasets/issues/4719
4,719
Issue loading TheNoob3131/mosquito-data dataset
closed
2
2022-07-19T17:47:37
2022-07-20T06:46:57
2022-07-20T06:46:02
thenerd31
[]
![image](https://user-images.githubusercontent.com/53668030/179815591-d75fa7d3-3122-485f-a852-b06a68909066.png) So my dataset is public in the Huggingface Hub, but when I try to load it using the load_dataset command, it shows that it is downloading the files, but throws a ValueError. When I went to my directory to see if the files were downloaded, the folder was blank. Here is the error below: ValueError Traceback (most recent call last) Input In [8], in <cell line: 3>() 1 from datasets import load_dataset ----> 3 dataset = load_dataset("TheNoob3131/mosquito-data", split="train") File ~\Anaconda3\lib\site-packages\datasets\load.py:1679, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1676 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES 1678 # Download and prepare data -> 1679 builder_instance.download_and_prepare( 1680 download_config=download_config, 1681 download_mode=download_mode, 1682 ignore_verifications=ignore_verifications, 1683 try_from_hf_gcs=try_from_hf_gcs, 1684 use_auth_token=use_auth_token, 1685 ) 1687 # Build dataset for splits 1688 keep_in_memory = ( 1689 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size) 1690 ) Is the dataset in the wrong format or is there some security permission that I should enable?
false
1,309,520,453
https://api.github.com/repos/huggingface/datasets/issues/4718
https://github.com/huggingface/datasets/pull/4718
4,718
Make Extractor accept Path as input
closed
1
2022-07-19T13:25:06
2022-07-22T13:42:27
2022-07-22T13:29:43
albertvillanova
[]
This PR: - Makes `Extractor` accept instances of `Path` as input - Removes unnecessary casts of `Path` to `str`
true
1,309,512,483
https://api.github.com/repos/huggingface/datasets/issues/4717
https://github.com/huggingface/datasets/issues/4717
4,717
Dataset Viewer issue for LawalAfeez/englishreview-ds-mini
closed
1
2022-07-19T13:19:39
2022-07-20T08:32:57
2022-07-20T08:32:57
lawalAfeez820
[ "dataset-viewer" ]
### Link _No response_ ### Description Unable to view the split data ### Owner _No response_
false
1,309,455,838
https://api.github.com/repos/huggingface/datasets/issues/4716
https://github.com/huggingface/datasets/pull/4716
4,716
Support "tags" yaml tag
closed
3
2022-07-19T12:34:31
2022-07-20T13:44:50
2022-07-20T13:31:56
lhoestq
[]
Added the "tags" YAML tag, so that users can specify data domain/topics keywords for dataset search
true
1,309,405,980
https://api.github.com/repos/huggingface/datasets/issues/4715
https://github.com/huggingface/datasets/pull/4715
4,715
Fix POS tags
closed
2
2022-07-19T11:52:54
2022-07-19T12:54:34
2022-07-19T12:41:16
lhoestq
[]
We're now using `part-of-speech` and not `part-of-speech-tagging`, see discussion here: https://github.com/huggingface/datasets/commit/114c09aff2fa1519597b46fbcd5a8e0c0d3ae020#r78794777
true
1,309,265,682
https://api.github.com/repos/huggingface/datasets/issues/4714
https://github.com/huggingface/datasets/pull/4714
4,714
Fix named split sorting and remove unnecessary casting
closed
3
2022-07-19T09:48:28
2022-07-22T09:39:45
2022-07-22T09:10:57
albertvillanova
[]
This PR: - makes `NamedSplit` sortable: so that `sorted()` can be called on them - removes unnecessary `sorted()` on `dict.keys()`: `dict_keys` view is already like a `set` - removes unnecessary casting of `NamedSplit` to `str`
true
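To illustrate the "sortable" pattern the PR above describes, here is a minimal, self-contained sketch with a toy class (not the real `NamedSplit` implementation): once equality and ordering are defined, `sorted()` works directly and no casting to `str` is needed.

```python
import functools


@functools.total_ordering
class ToySplit:
    """Toy stand-in for a split name (not the actual NamedSplit class)."""

    def __init__(self, name: str):
        self.name = name

    def __repr__(self):
        return f"ToySplit({self.name!r})"

    def __eq__(self, other):
        return self.name == getattr(other, "name", other)

    def __lt__(self, other):
        return self.name < getattr(other, "name", other)


splits = [ToySplit("validation"), ToySplit("test"), ToySplit("train")]
print(sorted(splits))  # ordering methods make sorted() work without str() casts
```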
1,309,184,756
https://api.github.com/repos/huggingface/datasets/issues/4713
https://github.com/huggingface/datasets/pull/4713
4,713
Document installation of sox OS dependency for audio
closed
1
2022-07-19T08:42:35
2022-07-21T08:16:59
2022-07-21T08:04:15
albertvillanova
[]
The `sox` OS package needs to be installed manually using the distribution package manager. This PR adds that explanation to the docs.
true
1,309,177,302
https://api.github.com/repos/huggingface/datasets/issues/4712
https://github.com/huggingface/datasets/pull/4712
4,712
Highlight non-commercial license in amazon_reviews_multi dataset card
closed
1
2022-07-19T08:36:20
2022-07-27T16:09:40
2022-07-27T15:57:41
sbroadhurst-hf
[]
Highlight that the licence granted by Amazon only covers non-commercial research use.
true
1,309,138,570
https://api.github.com/repos/huggingface/datasets/issues/4711
https://github.com/huggingface/datasets/issues/4711
4,711
Document how to create a dataset loading script for audio/vision
closed
1
2022-07-19T08:03:40
2023-07-25T16:07:52
2023-07-25T16:07:52
albertvillanova
[ "documentation" ]
Currently, in our docs for Audio/Vision/Text, we explain how to: - Load data - Process data However we only explain how to *Create a dataset loading script* for text data. I think it would be useful that we add the same for Audio/Vision as these have some specificities different from Text. See, for example: - #4697 - and comment there: https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492 CC: @stevhliu
false
1,308,958,525
https://api.github.com/repos/huggingface/datasets/issues/4710
https://github.com/huggingface/datasets/pull/4710
4,710
Add object detection processing tutorial
closed
3
2022-07-19T04:23:46
2022-07-21T20:10:35
2022-07-21T19:56:42
nateraw
[]
The following adds a quick guide on how to process object detection datasets with `albumentations`.
true
1,308,633,093
https://api.github.com/repos/huggingface/datasets/issues/4709
https://github.com/huggingface/datasets/issues/4709
4,709
WMT21 & WMT22
open
7
2022-07-18T21:05:33
2023-06-20T09:02:11
null
Muennighoff
[ "good first issue", "dataset request" ]
## Adding a Dataset - **Name:** WMT21 & WMT22 - **Description:** We are going to have three tracks: two small tasks and a large task. The small tracks evaluate translation between fairly related languages and English (all pairs). The large track uses 101 languages. - **Paper:** / - **Data:** https://statmt.org/wmt21/large-scale-multilingual-translation-task.html https://statmt.org/wmt22/large-scale-multilingual-translation-task.html - **Motivation:** Many more languages than previous WMT versions - Could be very high impact Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md). I could also tackle this. I saw the existing logic for WMT models is a bit complex (datasets are stored on the wmt account & retrieved in separate wmt datasets afaict). How long do you think it would take me? @lhoestq
false
1,308,279,700
https://api.github.com/repos/huggingface/datasets/issues/4708
https://github.com/huggingface/datasets/pull/4708
4,708
Fix require torchaudio and refactor test requirements
closed
1
2022-07-18T17:24:28
2022-07-22T06:30:56
2022-07-22T06:18:11
albertvillanova
[]
Currently there is a bug in `require_torchaudio` (indeed it is requiring `sox` instead): ```python def require_torchaudio(test_case): if find_spec("sox") is None: ... ``` The bug was introduced by: - #3685 - Commit: https://github.com/huggingface/datasets/pull/3685/commits/b5a3e7122d49c4dcc9333b1d8d18a833fc04b940 which moved ```python require_sndfile = pytest.mark.skipif( # In Windows and OS X, soundfile installs sndfile (sys.platform != "linux" and find_spec("soundfile") is None) # In Linux, soundfile throws RuntimeError if sndfile not installed with distribution package manager or (sys.platform == "linux" and find_library("sndfile") is None), reason="Test requires 'sndfile': `pip install soundfile`; " "Linux requires sndfile installed with distribution package manager, e.g.: `sudo apt-get install libsndfile1`", ) require_sox = pytest.mark.skipif( find_library("sox") is None, reason="Test requires 'sox'; only available in non-Windows, e.g.: `sudo apt-get install sox`", ) require_torchaudio = pytest.mark.skipif(find_spec("torchaudio") is None, reason="Test requires 'torchaudio'") ``` to ```python def require_sndfile(test_case): """ Decorator marking a test that requires soundfile. These tests are skipped when soundfile isn't installed. """ if (sys.platform != "linux" and find_spec("soundfile") is None) or ( sys.platform == "linux" and find_library("sndfile") is None ): test_case = unittest.skip( "test requires 'sndfile': `pip install soundfile`; " "Linux requires sndfile installed with distribution package manager, e.g.: `sudo apt-get install libsndfile1`", )(test_case) return test_case def require_sox(test_case): """ Decorator marking a test that requires sox. These tests are skipped when sox isn't installed. """ if find_library("sox") is None: return unittest.skip("test requires 'sox'; only available in non-Windows, e.g.: `sudo apt-get install sox`")( test_case ) return test_case def require_torchaudio(test_case): """ Decorator marking a test that requires torchaudio. These tests are skipped when torchaudio isn't installed. """ if find_spec("sox") is None: return unittest.skip("test requires 'torchaudio'")(test_case) return test_case ``` This PR; - fixes the bug in `require_torchaudio` - refactors the test requirements back to using `pytest` instead of `unittest` - the text in `pytest.skipif` `reason` can be used if needed in a test body: `require_torchaudio.kwargs["reason"]`
true
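For readers skimming the PR above, the fix boils down to checking for the right module again and expressing each requirement as a `pytest` marker. A minimal sketch of that pattern, based on the snippets quoted in the description (not the PR's exact code):

```python
from ctypes.util import find_library
from importlib.util import find_spec

import pytest

# Markers keyed on the dependency each test actually needs.
require_torchaudio = pytest.mark.skipif(
    find_spec("torchaudio") is None, reason="Test requires 'torchaudio'"
)
require_sox = pytest.mark.skipif(
    find_library("sox") is None,
    reason="Test requires 'sox'; e.g.: `sudo apt-get install sox`",
)


@require_torchaudio
def test_torchaudio_is_importable():
    import torchaudio  # only runs when the marker did not skip the test

    assert hasattr(torchaudio, "transforms")
```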
1,308,251,405
https://api.github.com/repos/huggingface/datasets/issues/4707
https://github.com/huggingface/datasets/issues/4707
4,707
Dataset Viewer issue for TheNoob3131/mosquito-data
closed
6
2022-07-18T17:07:19
2022-07-18T19:44:46
2022-07-18T17:15:50
thenerd31
[ "dataset-viewer" ]
### Link _No response_ ### Description Getting this error when trying to view dataset preview: Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/TheNoob3131/mosquito-data/resolve/8aceebd6c4a359d216d10ef020868bd9e8c986dd/0_Africa_train.csv') ### Owner _No response_
false
1,308,198,454
https://api.github.com/repos/huggingface/datasets/issues/4706
https://github.com/huggingface/datasets/pull/4706
4,706
Fix empty examples in xtreme dataset for bucc18 config
closed
2
2022-07-18T16:22:46
2022-07-19T06:41:14
2022-07-19T06:29:17
lhoestq
[]
As reported in https://huggingface.co/muibk, there are empty examples in xtreme/bucc18.de. I applied your fix @mustaszewski. I also used a dict to make the dataset generation much faster.
true
1,308,161,794
https://api.github.com/repos/huggingface/datasets/issues/4705
https://github.com/huggingface/datasets/pull/4705
4,705
Fix crd3
closed
1
2022-07-18T15:53:44
2022-07-21T17:18:44
2022-07-21T17:06:30
lhoestq
[]
As reported in https://huggingface.co/datasets/crd3/discussions/1#62cc377073b2512b81662794, each split of the dataset contained the same data. This issue comes from a bug in the dataset script. I fixed it and also uploaded the data to hf.co to make the dataset work in streaming mode.
true
1,308,147,876
https://api.github.com/repos/huggingface/datasets/issues/4704
https://github.com/huggingface/datasets/pull/4704
4,704
Skip tests only for lz4/zstd params if not installed
closed
1
2022-07-18T15:41:40
2022-07-19T13:02:31
2022-07-19T12:49:18
albertvillanova
[]
Currently, if `zstandard` or `lz4` are not installed, `test_compression_filesystems` and `test_streaming_dl_manager_extract_all_supported_single_file_compression_types` are skipped for all compression format parameters. This PR fixes these tests, so that if `zstandard` or `lz4` are not installed, the tests are skipped only for the corresponding compression parameters (`zstd` or `lz4`), whereas the tests are not skipped for all the other compression parameters (`gzip`, `xz` and `bz2`). Related to: - #4688
true
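To illustrate the behaviour the PR above describes, here is a minimal, hypothetical sketch of a parametrized test that skips only the compression parameters whose optional backend is missing, rather than skipping the whole test (the parameter names follow the PR description; the test body is a placeholder):

```python
from importlib.util import find_spec

import pytest

# Optional backends: map each compression parameter to the module it needs.
_OPTIONAL_BACKENDS = {"zstd": "zstandard", "lz4": "lz4"}


@pytest.mark.parametrize("compression", ["gzip", "xz", "bz2", "zstd", "lz4"])
def test_extract_supported_compression(compression):
    backend = _OPTIONAL_BACKENDS.get(compression)
    if backend is not None and find_spec(backend) is None:
        pytest.skip(f"'{backend}' is not installed")  # skip only this parameter
    # ... a real test would extract a fixture file compressed with `compression` here ...
    assert compression in {"gzip", "xz", "bz2", "zstd", "lz4"}
```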
1,307,844,097
https://api.github.com/repos/huggingface/datasets/issues/4703
https://github.com/huggingface/datasets/pull/4703
4,703
Make cast in `from_pandas` more robust
closed
1
2022-07-18T11:55:49
2022-07-22T11:17:42
2022-07-22T11:05:24
mariosasko
[]
Make the cast in `from_pandas` more robust (as it was done for the packaged modules in https://github.com/huggingface/datasets/pull/4364) This should be useful in situations like [this one](https://discuss.huggingface.co/t/loading-custom-audio-dataset-and-fine-tuning-model/8836/4).
true
1,307,793,811
https://api.github.com/repos/huggingface/datasets/issues/4702
https://github.com/huggingface/datasets/issues/4702
4,702
Domain specific dataset discovery on the Hugging Face hub
open
11
2022-07-18T11:14:03
2024-02-12T09:53:43
null
davanstrien
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** ## The problem The datasets hub currently has `8,239` datasets. These datasets span a wide range of different modalities and tasks (currently with a bias towards textual data). There are various ways of identifying datasets that may be relevant for a particular use case: - searching - various filters Currently, however, there isn't an easy way to identify datasets belonging to a specific domain. For example, I want to browse machine learning datasets related to 'social science' or 'climate change research'. The ability to identify datasets relating to a specific domain has come up in discussions around the [BigLA](https://github.com/bigscience-workshop/lam/) datasets hackathon https://github.com/bigscience-workshop/lam/discussions/31#discussioncomment-3123610. As part of the hackathon, we're currently collecting datasets related to Libraries, Archives and Museums and making them available via the hub. We currently do this under a Hugging Face organization (https://huggingface.co/biglam). However, going forward, I can see some of these datasets being migrated to sit under an organization that is the custodian of the dataset (for example, a national library the data was originally from). At this point, it becomes more difficult to quickly identify datasets from this domain without relying on search. This is also related to some existing issues on Github related to metadata on the hub: - https://github.com/huggingface/datasets/issues/3625 - https://github.com/huggingface/datasets/issues/3877 **Describe the solution you'd like** ### Some possible solutions that may help with this: #### Enable domain tags (from a controlled vocabulary) - This would add metadata field to the YAML for the domain a dataset relates to - Advantages: - the list is controlled, allowing it to be more easily integrated into the datasets tag app (https://huggingface.co/space/huggingface/datasets-tagging) - the controlled vocabulary could align with an existing controlled vocabulary - this additional metadata can be used to perform filtering by domain - disadvantages - choosing the best controlled vocab may be difficult - there are many datasets that are likely to fit into the 'machine learning' domain (i.e. there is a long tail of datasets that aren't in more 'generic' machine learning domain #### Enable topic tags (user-generated) Enable 'free form' topic tags for datasets and models. This would be closer to GitHub's repository topics which can be chosen from a controlled list (https://github.com/topics/) but can also be more user/org specific. This could potentially be useful for organizations to also manage their own models and datasets as the number they hold in their org grows. For example, they may create 'topic tags' for a specific project, so it's clearer which datasets /models are related to that project. #### Collections This solution would likely be the biggest shift and may require significant changes in the hub fronted. Collections could work in several different ways but would include: Users can curate particular datasets, models, spaces, etc., into a collection. For example, they may create a collection of 'historic newspapers suitable for training language models'. These collections would not be mutually exclusive, i.e. a dataset can belong to zero, one or many collections. Collections can also potentially be nested under other collections. 
This is fairly common on other data reposotiores for example the following collections: <img width="293" alt="Screenshot 2022-07-18 at 11 50 44" src="https://user-images.githubusercontent.com/8995957/179496445-963ed122-5e26-4574-96e8-41081bce3e2b.png"> all belong under a higher level collection (https://bl.iro.bl.uk/collections/353c908d-b495-4413-b047-87236d2573e3?locale=en). There are different models one could use for how these collections could be created: - only within an org - for any dataset/model - the owner or a dataset/model has to agree to be added to a collection - a collection owner can have people suggest additions to their collection - other models.... These collections could be thematic, related to particular training approaches, curate models with particular inference properties etc. Whilst some of these features may duplicate current/or future tag filters on the hub, they offer the advantage of being flexible and not having to predict what users will want to do upfront. There is also potential for automating the creation of these collections based on existing metadata. For example, one could collect models trained on a collection of datasets so for example, if we had a collection of 'historic newspapers suitable for training language models' that contained 30 datasets, we could create another collection 'historic newspaper language models' that takes any model on the hub whose metadata says it used one or more of those 30 datasets. There is also the option of exploring ML approaches to suggest models/datasets may be relevant to a particular collection. This approach is likely to be quite difficult to implement well and would require significant thought. There is also likely to be a benefit in doing quite a bit of upfront work in curating useful collections to demonstrate the benefits of collections. **Describe alternatives you've considered** A clear and concise description of any alternative solutions or features you've considered. It is possible to collate this information externally, i.e. one could link back to the relevant models/datasets from an external platform. **Additional context** Add any other context about the feature request here. I'm cc'ing others involved in the BigLAM hackathon who may also have thoughts @cakiki @clancyoftheoverflow @albertvillanova
false
1,307,689,625
https://api.github.com/repos/huggingface/datasets/issues/4701
https://github.com/huggingface/datasets/pull/4701
4,701
Added more information in the README about contributors of the Arabic Speech Corpus
closed
0
2022-07-18T09:48:03
2022-07-28T10:33:05
2022-07-28T10:33:05
nawarhalabi
[]
Added more information in the README about contributors and encouraged reading the thesis for more info.
true
1,307,599,161
https://api.github.com/repos/huggingface/datasets/issues/4700
https://github.com/huggingface/datasets/pull/4700
4,700
Support extract lz4 compressed data files
closed
1
2022-07-18T08:41:31
2022-07-18T14:43:59
2022-07-18T14:31:47
albertvillanova
[]
null
true
1,307,555,592
https://api.github.com/repos/huggingface/datasets/issues/4699
https://github.com/huggingface/datasets/pull/4699
4,699
Fix Authentification Error while streaming
closed
1
2022-07-18T08:03:41
2022-07-20T13:10:44
2022-07-20T13:10:43
hkjeon13
[]
I fixed a few errors that occur while streaming a private dataset on the Hugging Face Hub. ``` from datasets import load_dataset dataset = load_dataset(<repo_id>, use_auth_token=<private_token>, streaming=True) for d in dataset['train']: print(d) break # this is for checking ``` This code is an example of streaming a private dataset. With datasets version 2.2.2 it works well, but datasets>2.2.2 raises an error like this: ``` /usr/local/lib/python3.7/dist-packages/aiohttp/client_reqrep.py in raise_for_status(self) 1007 status=self.status, 1008 message=self.reason, → 1009 headers=self.headers, 1010 ) 1011 ClientResponseError: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/.../train-00000-of-00001-168b451062c67c34.parquet') ``` (this example uses a dataset with the `parquet` extension) It seems that the `xisfile` module in `download/streaming_download_manager.py` couldn't recognize files on "https://huggingface.co/~", so I added three lines. With this change, there is no error anymore (but this code is ad-hoc).
true
1,307,539,585
https://api.github.com/repos/huggingface/datasets/issues/4698
https://github.com/huggingface/datasets/pull/4698
4,698
Enable streaming dataset to use the "all" split
closed
9
2022-07-18T07:47:39
2025-05-21T13:17:19
2025-05-21T13:17:19
cakiki
[]
Fixes #4637
true
1,307,332,253
https://api.github.com/repos/huggingface/datasets/issues/4697
https://github.com/huggingface/datasets/issues/4697
4,697
Trouble with streaming frgfm/imagenette vision dataset with TAR archive
closed
5
2022-07-18T02:51:09
2022-08-01T15:10:57
2022-08-01T15:10:57
frgfm
[ "streaming" ]
### Link https://huggingface.co/datasets/frgfm/imagenette ### Description Hello there :wave: Thanks for the amazing work you've done with HF Datasets! I've just started playing with it, and managed to upload my first dataset. But for the second one, I'm having trouble with the preview since there is some archive extraction involved :sweat_smile: Basically, I get a: ``` Status code: 400 Exception: NotImplementedError Message: Extraction protocol for TAR archives like 'https://s3.amazonaws.com/fast-ai-imageclas/imagenette2.tgz' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead. ``` I've tried several things and checked this issue https://github.com/huggingface/datasets/issues/4181 as well, but no luck so far! Could you point me in the right direction please? :pray: ### Owner Yes
false
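The error message in the issue above points at `dl_manager.iter_archive`. For context, a minimal sketch of how a loading script typically iterates a TAR archive so that streaming works; the URL, class name, features, and image-extension filter here are placeholders, not the actual `frgfm/imagenette` script:

```python
import datasets

_URL = "https://example.com/images.tgz"  # placeholder archive URL


class ToyImageArchive(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"image": datasets.Image()}))

    def _split_generators(self, dl_manager):
        # download() without extract(): iter_archive streams members one by one,
        # which is what streaming mode requires for TAR archives.
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for idx, (path, file_obj) in enumerate(files):
            if path.lower().endswith((".jpg", ".jpeg", ".png")):
                yield idx, {"image": {"path": path, "bytes": file_obj.read()}}
```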
1,307,183,099
https://api.github.com/repos/huggingface/datasets/issues/4696
https://github.com/huggingface/datasets/issues/4696
4,696
Cannot load LinCE dataset
closed
2
2022-07-17T19:01:54
2022-07-18T09:20:40
2022-07-18T07:24:22
finiteautomata
[ "bug" ]
## Describe the bug Cannot load LinCE dataset due to a connection error ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("lince", "ner_spaeng") ``` A notebook with this code and corresponding error can be found at https://colab.research.google.com/drive/1pgX3bNB9amuUwAVfPFm-XuMV5fEg-cD2 ## Expected results It should load the dataset ## Actual results ```python --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-2-fc551ddcebef> in <module>() 1 from datasets import load_dataset 2 ----> 3 dataset = load_dataset("lince", "ner_spaeng") 10 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs) 1682 ignore_verifications=ignore_verifications, 1683 try_from_hf_gcs=try_from_hf_gcs, -> 1684 use_auth_token=use_auth_token, 1685 ) 1686 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 703 if not downloaded_from_gcs: 704 self._download_and_prepare( --> 705 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 706 ) 707 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos) 1219 1220 def _download_and_prepare(self, dl_manager, verify_infos): -> 1221 super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos) 1222 1223 def _get_examples_iterable_for_split(self, split_generator: SplitGenerator) -> ExamplesIterable: /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 769 split_dict = SplitDict(dataset_name=self.name) 770 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 771 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 772 773 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/lince/10d41747f55f0849fa84ac579ea1acfa7df49aa2015b60426bc459c111b3d589/lince.py in _split_generators(self, dl_manager) 481 def _split_generators(self, dl_manager): 482 """Returns SplitGenerators.""" --> 483 lince_dir = dl_manager.download_and_extract(f"{_LINCE_URL}/{self.config.name}.zip") 484 data_dir = os.path.join(lince_dir, self.config.data_dir) 485 return [ /usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls) 429 extracted_path(s): `str`, extracted paths of given URL(s). 
430 """ --> 431 return self.extract(self.download(url_or_urls)) 432 433 def get_recorded_sizes_checksums(self): /usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in download(self, url_or_urls) 313 num_proc=download_config.num_proc, 314 disable_tqdm=not is_progress_bar_enabled(), --> 315 desc="Downloading data files", 316 ) 317 duration = datetime.now() - start_time /usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types, disable_tqdm, desc) 346 # Singleton 347 if not isinstance(data_struct, dict) and not isinstance(data_struct, types): --> 348 return function(data_struct) 349 350 disable_tqdm = disable_tqdm or not logging.is_progress_bar_enabled() /usr/local/lib/python3.7/dist-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config) 333 # append the relative path to the base_path 334 url_or_filename = url_or_path_join(self._base_path, url_or_filename) --> 335 return cached_path(url_or_filename, download_config=download_config) 336 337 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]): /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 195 use_auth_token=download_config.use_auth_token, 196 ignore_url_params=download_config.ignore_url_params, --> 197 download_desc=download_config.download_desc, 198 ) 199 elif os.path.exists(url_or_filename): /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc) 531 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") 532 if head_error is not None: --> 533 raise ConnectionError(f"Couldn't reach {url} ({repr(head_error)})") 534 elif response is not None: 535 raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})") ConnectionError: Couldn't reach https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (ConnectTimeout(MaxRetryError("HTTPSConnectionPool(host='ritual.uh.edu', port=443): Max retries exceeded with url: /lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/ner_spaeng.zip (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7feb1c45a690>, 'Connection to ritual.uh.edu timed out. (connect timeout=100)'))"))) ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
false
1,307,134,701
https://api.github.com/repos/huggingface/datasets/issues/4695
https://github.com/huggingface/datasets/pull/4695
4,695
Add MANtIS dataset
closed
2
2022-07-17T15:53:05
2022-09-30T14:39:30
2022-09-30T14:37:16
bhavitvyamalik
[ "dataset contribution" ]
This PR adds the MANtIS dataset. Arxiv: [https://arxiv.org/abs/1912.04639](https://arxiv.org/abs/1912.04639) Github: [https://github.com/Guzpenha/MANtIS](https://github.com/Guzpenha/MANtIS) The README and dataset tags are WIP.
true
1,306,958,380
https://api.github.com/repos/huggingface/datasets/issues/4694
https://github.com/huggingface/datasets/issues/4694
4,694
Distributed data parallel training for streaming datasets
open
6
2022-07-17T01:29:43
2023-04-26T18:21:09
null
cyk1337
[ "enhancement" ]
### Feature request Is there any documentation for `load_dataset(streaming=True)` in (multi-node, multi-GPU) DDP training? ### Motivation Given a bunch of data files, it is expected to split them onto different GPUs. Is there a guide or documentation for this? ### Your contribution Does it require manually splitting the data files for each worker in `DatasetBuilder._split_generator()`? What is `IterableDatasetShard` expected to do?
false
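The feature request above predates a dedicated API, but later `datasets` releases added a helper for exactly this. A hedged sketch, assuming a recent `datasets` version and that rank/world size are provided by the DDP launcher (e.g. `torchrun`), of how a streaming dataset is usually split across workers:

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

# Rank and world size are normally set in the environment by the DDP launcher.
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

dataset = load_dataset("c4", "en", split="train", streaming=True)
# Each process only sees its own shard of the stream.
dataset = split_dataset_by_node(dataset, rank=rank, world_size=world_size)

for example in dataset.take(3):
    print(example["text"][:80])
```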
1,306,788,322
https://api.github.com/repos/huggingface/datasets/issues/4693
https://github.com/huggingface/datasets/pull/4693
4,693
update `samsum` script
closed
2
2022-07-16T11:53:05
2022-09-23T11:40:11
2022-09-23T11:37:57
bhavitvyamalik
[ "dataset contribution" ]
update `samsum` script after #4672 was merged (citation is also updated)
true
1,306,609,680
https://api.github.com/repos/huggingface/datasets/issues/4692
https://github.com/huggingface/datasets/issues/4692
4,692
Unable to cast a column with `Image()` by using the `cast_column()` feature
closed
1
2022-07-15T22:56:03
2022-07-19T13:36:24
2022-07-19T13:36:24
skrishnan99
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. When I create a dataset, then add a column to the created dataset through the `dataset.add_column` feature and then try to cast a column of the dataset (this column contains image paths) with `Image()` by using the `cast_column()` feature, I get the following error - ``` TypeError: Couldn't cast array of type string to {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ``` When I try and cast the same column, but without doing the `add_column` in the previous step, it works as expected. ## Steps to reproduce the bug ```python from datasets import Dataset, Image data_dict = { "img_path": ["https://picsum.photos/200/300"] } dataset = Dataset.from_dict(data_dict) #NOTE Comment out this line and use cast_column and it works properly dataset = dataset.add_column("yeet", [1]) #NOTE This line fails to execute properly if `add_column` is called before dataset = dataset.cast_column("img_path", Image()) # #NOTE This is my current workaround. This seems to work fine with/without `add_column`. While # # running this, make sure to comment out the `cast_column` line # new_features = dataset.features.copy() # new_features["img_path"] = Image() # dataset = dataset.cast(new_features) print(dataset) print(dataset.features) print(dataset[0]) ``` ## Expected results A clear and concise description of the expected results. Able to successfully use `cast_column` to cast a column containing img_paths to now be Image() features after modifying the dataset using `add_column` in a previous step ## Actual results Specify the actual results or traceback. ``` Traceback (most recent call last): File "/home/surya/Desktop/hf_bug_test.py", line 14, in <module> dataset = dataset.cast_column("img_path", Image()) File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 1580, in cast_column dataset._data = dataset._data.cast(dataset.features.arrow_schema) File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1487, in cast new_tables.append(subtable.cast(subschema, *args, **kwargs)) File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 834, in cast return InMemoryTable(table_cast(self.table, *args, **kwargs)) File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1897, in table_cast return cast_table_to_schema(table, schema) File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1880, in cast_table_to_schema arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1880, in <listcomp> arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()] File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1673, in wrapper return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File "/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1673, in <listcomp> return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks]) File 
"/home/surya/anaconda3/envs/snap_test/lib/python3.9/site-packages/datasets/table.py", line 1846, in cast_array_to_feature raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}") TypeError: Couldn't cast array of type string to {'bytes': Value(dtype='binary', id=None), 'path': Value(dtype='string', id=None)} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Ubuntu 20.04.3 LTS - Python version: 3.9.7 - PyArrow version: 7.0.0
false
1,306,389,656
https://api.github.com/repos/huggingface/datasets/issues/4691
https://github.com/huggingface/datasets/issues/4691
4,691
Dataset Viewer issue for rajistics/indian_food_images
closed
1
2022-07-15T19:03:15
2022-07-18T15:02:03
2022-07-18T15:02:03
rajshah4
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/rajistics/indian_food_images/viewer/rajistics--indian_food_images/train ### Description I have a train/test split in my dataset <img width="410" alt="Screen Shot 2022-07-15 at 11 44 42 AM" src="https://user-images.githubusercontent.com/6808012/179293215-7b419ec3-3527-46f2-8dad-adbc5568cfa0.png"> The dataset viewer works for the test split (images of indian food), but does not show my train split. My guess is that some corrupt image file may be causing this, but I have no idea. The original dataset was pulled from here: https://www.kaggle.com/datasets/l33tc0d3r/indian-food-classification?resource=download-directory ### Owner Yes
false
1,306,321,975
https://api.github.com/repos/huggingface/datasets/issues/4690
https://github.com/huggingface/datasets/pull/4690
4,690
Refactor base extractors
closed
1
2022-07-15T17:47:48
2022-07-18T08:46:56
2022-07-18T08:34:49
albertvillanova
[]
This PR: - Refactors base extractors as subclasses of `BaseExtractor`: - this is an abstract class defining the interface with: - `is_extractable`: abstract class method - `extract`: abstract static method - Implements abstract `MagicNumberBaseExtractor` (as subclass of `BaseExtractor`): - this has a default implementation of `is_extractable` - this improves performance (reducing the number of file reads) by allowing passing already read `magic_number` - Refactors `Extractor`: - reads magic number from file only once This PR deprecates: ```python is_extractable, extractor = self.extractor.is_extractable(input_path, return_extractor=True) self.extractor.extract(input_path, output_path, extractor=extractor) ``` and uses more Pythonic instead: ```python extractor_format = self.extractor.infer_extractor_format(input_path) self.extractor.extract(input_path, output_path, extractor_format) ```
true
1,306,230,203
https://api.github.com/repos/huggingface/datasets/issues/4689
https://github.com/huggingface/datasets/pull/4689
4,689
Test extractors for all compression formats
closed
1
2022-07-15T16:29:55
2022-07-15T17:47:02
2022-07-15T17:35:24
albertvillanova
[]
This PR: - Adds all compression formats to `test_extractor` - Tests each base extractor for all compression formats Note that all compression formats are tested except "rar".
true
1,306,100,488
https://api.github.com/repos/huggingface/datasets/issues/4688
https://github.com/huggingface/datasets/pull/4688
4,688
Skip test_extractor only for zstd param if zstandard not installed
closed
1
2022-07-15T14:23:47
2022-07-15T15:27:53
2022-07-15T15:15:24
albertvillanova
[]
Currently, if `zstandard` is not installed, `test_extractor` is skipped for all compression format parameters. This PR fixes `test_extractor` so that if `zstandard` is not installed, `test_extractor` is skipped only for the `zstd` compression parameter, that is, it is not skipped for all the other compression parameters (`gzip`, `xz`,...).
true
1,306,021,415
https://api.github.com/repos/huggingface/datasets/issues/4687
https://github.com/huggingface/datasets/pull/4687
4,687
Trigger CI also on push to main
closed
1
2022-07-15T13:11:29
2022-07-15T13:47:21
2022-07-15T13:35:23
albertvillanova
[]
Currently, the new CI (on GitHub Actions) is only triggered on pull request branches when the base branch is main. This PR also triggers the CI when a PR is merged into the main branch.
true
1,305,974,924
https://api.github.com/repos/huggingface/datasets/issues/4686
https://github.com/huggingface/datasets/pull/4686
4,686
Align logging with Transformers (again)
closed
2
2022-07-15T12:24:29
2023-09-24T10:05:34
2023-07-11T18:29:27
mariosasko
[]
Fix #2832
true
1,305,861,708
https://api.github.com/repos/huggingface/datasets/issues/4685
https://github.com/huggingface/datasets/pull/4685
4,685
Fix mock fsspec
closed
1
2022-07-15T10:23:12
2022-07-15T13:05:03
2022-07-15T12:52:40
albertvillanova
[]
This PR: - Removes an unused method from `DummyTestFS` - Refactors `mock_fsspec` to make it simpler
true
1,305,554,654
https://api.github.com/repos/huggingface/datasets/issues/4684
https://github.com/huggingface/datasets/issues/4684
4,684
How to assign new values to Dataset?
closed
2
2022-07-15T04:17:57
2023-03-20T15:50:41
2022-10-10T11:53:38
beyondguo
[ "enhancement" ]
![image](https://user-images.githubusercontent.com/37113676/179149159-bbbda0c8-a661-403c-87ed-dc2b4219cd68.png) Hi, if I want to change some values of the dataset, or add new columns to it, how can I do it? For example, I want to change all the labels of the SST2 dataset to `0`: ```python from datasets import load_dataset data = load_dataset('glue','sst2') data['train']['label'] = [0]*len(data) ``` I will get the error: ``` TypeError: 'Dataset' object does not support item assignment ```
false
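For context on the question above: `Dataset` columns are backed by immutable Arrow tables, so in-place item assignment is not supported and a new dataset has to be built instead. A minimal sketch of the two usual approaches using the standard `datasets` API (not taken from the issue thread):

```python
from datasets import load_dataset

data = load_dataset("glue", "sst2")
train = data["train"]

# Option 1: map() returns a new dataset with the column overwritten.
train_zeroed = train.map(lambda example: {"label": 0})

# Option 2: drop the column and add a replacement of the same length.
train_replaced = train.remove_columns("label").add_column("label", [0] * len(train))

print(train_zeroed[0]["label"], train_replaced[0]["label"])
```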
1,305,443,253
https://api.github.com/repos/huggingface/datasets/issues/4683
https://github.com/huggingface/datasets/pull/4683
4,683
Update create dataset card docs
closed
1
2022-07-15T00:41:29
2022-07-18T17:26:00
2022-07-18T13:24:10
stevhliu
[ "documentation" ]
This PR proposes removing the [online dataset card creator](https://huggingface.co/datasets/card-creator/) in favor of simply copy/pasting a template and using the [Datasets Tagger app](https://huggingface.co/spaces/huggingface/datasets-tagging) to generate the tags. The Tagger app provides more guidance by showing all possible values a user can select in the dropdown menus, whereas the online dataset card creator doesn't, which can make it difficult to know what tag values to input. Let me know what you think! :)
true
1,304,788,215
https://api.github.com/repos/huggingface/datasets/issues/4682
https://github.com/huggingface/datasets/issues/4682
4,682
weird issue/bug with columns (dataset iterable/stream mode)
open
0
2022-07-14T13:26:47
2022-07-14T13:26:47
null
eunseojo
[]
I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". The original files are JSONL formatted. I was trying to iterate through it in streaming mode and grab all "score_title_description" values, but I kept getting a key-not-found error after a certain point of iteration. I found that some JSON objects in the file don't have "score_title_description". In SOME cases this returns None, and in others it raises a KeyError. Why is there an inconsistency here, and how can I fix it?
false
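As a practical note on the report above: one plausible explanation is that whether a missing field shows up as `None` or as a `KeyError` depends on whether that field made it into the schema inferred for the file being streamed, so defensive access on the consumer side is the simplest guard. A small illustrative sketch (only the dataset name comes from the issue; the split name and loading call are assumptions):

```python
from datasets import load_dataset

dataset = load_dataset("CloverSearch/cc-news-mutlilingual", split="train", streaming=True)

scores = []
for example in dataset:
    # .get() avoids KeyError when the field is absent; missing values stay None.
    score = example.get("score_title_description")
    if score is not None:
        scores.append(score)
    if len(scores) >= 1000:
        break
```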
1,304,617,484
https://api.github.com/repos/huggingface/datasets/issues/4681
https://github.com/huggingface/datasets/issues/4681
4,681
IndexError when loading ImageFolder
closed
2
2022-07-14T10:57:55
2022-07-25T12:37:54
2022-07-25T12:37:54
johko
[ "bug" ]
## Describe the bug Loading an image dataset with `imagefolder` throws `IndexError: list index out of range` when the given folder contains a non-image file (like a csv). ## Steps to reproduce the bug Put a csv file in a folder with images and load it: ```python import datasets datasets.load_dataset("imagefolder", data_dir=path/to/folder) ``` ## Expected results I would expect a better error message, like `Unsupported file` or even the dataset loader just ignoring every file that is not an image in that case. ## Actual results Here is the whole traceback: ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.11.0-051100-generic-x86_64-with-glibc2.27 - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
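Until the loader handles this more gracefully, a common way to sidestep the issue above is to hand `imagefolder` an explicit glob of image files instead of the whole directory, so a stray CSV is never picked up. A short sketch (the paths are placeholders):

```python
from datasets import load_dataset

# Only match image extensions, so metadata or CSV files in the folder are ignored.
dataset = load_dataset(
    "imagefolder",
    data_files={"train": ["path/to/folder/**/*.jpg", "path/to/folder/**/*.png"]},
)
print(dataset["train"][0])
```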
1,304,534,770
https://api.github.com/repos/huggingface/datasets/issues/4680
https://github.com/huggingface/datasets/issues/4680
4,680
Dataset Viewer issue for codeparrot/xlcost-text-to-code
closed
5
2022-07-14T09:45:50
2022-07-18T16:37:00
2022-07-18T16:04:36
loubnabnl
[]
### Link https://huggingface.co/datasets/codeparrot/xlcost-text-to-code ### Description Error ``` Server Error Status code: 400 Exception: TypeError Message: 'NoneType' object is not iterable ``` Before I made a minor change in the dataset script (removing some comments), the viewer was working, but not properly: it wasn't showing the dataset subsets. The data can still be loaded successfully. Thanks! ### Owner Yes
false
1,303,980,648
https://api.github.com/repos/huggingface/datasets/issues/4679
https://github.com/huggingface/datasets/pull/4679
4,679
Added method to remove excess nesting in a DatasetDict
closed
11
2022-07-13T21:49:37
2022-07-21T15:55:26
2022-07-21T10:55:02
CakeCrusher
[]
Added the ability for a DatasetDict to remove additional nested layers within its features to avoid conflicts when collating. It is meant to accompany [this PR](https://github.com/huggingface/transformers/pull/18119) to resolve the same issue [#15505](https://github.com/huggingface/transformers/issues/15505). @stas00 @lhoestq
true
1,303,741,432
https://api.github.com/repos/huggingface/datasets/issues/4678
https://github.com/huggingface/datasets/issues/4678
4,678
Can't pass streaming dataset to dataloader after take()
open
1
2022-07-13T17:34:18
2022-07-14T13:07:21
null
zankner
[ "bug" ]
## Describe the bug I am trying to pass a streaming version of c4 to a dataloader, but it can't be passed after I call `dataset.take(n)`. Some functions such as `shuffle()` can be applied without breaking the dataloader but not take. ## Steps to reproduce the bug ```python import datasets import torch dset = datasets.load_dataset(path='c4', name='en', split="train", streaming=True) dset = dset.take(50_000) dset = dset.with_format("torch") num_workers = 8 batch_size = 512 loader = torch.utils.data.DataLoader(dataset=dset, batch_size=batch_size, num_workers=num_workers) for batch in loader: ... ``` ## Expected results No error thrown when iterating over the dataloader ## Actual results Original Traceback (most recent call last): File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.9/dist-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch data.append(next(self.dataset_iter)) File "/root/.local/lib/python3.9/site-packages/datasets/formatting/dataset_wrappers/torch_iterable_dataset.py", line 48, in __iter__ for key, example in self._iter_shard(shard_idx): File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 586, in _iter_shard yield from ex_iterable.shard_data_sources(shard_idx) File "/root/.local/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 60, in shard_data_sources raise NotImplementedError(f"{type(self)} doesn't implement shard_data_sources yet") NotImplementedError: <class 'datasets.iterable_dataset.TakeExamplesIterable'> doesn't implement shard_data_sources yet ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.4.0-120-generic-x86_64-with-glibc2.31 - Python version: 3.9.13 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,302,258,440
https://api.github.com/repos/huggingface/datasets/issues/4677
https://github.com/huggingface/datasets/issues/4677
4,677
Random 400 Client Error when pushing dataset
closed
2
2022-07-12T15:56:44
2023-02-07T13:54:10
2023-02-07T13:54:10
msis
[ "bug" ]
## Describe the bug When pushing a dataset, the client errors randomly with `Bad Request for url:...`. At the next call, a new parquet file is created for each shard. The client may fail at any random shard. ## Steps to reproduce the bug ```python dataset.push_to_hub("ORG/DATASET", private=True, branch="main") ``` ## Expected results Push all the dataset to the Hub with no duplicates. If it fails, it should retry or fail, but continue from the last failed shard. ## Actual results ``` --------------------------------------------------------------------------- HTTPError Traceback (most recent call last) testing.ipynb Cell 29 in <cell line: 1>() ----> [1](testing.ipynb?line=0) dataset.push_to_hub("ORG/DATASET", private=True, branch="main") File ~/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py:4297, in Dataset.push_to_hub(self, repo_id, split, private, token, branch, max_shard_size, shard_size, embed_external_files) 4291 warnings.warn( 4292 "'shard_size' was renamed to 'max_shard_size' in version 2.1.1 and will be removed in 2.4.0.", 4293 FutureWarning, 4294 ) 4295 max_shard_size = shard_size -> 4297 repo_id, split, uploaded_size, dataset_nbytes, repo_files, deleted_size = self._push_parquet_shards_to_hub( 4298 repo_id=repo_id, 4299 split=split, 4300 private=private, 4301 token=token, 4302 branch=branch, 4303 max_shard_size=max_shard_size, 4304 embed_external_files=embed_external_files, 4305 ) 4306 organization, dataset_name = repo_id.split("/") 4307 info_to_dump = self.info.copy() File ~/.local/lib/python3.9/site-packages/datasets/arrow_dataset.py:4195, in Dataset._push_parquet_shards_to_hub(self, repo_id, split, private, token, branch, max_shard_size, embed_external_files) 4193 shard.to_parquet(buffer) 4194 uploaded_size += buffer.tell() -> 4195 _retry( 4196 api.upload_file, 4197 func_kwargs=dict( 4198 path_or_fileobj=buffer.getvalue(), 4199 path_in_repo=shard_path_in_repo, 4200 repo_id=repo_id, 4201 token=token, 4202 repo_type="dataset", 4203 revision=branch, 4204 identical_ok=False, 4205 ), 4206 exceptions=HTTPError, 4207 status_codes=[504], 4208 base_wait_time=2.0, 4209 max_retries=5, 4210 max_wait_time=20.0, 4211 ) 4212 shards_path_in_repo.append(shard_path_in_repo) 4214 # Cleanup to remove unused files File ~/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py:284, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time) 282 except exceptions as err: 283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes): --> 284 raise err 285 else: 286 sleep_time = min(max_wait_time, base_wait_time * 2**retry) # Exponential backoff File ~/.local/lib/python3.9/site-packages/datasets/utils/file_utils.py:281, in _retry(func, func_args, func_kwargs, exceptions, status_codes, max_retries, base_wait_time, max_wait_time) 279 while True: 280 try: --> 281 return func(*func_args, **func_kwargs) 282 except exceptions as err: 283 if retry >= max_retries or (status_codes and err.response.status_code not in status_codes): File ~/.local/lib/python3.9/site-packages/huggingface_hub/hf_api.py:1967, in HfApi.upload_file(self, path_or_fileobj, path_in_repo, repo_id, token, repo_type, revision, identical_ok, commit_message, commit_description, create_pr) 1957 commit_message = ( 1958 commit_message 1959 if commit_message is not None 1960 else f"Upload {path_in_repo} with huggingface_hub" 1961 ) 1962 operation = CommitOperationAdd( 1963 path_or_fileobj=path_or_fileobj, 1964 path_in_repo=path_in_repo, 1965 ) 
-> 1967 pr_url = self.create_commit( 1968 repo_id=repo_id, 1969 repo_type=repo_type, 1970 operations=[operation], 1971 commit_message=commit_message, 1972 commit_description=commit_description, 1973 token=token, 1974 revision=revision, 1975 create_pr=create_pr, 1976 ) 1977 if pr_url is not None: 1978 re_match = re.match(REGEX_DISCUSSION_URL, pr_url) File ~/.local/lib/python3.9/site-packages/huggingface_hub/hf_api.py:1844, in HfApi.create_commit(self, repo_id, operations, commit_message, commit_description, token, repo_type, revision, create_pr, num_threads) 1836 commit_url = f"{self.endpoint}/api/{repo_type}s/{repo_id}/commit/{revision}" 1838 commit_resp = requests.post( 1839 url=commit_url, 1840 headers={"Authorization": f"Bearer {token}"}, 1841 json=commit_payload, 1842 params={"create_pr": 1} if create_pr else None, 1843 ) -> 1844 _raise_for_status(commit_resp) 1845 return commit_resp.json().get("pullRequestUrl", None) File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:84, in _raise_for_status(request) 76 if request.status_code == 401: 77 # The repo was not found and the user is not Authenticated 78 raise RepositoryNotFoundError( 79 f"401 Client Error: Repository Not Found for url: {request.url}. If the" 80 " repo is private, make sure you are authenticated. (Request ID:" 81 f" {request_id})" 82 ) ---> 84 _raise_with_request_id(request) File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:95, in _raise_with_request_id(request) 92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str): 93 e.args = (e.args[0] + f" (Request ID: {request_id})",) + e.args[1:] ---> 95 raise e File ~/.local/lib/python3.9/site-packages/huggingface_hub/utils/_errors.py:90, in _raise_with_request_id(request) 88 request_id = request.headers.get("X-Request-Id") 89 try: ---> 90 request.raise_for_status() 91 except Exception as e: 92 if request_id is not None and len(e.args) > 0 and isinstance(e.args[0], str): File ~/.local/lib/python3.9/site-packages/requests/models.py:1021, in Response.raise_for_status(self) 1016 http_error_msg = ( 1017 f"{self.status_code} Server Error: {reason} for url: {self.url}" 1018 ) 1020 if http_error_msg: -> 1021 raise HTTPError(http_error_msg, response=self) HTTPError: 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/ORG/DATASET/commit/main (Request ID: a_F0IQAHJdxGKVRYyu1cF) ``` ## Environment info - `datasets` version: 2.3.2 - Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.31 - Python version: 3.9.4 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,302,202,028
https://api.github.com/repos/huggingface/datasets/issues/4676
https://github.com/huggingface/datasets/issues/4676
4,676
Dataset.map gets stuck on _cast_to_python_objects
closed
9
2022-07-12T15:09:58
2022-10-03T13:01:04
2022-10-03T13:01:03
srobertjames
[ "bug", "good first issue" ]
## Describe the bug `Dataset.map`, when fed a Huggingface Tokenizer as its map func, can sometimes spend huge amounts of time doing casts. A minimal example follows. Not all usages suffer from this. For example, I profiled the preprocessor at https://github.com/huggingface/notebooks/blob/main/examples/question_answering.ipynb , and it did _not_ have this problem. However, I'm at a loss to figure out how it avoids it, as the example below is simple and minimal and still has this problem. This casting, where it occurs, causes the `Dataset.map` to run approximately 7x slower than it runs for code which does not cause this casting. This may be related to https://github.com/huggingface/datasets/issues/1046 . However, the tokenizer is _not_ set to return Tensors. ## Steps to reproduce the bug A minimal, self-contained example to reproduce is below: ```python import transformers from transformers import AutoTokenizer from datasets import load_dataset import torch import cProfile pretrained = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(pretrained) squad = load_dataset('squad') squad_train = squad['train'] squad_tiny = squad_train.select(range(5000)) assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast) def tokenize(ds): tokens = tokenizer(text=ds['question'], text_pair=ds['context'], add_special_tokens=True, padding='max_length', truncation='only_second', max_length=160, stride=32, return_overflowing_tokens=True, return_offsets_mapping=True, ) return tokens cmd = 'squad_tiny.map(tokenize, batched=True, remove_columns=squad_tiny.column_names)' cProfile.run(cmd, sort='tottime') ``` ## Actual results The code works, but takes 10-25 sec per batch (about 7x slower than non-casting code), with the following profile. Note that `_cast_to_python_objects` is the culprit. 
``` 63524075 function calls (58206482 primitive calls) in 121.836 seconds Ordered by: internal time ncalls tottime percall cumtime percall filename:lineno(function) 5274034/40 68.751 0.000 111.060 2.776 features.py:262(_cast_to_python_objects) 42223832 24.077 0.000 33.310 0.000 {built-in method builtins.isinstance} 16338/20 5.121 0.000 111.053 5.553 features.py:361(<listcomp>) 5274135 4.747 0.000 4.749 0.000 {built-in method _abc._abc_instancecheck} 80/40 4.731 0.059 116.292 2.907 {pyarrow.lib.array} 5274135 4.485 0.000 9.234 0.000 abc.py:96(__instancecheck__) 2661564/2645196 2.959 0.000 4.298 0.000 features.py:1081(_check_non_null_non_empty_recursive) 5 2.786 0.557 2.786 0.557 {method 'encode_batch' of 'tokenizers.Tokenizer' objects} 2668052 0.930 0.000 0.930 0.000 {built-in method builtins.len} 5000 0.930 0.000 0.938 0.000 tokenization_utils_fast.py:187(_convert_encoding) 5 0.750 0.150 0.808 0.162 {method 'to_pydict' of 'pyarrow.lib.Table' objects} 1 0.444 0.444 121.749 121.749 arrow_dataset.py:2501(_map_single) 40 0.375 0.009 116.291 2.907 arrow_writer.py:151(__arrow_array__) 10 0.066 0.007 0.066 0.007 {method 'write_batch' of 'pyarrow.lib._CRecordBatchWriter' objects} 1 0.060 0.060 121.835 121.835 fingerprint.py:409(wrapper) 11387/5715 0.049 0.000 0.175 0.000 {built-in method builtins.getattr} 36 0.049 0.001 0.049 0.001 {pyarrow._compute.call_function} 15000 0.040 0.000 0.040 0.000 _collections_abc.py:719(__iter__) 3 0.023 0.008 0.023 0.008 {built-in method _imp.create_dynamic} 77 0.020 0.000 0.020 0.000 {built-in method builtins.dir} 37 0.019 0.001 0.019 0.001 socket.py:543(send) 15 0.017 0.001 0.017 0.001 tokenization_utils_fast.py:460(<listcomp>) 432/421 0.015 0.000 0.024 0.000 traitlets.py:1388(_notify_observers) 5000 0.015 0.000 0.018 0.000 _collections_abc.py:672(keys) 51 0.014 0.000 0.042 0.001 traitlets.py:276(getmembers) 5 0.014 0.003 3.775 0.755 tokenization_utils_fast.py:392(_batch_encode_plus) 3/1 0.014 0.005 0.035 0.035 {built-in method _imp.exec_dynamic} 5 0.012 0.002 0.950 0.190 tokenization_utils_fast.py:438(<listcomp>) 31626 0.012 0.000 0.012 0.000 {method 'append' of 'list' objects} 1532/1001 0.011 0.000 0.189 0.000 traitlets.py:643(get) 5 0.009 0.002 3.796 0.759 arrow_dataset.py:2631(apply_function_on_filtered_inputs) 51 0.009 0.000 0.062 0.001 traitlets.py:1766(traits) 5 0.008 0.002 3.784 0.757 tokenization_utils_base.py:2632(batch_encode_plus) 368 0.007 0.000 0.044 0.000 traitlets.py:1715(_get_trait_default_generator) 26 0.007 0.000 0.022 0.001 traitlets.py:1186(setup_instance) 51 0.006 0.000 0.010 0.000 traitlets.py:1781(<listcomp>) 80/32 0.006 0.000 0.052 0.002 table.py:1758(cast_array_to_feature) 684 0.006 0.000 0.007 0.000 {method 'items' of 'dict' objects} 4344/1794 0.006 0.000 0.192 0.000 traitlets.py:675(__get__) ... ``` ## Environment info I observed this on both Google colab and my local workstation: ### Google colab - `datasets` version: 2.3.2 - Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.13 - PyArrow version: 6.0.1 - Pandas version: 1.3.5 ### Local - `datasets` version: 2.3.2 - Platform: Windows-7-6.1.7601-SP1 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,302,193,649
https://api.github.com/repos/huggingface/datasets/issues/4675
https://github.com/huggingface/datasets/issues/4675
4,675
Unable to use dataset with PyTorch dataloader
open
1
2022-07-12T15:04:04
2022-07-14T14:17:46
null
BlueskyFR
[ "bug" ]
## Describe the bug When using `.with_format("torch")`, an arrow table is returned and I am unable to use it by passing it to a PyTorch DataLoader: please see the code below. ## Steps to reproduce the bug ```python from datasets import load_dataset from torch.utils.data import DataLoader ds = load_dataset( "para_crawl", name="enfr", cache_dir="/tmp/test/", split="train", keep_in_memory=True, ) dataloader = DataLoader(ds.with_format("torch"), num_workers=32) print(next(iter(dataloader))) ``` Is there something I am doing wrong? The documentation does not say much about the behavior of `.with_format()` so I feel like I am a bit stuck here :-/ Thanks in advance for your help! ## Expected results The code should run with no error ## Actual results ``` AttributeError: 'str' object has no attribute 'dtype' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.18.0-348.el8.x86_64-x86_64-with-glibc2.28 - Python version: 3.10.4 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,301,294,844
https://api.github.com/repos/huggingface/datasets/issues/4674
https://github.com/huggingface/datasets/issues/4674
4,674
Issue loading datasets -- pyarrow.lib has no attribute
closed
1
2022-07-11T22:10:44
2023-02-28T18:06:55
2023-02-28T18:06:55
margotwagner
[ "bug" ]
## Describe the bug I am trying to load sentiment analysis datasets from huggingface, but any dataset I try to use via load_dataset, I get the same error: `AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'` ## Steps to reproduce the bug ```python dataset = load_dataset("glue", "cola") ``` ## Expected results Download datasets without issue. ## Actual results `AttributeError: module 'pyarrow.lib' has no attribute 'IpcReadOptions'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 8.0.0 - Pandas version: 1.1.0
false
1,301,010,331
https://api.github.com/repos/huggingface/datasets/issues/4673
https://github.com/huggingface/datasets/issues/4673
4,673
load_datasets on csv returns everything as a string
closed
3
2022-07-11T17:30:24
2024-11-05T03:55:10
2022-07-12T13:33:08
courtneysprouse
[ "bug" ]
## Describe the bug If you use: `conll_dataset.to_csv("ner_conll.csv")` It will create a csv file with all of your data as expected, however when you load it with: `conll_dataset = load_dataset("csv", data_files="ner_conll.csv")` everything is read in as a string. For example if I look at everything in 'ner_tags' I get back `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']` instead of what I originally saved which was `[[3, 0, 7, 0, 0, 0, 7, 0, 0], [1, 2], [5, 0]]` I think maybe there is something funky going on with the csv delimiter ## Steps to reproduce the bug ```python # Sample code to reproduce the bug #load original conll dataset orig_conll = load_dataset("conll2003") #save original conll as a csv orig_conll.to_csv("ner_conll.csv") #reload conll data as a csv new_conll = load_dataset("csv", data_files="ner_conll.csv")` ``` ## Expected results A clear and concise description of the expected results. I would expect the data be returned as the data type I saved it as. I.e. if I save a list of ints [[3, 0, 7, 0, 0, 0, 7, 0, 0]], I shouldnt get back a string ['[3 0 7 0 0 0 7 0 0]'] I also get back a string when I pass a list of strings ['EU', 'rejects', 'German', 'call', 'to', 'boycott', 'British', 'lamb', '.'] ## Actual results A list of strings `['[3 0 7 0 0 0 7 0 0]', '[1 2]', '[5 0]']` A string "['EU' 'rejects' 'German' 'call' 'to' 'boycott' 'British' 'lamb' '.']" ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.18.3 - Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.17 - Python version: 3.8.13 - PyArrow version: 8.0.0
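For reference, a minimal sketch of one possible workaround (not an official recommendation, and the file name below is illustrative): since CSV has no notion of list or nested columns, round-tripping through JSON Lines keeps columns such as `ner_tags` typed as lists instead of flattening them to strings.

```python
from datasets import load_dataset

orig_conll = load_dataset("conll2003")

# JSON Lines keeps list columns typed, unlike CSV which serializes them to plain strings.
orig_conll["train"].to_json("ner_conll_train.jsonl")

new_conll = load_dataset("json", data_files="ner_conll_train.jsonl")
print(new_conll["train"][0]["ner_tags"])  # e.g. [3, 0, 7, 0, 0, 0, 7, 0, 0], not a string
```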
false
1,300,911,467
https://api.github.com/repos/huggingface/datasets/issues/4672
https://github.com/huggingface/datasets/pull/4672
4,672
Support extract 7-zip compressed data files
closed
2
2022-07-11T15:56:51
2022-07-15T13:14:27
2022-07-15T13:02:07
albertvillanova
[]
Fix partially #3541, fix #4670.
true
1,300,385,909
https://api.github.com/repos/huggingface/datasets/issues/4671
https://github.com/huggingface/datasets/issues/4671
4,671
Dataset Viewer issue for wmt16
closed
6
2022-07-11T08:34:11
2022-09-13T13:27:02
2022-09-08T08:16:06
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/wmt16 ### Description [Reported](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/12#62cb83f14c7f35284e796f9c) by a user of AutoTrain Evaluate. AFAIK this dataset was working 1-2 weeks ago, and I'm not sure how to interpret this error. ``` Status code: 400 Exception: NotImplementedError Message: This is a abstract method ``` Thanks! ### Owner No
false
1,299,984,246
https://api.github.com/repos/huggingface/datasets/issues/4670
https://github.com/huggingface/datasets/issues/4670
4,670
Can't extract files from `.7z` zipfile using `download_and_extract`
closed
5
2022-07-10T18:16:49
2022-07-15T13:02:07
2022-07-15T13:02:07
bhavitvyamalik
[ "bug" ]
## Describe the bug I'm adding a new dataset which is a `.7z` zip file in Google drive and contains 3 json files inside. I'm able to download the data files using `download_and_extract` but after downloading it throws this error: ``` >>> dataset = load_dataset("./datasets/mantis/") Using custom data configuration default Downloading and preparing dataset mantis/default to /Users/bhavitvyamalik/.cache/huggingface/datasets/mantis/default/1.1.0/611affa804ec53e2055a335cc1b8b213bb5a0b5142d919967729d5ee23c6bab4... Downloading data: 100%|█████████████████████████████████████████████████████████| 77.2M/77.2M [00:23<00:00, 3.28MB/s] /Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/load.py", line 1745, in load_dataset use_auth_token=use_auth_token, File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 595, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/bhavitvyamalik/Desktop/work/hf/datasets/src/datasets/builder.py", line 690, in _download_and_prepare ) from None OSError: Cannot find data file. Original error: [Errno 20] Not a directory: '/Users/bhavitvyamalik/.cache/huggingface/datasets/downloads/fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6/merged_train.json' ``` just before generating the splits. I checked `fc3d70123c9de8407587a59aa426c37819cf2bf016795d33270e8a1d558a34e6` file and it's `7z` zip file (similar to downloaded Google drive file) which means it didn't get unzip. Do I need to unzip it separately and then pass the paths for train,dev,test files in `SplitGenerator`? ## Environment info - `datasets` version: 1.18.4.dev0 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.8 - PyArrow version: 5.0.0
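As a stopgap until 7-zip support is added, one possible sketch (paths are placeholders, and `py7zr` is an external dependency not shipped with `datasets`) is to extract the downloaded archive manually and point the split generators at the extracted JSON files:

```python
import os
import py7zr  # external dependency, not installed by `datasets` itself

archive_path = "/path/to/downloaded/archive.7z"  # placeholder for the cached download
extract_dir = os.path.join(os.path.dirname(archive_path), "extracted")
os.makedirs(extract_dir, exist_ok=True)

# Extract all members of the 7z archive into a regular directory.
with py7zr.SevenZipFile(archive_path, mode="r") as archive:
    archive.extractall(path=extract_dir)

# The extracted files (e.g. merged_train.json) can then be passed to the
# SplitGenerators instead of the raw archive path.
print(os.listdir(extract_dir))
```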
false
1,299,848,003
https://api.github.com/repos/huggingface/datasets/issues/4669
https://github.com/huggingface/datasets/issues/4669
4,669
loading oscar-corpus/OSCAR-2201 raises an error
closed
1
2022-07-10T07:09:30
2022-07-11T09:27:49
2022-07-11T09:27:49
vitalyshalumov
[ "bug" ]
## Describe the bug load_dataset('oscar-2201', 'af') raises an error: Traceback (most recent call last): File "/usr/lib/python3.8/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset builder_instance = load_dataset_builder( File ".../lib/python3.8/site-packages/datasets/load.py", line 1439, in load_dataset_builder dataset_module = dataset_module_factory( File ".../lib/python3.8/site-packages/datasets/load.py", line 1189, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at .../oscar-2201/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/oscar-2201/oscar-2201.py I've tried other permutations such as : oscar_22 = load_dataset('oscar-2201', 'af',use_auth_token=True) oscar_22 = load_dataset('oscar-corpus/OSCAR-2201', 'af',use_auth_token=True) oscar_22 = load_dataset('oscar-2201', 'af') oscar_22 = load_dataset('oscar-corpus/OSCAR-2201') with the same unfortunate result. ## Steps to reproduce the bug oscar_22 = load_dataset('oscar-2201', 'af',use_auth_token=True) oscar_22 = load_dataset('oscar-corpus/OSCAR-2201', 'af',use_auth_token=True) oscar_22 = load_dataset('oscar-2201', 'af') oscar_22 = load_dataset('oscar-corpus/OSCAR-2201') # Sample code to reproduce the bug ``` ## Expected results loaded data ## Actual results Traceback (most recent call last): File "/usr/lib/python3.8/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 1, in <module> File "..python3.8/site-packages/datasets/load.py", line 1656, in load_dataset builder_instance = load_dataset_builder( File ".../lib/python3.8/site-packages/datasets/load.py", line 1439, in load_dataset_builder dataset_module = dataset_module_factory( File ".../lib/python3.8/site-packages/datasets/load.py", line 1189, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at .../oscar-2201/oscar-2201.py or any data file in the same directory. Couldn't find 'oscar-2201' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/oscar-2201/oscar-2201.py ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,299,735,893
https://api.github.com/repos/huggingface/datasets/issues/4668
https://github.com/huggingface/datasets/issues/4668
4,668
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
closed
1
2022-07-09T18:04:13
2022-07-11T07:47:47
2022-07-11T07:47:47
ghost
[ "dataset-viewer" ]
### Link https://huggingface.co/hungnm/multilingual-amazon-review-sentiment ### Description _No response_ ### Owner Yes
false
1,299,735,703
https://api.github.com/repos/huggingface/datasets/issues/4667
https://github.com/huggingface/datasets/issues/4667
4,667
Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed
closed
0
2022-07-09T18:03:15
2022-07-11T07:47:15
2022-07-11T07:47:15
ghost
[ "duplicate" ]
### Link _No response_ ### Description _No response_ ### Owner _No response_
false
1,299,732,238
https://api.github.com/repos/huggingface/datasets/issues/4666
https://github.com/huggingface/datasets/issues/4666
4,666
Issues with concatenating datasets
closed
2
2022-07-09T17:45:14
2022-07-12T17:16:15
2022-07-12T17:16:14
ChenghaoMou
[ "bug" ]
## Describe the bug It is impossible to concatenate datasets if a feature is sequence of dict in one dataset and a dict of sequence in another. But based on the document, it should be automatically converted. > A [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence) with a internal dictionary feature will be automatically converted into a dictionary of lists. This behavior is implemented to have a compatilbity layer with the TensorFlow Datasets library but may be un-wanted in some cases. If you don’t want this behavior, you can use a python list instead of the [datasets.Sequence](https://huggingface.co/docs/datasets/v2.3.2/en/package_reference/main_classes#datasets.Sequence). ## Steps to reproduce the bug ```python from datasets import concatenate_datasets, load_dataset squad = load_dataset("squad_v2") squad["train"].to_json("output.jsonl", lines=True) temp = load_dataset("json", data_files={"train": "output.jsonl"}) concatenate_datasets([temp["train"], squad["train"]]) ``` ## Expected results No error executing that code ## Actual results ``` ValueError: The features can't be aligned because the key answers of features {'id': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'context': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'answers': Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None)} has unexpected type - Sequence(feature={'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int32', id=None)}, length=-1, id=None) (expected either {'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'answer_start': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)} or Value("null"). ``` ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.8.11 - PyArrow version: 6.0.1 - Pandas version: 1.3.5
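One workaround that may help here (a sketch only, not verified against this exact dataset) is to cast the JSON-loaded split to the original split's features before concatenating, so both sides share the same feature types:

```python
from datasets import concatenate_datasets, load_dataset

squad_train = load_dataset("squad_v2", split="train")
squad_train.to_json("output.jsonl", lines=True)
reloaded = load_dataset("json", data_files={"train": "output.jsonl"}, split="train")

# Align the re-loaded schema with the original one before concatenating.
reloaded = reloaded.cast(squad_train.features)
combined = concatenate_datasets([reloaded, squad_train])
print(combined.features)
```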
false
1,299,652,638
https://api.github.com/repos/huggingface/datasets/issues/4665
https://github.com/huggingface/datasets/issues/4665
4,665
Unable to create dataset having Python dataset script only
closed
1
2022-07-09T11:45:46
2022-07-11T07:10:09
2022-07-11T07:10:01
aleSuglia
[ "bug" ]
## Describe the bug Hi there, I'm trying to add the following dataset to Huggingface datasets: https://huggingface.co/datasets/Heriot-WattUniversity/dialog-babi/blob/ I'm trying to do so using the CLI commands but seems that this command generates the wrong `dataset_info.json` file (you can find it in the repo already): ``` datasets-cli test Heriot-WattUniversity/dialog-babi/dialog_babi.py --save_infos --all-configs ``` while it errors when I remove the python script: ``` datasets-cli test Heriot-WattUniversity/dialog-babi/ --save_infos --all-configs ``` The error message is the following: ``` FileNotFoundError: Unable to resolve any data file that matches '['**']' at /Users/as2180/workspace/Heriot-WattUniversity/dialog-babi with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip'] ``` ## Environment info - `datasets` version: 2.3.2 - Platform: macOS-12.4-arm64-arm-64bit - Python version: 3.9.9 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,299,571,212
https://api.github.com/repos/huggingface/datasets/issues/4664
https://github.com/huggingface/datasets/pull/4664
4,664
Add stanford dog dataset
closed
5
2022-07-09T04:46:07
2022-07-15T13:30:32
2022-07-15T13:15:42
khushmeeet
[]
This PR adds a dataset, related to issue #4504. We are adding the Stanford dog breed dataset. It is a multi-class image classification dataset. Details can be found here - http://vision.stanford.edu/aditya86/ImageNetDogs/ Tests on dummy data are failing currently, which I am looking into.
true
1,299,298,693
https://api.github.com/repos/huggingface/datasets/issues/4663
https://github.com/huggingface/datasets/pull/4663
4,663
Add text decorators
closed
1
2022-07-08T17:51:48
2022-07-18T18:33:14
2022-07-18T18:20:49
stevhliu
[]
This PR adds some decoration to text about different modalities to make it more obvious separate guides exist for audio, vision, and text. The goal is to make it easier for users to discover these guides! ![underline](https://user-images.githubusercontent.com/59462357/178044392-9596693e-9a4a-479a-a282-f1edbd90be1a.png) TODO: - [x] Open PR to support new Tailwind classes
true
1,298,845,369
https://api.github.com/repos/huggingface/datasets/issues/4662
https://github.com/huggingface/datasets/pull/4662
4,662
Fix: conll2003 - fix empty example
closed
1
2022-07-08T10:49:13
2022-07-08T14:14:53
2022-07-08T14:02:42
lhoestq
[]
As reported in https://huggingface.co/datasets/conll2003/discussions/2#62c45a14f93fc97e8260532f, there was an extra empty example at the end of the dataset
true
1,298,374,944
https://api.github.com/repos/huggingface/datasets/issues/4661
https://github.com/huggingface/datasets/issues/4661
4,661
Concurrency bug when using same cache among several jobs
open
3
2022-07-08T01:58:11
2025-04-10T13:21:23
null
ioana-blue
[ "bug" ]
## Describe the bug I used to see this bug with an older version of the datasets. It seems to persist. This is my concrete scenario: I launch several evaluation jobs on a cluster in which I share the file system and I share the cache directory used by huggingface libraries. The evaluation jobs read the same *.csv files. If my jobs get all scheduled pretty much at the same time, there are all kinds of weird concurrency errors. Sometime it crashes silently. This time I got lucky that it crashed with a stack trace that I can share and maybe you get to the bottom of this. If you don't have a similar setup available, it may be hard to reproduce as you really need two jobs accessing the same file at the same time to see this type of bug. ## Steps to reproduce the bug I'm running a modified version of `run_glue.py` script adapted to my use case. I've seen the same problem when running some glue datasets as well (so it's not specific to loading the datasets from csv files). ## Expected results No crash, concurrent access to the (intermediate) files just fine. ## Actual results Crashes due to races/concurrency bugs. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.18.0-348.23.1.el8_5.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 8.0.0 - Pandas version: 1.1.0 Stack trace that I just got with the crash (I've obfuscated some names, it should still be quite informative): ``` Running tokenizer on dataset: 0%| | 0/3 [00:00<?, ?ba/s] Traceback (most recent call last): File "../../src/models//run_*******.py", line 600, in <module> main() File "../../src/models//run_*******.py", line 444, in main raw_datasets = raw_datasets.map( File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/dataset_dict.py", line 770, in map { File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/dataset_dict.py", line 771, in <dictcomp> k: dataset.map( File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2376, in map return self._map_single( File "/*******/envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 551, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 518, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/*******/envs/tr-crt/lib/python3.8/site-packages/datasets/fingerprint.py", line 458, in wrapper out = func(self, *args, **kwargs) File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2776, in _map_single buf_writer, writer, tmp_file = init_buffer_and_writer() File "/*******//envs/tr-crt/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2696, in init_buffer_and_writer tmp_file = tempfile.NamedTemporaryFile("wb", dir=os.path.dirname(cache_file_name), delete=False) File "/*******//envs/tr-crt/lib/python3.8/tempfile.py", line 541, in NamedTemporaryFile (fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type) File "/*******//envs/tr-crt/lib/python3.8/tempfile.py", line 250, in _mkstemp_inner fd = _os.open(file, flags, 0o600) FileNotFoundError: [Errno 2] No such file or directory: '/*******/cache-transformers//transformers/csv/default-ef9cd184210742a7/0.0.0/51cce309a08df9c4d82ffd9363bbe090bf173197fc01a71b034e8594995a1a58/tmps8l6j5yc' ``` As I ran 100s of experiments last year for an empirical paper, I ran into 
this type of bug several times. I found several band-aid workarounds, e.g., run one job first that caches the dataset => eliminate concurrency; OR use unique caches => eliminate concurrency (but increase storage space), etc., and they all work fine. I'd like to help you fix this bug as it's really annoying to always apply the workarounds. Let me know what other info from my side could help you figure out the issue. Thanks for your help!
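For illustration, a minimal sketch of the "unique cache per job" workaround mentioned above; the environment variable, paths, and data files are placeholders for whatever per-job identifier the cluster provides:

```python
import os
from datasets import load_dataset

# Give every job its own cache directory so concurrent jobs never write to
# the same intermediate files. SLURM_JOB_ID is only an example identifier.
job_id = os.environ.get("SLURM_JOB_ID", str(os.getpid()))
cache_dir = f"/shared/cache/datasets-{job_id}"

raw_datasets = load_dataset(
    "csv",
    data_files={"train": "train.csv", "validation": "dev.csv"},  # placeholder files
    cache_dir=cache_dir,
)
```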
false
1,297,128,387
https://api.github.com/repos/huggingface/datasets/issues/4660
https://github.com/huggingface/datasets/pull/4660
4,660
Fix _resolve_single_pattern_locally on Windows with multiple drives
closed
2
2022-07-07T09:57:30
2022-07-07T17:03:36
2022-07-07T16:52:07
albertvillanova
[]
Currently, when `_resolve_single_pattern_locally` is called from a different drive than the one in `pattern`, it raises an exception: ``` _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\io\parquet.py:35: in __init__ **kwargs, C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\builder.py:287: in __init__ sanitize_patterns(data_files), base_path=base_path, use_auth_token=use_auth_token C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:761: in from_local_or_remote if not isinstance(patterns_for_key, DataFilesList) C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:723: in from_local_or_remote data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions) C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:321: in resolve_patterns_locally_or_by_urls for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions): C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:239: in _resolve_single_pattern_locally for filepath in glob_iter C:\hostedtoolcache\windows\Python\3.6.8\x64\lib\site-packages\datasets\data_files.py:242: in <listcomp> os.path.relpath(filepath, base_path), os.path.relpath(pattern, base_path) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ path = 'C:\\Users\\runneradmin\\AppData\\Local\\Temp\\pytest-of-runneradmin\\pytest-0\\popen-gw0\\data6\\dataset.parquet' start = '/' ... E ValueError: path is on mount 'C:', start on mount 'D:' ``` This PR makes sure that `base_path` is in the same drive as `pattern`.
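For context, a small sketch of the underlying `os.path` behavior and an illustrative guard (not the exact patch in this PR) that compares drives before calling `relpath`:

```python
import os

# On Windows, relpath cannot relate paths that live on different drives:
#   os.path.relpath("C:\\data\\file.parquet", start="D:\\repo")
#   -> ValueError: path is on mount 'C:', start on mount 'D:'
def safe_relpath(path: str, base_path: str) -> str:
    # Illustrative guard: if the two paths are on different drives, fall back
    # to the drive root of `path` so relpath never crosses mounts.
    if os.path.splitdrive(path)[0].lower() != os.path.splitdrive(base_path)[0].lower():
        base_path = os.path.splitdrive(path)[0] + os.sep
    return os.path.relpath(path, base_path)
```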
true
1,297,094,140
https://api.github.com/repos/huggingface/datasets/issues/4659
https://github.com/huggingface/datasets/pull/4659
4,659
Transfer CI to GitHub Actions
closed
4
2022-07-07T09:29:47
2022-07-12T11:30:20
2022-07-12T11:18:25
albertvillanova
[]
This PR transfers CI from CircleCI to GitHub Actions. The implementation in GitHub Actions tries to be as faithful as possible to the implementation in CircleCI and get the same output results (exceptions below). **IMPORTANT NOTE**: The fast-fail policy (described below) is not finally implemented, so that: - we can continue merging PRs with CI in red because of some random error returned by the Hub - it is not annoying for maintainers to have to relaunch failed CI jobs See comments here: https://github.com/huggingface/datasets/pull/4659#discussion_r918802348 Differences in the implementation in GitHub Actions compared to the CircleCI one: - This PR introduces some *fail-fast* mechanisms to significantly reduce the total time CI is running, both because of environmental impact and because CI in GitHub Actions billing depends on the minutes per month running time (see [About billing for GitHub Actions](https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions)): - All tests *depend* on `check_code_quality` job: only if `check_code_quality` passes, then the other test jobs are launched - The tests are implemented with a matrix strategy (cross-product: OS and PyArrow versions) and fail-fast: if any of the 4 processes fails, the others are cancelled - OS dependencies for Linux (see table below) | OS dependencies | Passed tests | Skipped tests | | --- | ---: | ---: | | libsndfile1-dev | 4786 | 3119 | | libsndfile1 | 4786 | 3119 | | libsndfile1, sox | 4788 | 3117 | - This PR replaces `libsndfile1-dev` with `libsndfile1`: the same number of passing tests but less packages installed - This PR adds `sox`: required by MP3 tests (2 more tests are passed: 4788 instead of 4786) - For tests using PyArrow 6, this PR uses 6.0.1 instead of 6.0.0 TO DO: - [ ] Remove old CircleCI CI: kept for the moment to compare stability and performance Close #4658. ## Comparison between CircleCI and GitHub Actions | | | CircleCI | GitHub Actions | | --- | --- | ---: | ---: | | Ubuntu, pyarrow-latest |||| || Passed tests | 4786 | 4788 | || Duration | 11m 0s | 10m 10s | | Windows, pyarrow-latest |||| || Passed tests | 4783 | 4783 | || Duration | 29m 59s | 22m 56s |
true
1,297,001,390
https://api.github.com/repos/huggingface/datasets/issues/4658
https://github.com/huggingface/datasets/issues/4658
4,658
Transfer CI tests to GitHub Actions
closed
0
2022-07-07T08:10:50
2022-07-12T11:18:25
2022-07-12T11:18:25
albertvillanova
[]
Let's try CI tests using GitHub Actions to see if they are more stable than on CircleCI.
false
1,296,743,133
https://api.github.com/repos/huggingface/datasets/issues/4657
https://github.com/huggingface/datasets/issues/4657
4,657
Add SQuAD2.0 Dataset
closed
2
2022-07-07T03:19:36
2022-07-12T16:14:52
2022-07-12T16:14:52
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *SQuAD2.0* - **Description:** *Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.* - **Paper:** *https://aclanthology.org/P18-2124.pdf* - **Data:** *https://rajpurkar.github.io/SQuAD-explorer/* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,740,266
https://api.github.com/repos/huggingface/datasets/issues/4656
https://github.com/huggingface/datasets/issues/4656
4,656
Add Amazon-QA Dataset
closed
1
2022-07-07T03:15:11
2022-07-14T02:20:12
2022-07-14T02:20:12
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *Amazon-QA* - **Description:** *The dataset is .jsonl format, where each line in the file is a json string that corresponds to a question, existing answers to the question and the extracted review snippets (relevant to the question).* - **Paper:** *https://github.com/amazonqa/amazonqa/tree/master/paper* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/amazon-qa.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,720,896
https://api.github.com/repos/huggingface/datasets/issues/4655
https://github.com/huggingface/datasets/issues/4655
4,655
Simple Wikipedia
closed
1
2022-07-07T02:51:26
2022-07-14T02:16:33
2022-07-14T02:16:33
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *Simple Wikipedia* - **Description:** *Two different versions of the data set now exist. Both were generated by aligning Simple English Wikipedia and English Wikipedia. A complete description of the extraction process can be found in "Simple English Wikipedia: A New Simplification Task", William Coster and David Kauchak (2011).* - **Paper:** *https://aclanthology.org/P11-2117/* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/SimpleWiki.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,716,119
https://api.github.com/repos/huggingface/datasets/issues/4654
https://github.com/huggingface/datasets/issues/4654
4,654
Add Quora Question Triplets Dataset
closed
1
2022-07-07T02:43:42
2022-07-14T02:13:50
2022-07-14T02:13:50
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *Quora Question Triplets* - **Description:** *This dataset consists of over 400,000 lines of potential question duplicate pairs. Each line contains IDs for each question in the pair, the full text for each question, and a binary value that indicates whether the line truly contains a duplicate pair.* - **Paper:** - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/quora_duplicates_triplets.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,702,834
https://api.github.com/repos/huggingface/datasets/issues/4653
https://github.com/huggingface/datasets/issues/4653
4,653
Add Altlex dataset
closed
1
2022-07-07T02:23:02
2022-07-14T02:12:39
2022-07-14T02:12:39
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *Altlex* - **Description:** *Git repository for software associated with the 2016 ACL paper "Identifying Causal Relations Using Parallel Wikipedia Articles.”* - **Paper:** *https://aclanthology.org/P16-1135.pdf* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/altlex.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,697,498
https://api.github.com/repos/huggingface/datasets/issues/4652
https://github.com/huggingface/datasets/issues/4652
4,652
Add Sentence Compression Dataset
closed
1
2022-07-07T02:13:46
2022-07-14T02:11:48
2022-07-14T02:11:48
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *Sentence Compression* - **Description:** *Large corpus of uncompressed and compressed sentences from news articles.* - **Paper:** *https://www.aclweb.org/anthology/D13-1155/* - **Data:** *https://github.com/google-research-datasets/sentence-compression/tree/master/data* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,689,414
https://api.github.com/repos/huggingface/datasets/issues/4651
https://github.com/huggingface/datasets/issues/4651
4,651
Add Flickr 30k Dataset
closed
1
2022-07-07T01:59:08
2022-07-14T02:09:45
2022-07-14T02:09:45
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *Flickr 30k* - **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. The new images and captions focus on people involved in everyday activities and events.* - **Paper:** *https://transacl.org/ojs/index.php/tacl/article/view/229/33* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,680,037
https://api.github.com/repos/huggingface/datasets/issues/4650
https://github.com/huggingface/datasets/issues/4650
4,650
Add SPECTER dataset
open
1
2022-07-07T01:41:32
2022-07-14T02:07:49
null
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *SPECTER* - **Description:** *SPECTER: Document-level Representation Learning using Citation-informed Transformers* - **Paper:** *https://doi.org/10.18653/v1/2020.acl-main.207* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/specter_train_triples.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,673,712
https://api.github.com/repos/huggingface/datasets/issues/4649
https://github.com/huggingface/datasets/issues/4649
4,649
Add PAQ dataset
closed
1
2022-07-07T01:29:42
2022-07-14T02:06:27
2022-07-14T02:06:27
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *PAQ* - **Description:** *This repository contains code and models to support the research paper PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them* - **Paper:** *https://arxiv.org/abs/2102.07033* - **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/PAQ_pairs.jsonl.gz* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,659,335
https://api.github.com/repos/huggingface/datasets/issues/4648
https://github.com/huggingface/datasets/issues/4648
4,648
Add WikiAnswers dataset
closed
1
2022-07-07T01:06:37
2022-07-14T02:03:40
2022-07-14T02:03:40
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *WikiAnswers* - **Description:** *The WikiAnswers corpus contains clusters of questions tagged by WikiAnswers users as paraphrases. Each cluster optionally contains an answer provided by WikiAnswers users.* - **Paper:** *https://dl.acm.org/doi/10.1145/2623330.2623677* - **Data:** *https://github.com/afader/oqa#wikianswers-corpus* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,311,270
https://api.github.com/repos/huggingface/datasets/issues/4647
https://github.com/huggingface/datasets/issues/4647
4,647
Add Reddit dataset
open
0
2022-07-06T19:49:18
2022-07-06T19:49:18
null
omarespejel
[ "dataset request" ]
## Adding a Dataset - **Name:** *Reddit comments (2015-2018)* - **Description:** *Reddit is an American social news aggregation website, where users can post links, and take part in discussions on these posts. These threaded discussions provide a large corpus, which is converted into a conversational dataset using the tools in this directory.* - **Paper:** *https://arxiv.org/abs/1904.06472* - **Data:** *https://github.com/PolyAI-LDN/conversational-datasets/tree/master/reddit* - **Motivation:** *Dataset for training and evaluating models of conversational response*
false
1,296,027,785
https://api.github.com/repos/huggingface/datasets/issues/4645
https://github.com/huggingface/datasets/pull/4645
4,645
Set HF_SCRIPTS_VERSION to main
closed
1
2022-07-06T15:43:21
2022-07-06T15:56:21
2022-07-06T15:45:05
lhoestq
[]
After renaming "master" to "main", the CI fails with ``` AssertionError: 'https://raw.githubusercontent.com/huggingface/datasets/main/datasets/_dummy/_dummy.py' not found in "Couldn't find a dataset script at /home/circleci/datasets/_dummy/_dummy.py or any data file in the same directory. Couldn't find '_dummy' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/_dummy/_dummy.py" ``` This is because in the CI we were still using `HF_SCRIPTS_VERSION=master`. I changed it to "main"
true
1,296,018,052
https://api.github.com/repos/huggingface/datasets/issues/4644
https://github.com/huggingface/datasets/pull/4644
4,644
[Minor fix] Typo correction
closed
1
2022-07-06T15:37:02
2022-07-06T15:56:32
2022-07-06T15:45:16
cakiki
[]
recieve -> receive
true
1,295,852,650
https://api.github.com/repos/huggingface/datasets/issues/4643
https://github.com/huggingface/datasets/pull/4643
4,643
Rename master to main
closed
3
2022-07-06T13:34:30
2022-07-06T15:36:46
2022-07-06T15:25:08
lhoestq
[]
This PR renames mentions of "master" to "main" in the code base for several cases: - set the default dataset script version to "main" if the local installation of `datasets` is a dev installation - update URLs to this GitHub repository to use "main" - update the DVC benchmark - update the GitHub workflows - update docstrings - update tests to compare the changes in dataset cards against "main"
true
1,295,748,083
https://api.github.com/repos/huggingface/datasets/issues/4642
https://github.com/huggingface/datasets/issues/4642
4,642
Streaming issue for ccdv/pubmed-summarization
closed
3
2022-07-06T12:13:07
2022-07-06T14:17:34
2022-07-06T14:17:34
lewtun
[]
### Link https://huggingface.co/datasets/ccdv/pubmed-summarization ### Description This was reported by a [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/7). It seems like streaming doesn't work due to the way the dataset loading script is defined? ``` Status code: 400 Exception: FileNotFoundError Message: https://huggingface.co/datasets/ccdv/pubmed-summarization/resolve/main/train.zip/train.txt ``` ### Owner No
false
1,295,633,250
https://api.github.com/repos/huggingface/datasets/issues/4641
https://github.com/huggingface/datasets/issues/4641
4,641
Dataset Viewer issue for kmfoda/booksum
closed
3
2022-07-06T10:38:16
2022-07-06T13:25:28
2022-07-06T11:58:06
lewtun
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/kmfoda/booksum ### Description A [user of AutoTrain Evaluate](https://huggingface.co/spaces/autoevaluate/model-evaluator/discussions/9) discovered this dataset cannot be streamed due to: ``` Status code: 400 Exception: ClientResponseError Message: 401, message='Unauthorized', url=URL('https://huggingface.co/datasets/kmfoda/booksum/resolve/47953f583d6967f086cb16a2f4d2346e9834024d/test.csv') ``` I'm not sure why it says "Unauthorized" since it's just a bunch of CSV files in a repo ### Owner No
false
1,295,495,699
https://api.github.com/repos/huggingface/datasets/issues/4640
https://github.com/huggingface/datasets/pull/4640
4,640
Support all split in streaming mode
open
1
2022-07-06T08:56:38
2022-07-06T15:19:55
null
albertvillanova
[]
Fix #4637.
true
1,295,367,322
https://api.github.com/repos/huggingface/datasets/issues/4639
https://github.com/huggingface/datasets/issues/4639
4,639
Add HaGRID -- HAnd Gesture Recognition Image Dataset
open
0
2022-07-06T07:41:32
2022-07-06T07:41:32
null
osanseviero
[ "dataset request" ]
## Adding a Dataset - **Name:** HaGRID -- HAnd Gesture Recognition Image Dataset - **Description:** We introduce a large image dataset HaGRID (HAnd Gesture Recognition Image Dataset) for hand gesture recognition (HGR) systems. You can use it for image classification or image detection tasks. Proposed dataset allows to build HGR systems, which can be used in video conferencing services (Zoom, Skype, Discord, Jazz etc.), home automation systems, the automotive sector, etc. - **Paper:** https://arxiv.org/abs/2206.08219 - **Data:** https://github.com/hukenovs/hagrid Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,295,233,315
https://api.github.com/repos/huggingface/datasets/issues/4638
https://github.com/huggingface/datasets/pull/4638
4,638
The speechocean762 dataset
closed
4
2022-07-06T06:17:30
2022-10-03T09:34:36
2022-10-03T09:34:36
jimbozhang
[ "dataset contribution" ]
[speechocean762](https://www.openslr.org/101/) is a non-native English corpus for pronunciation scoring tasks. It is free for both commercial and non-commercial use. I believe it will be easier to use if it is available on Hugging Face.
true
1,294,818,236
https://api.github.com/repos/huggingface/datasets/issues/4637
https://github.com/huggingface/datasets/issues/4637
4,637
The "all" split breaks streaming
open
6
2022-07-05T21:56:49
2022-07-15T13:59:30
null
cakiki
[ "bug" ]
## Describe the bug Not sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split="all"` ## Steps to reproduce the bug The following works: ```python ds = load_dataset('super_glue', 'wsc.fixed', split='all') ``` The following throws `ValueError: Bad split: all. Available splits: ['train', 'validation', 'test']`: ```python ds = load_dataset('super_glue', 'wsc.fixed', split='all', streaming=True) ``` ## Expected results An iterator over all splits. ## Actual results I had to do the following to achieve the desired result: ```python from itertools import chain ds = load_dataset('super_glue', 'wsc.fixed', streaming=True) it = chain.from_iterable(ds.values()) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31 - Python version: 3.10.5 - PyArrow version: 8.0.0 - Pandas version: 1.4.3
false
1,294,547,836
https://api.github.com/repos/huggingface/datasets/issues/4636
https://github.com/huggingface/datasets/issues/4636
4,636
Add info in docs about behavior of download_config.num_proc
closed
0
2022-07-05T17:01:00
2022-07-28T10:40:32
2022-07-28T10:40:32
nateraw
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I went to override `download_config.num_proc` and was confused about what was happening under the hood. It would be nice to have the behavior documented a bit better so folks know what's happening when they use it. **Describe the solution you'd like** - Add note about how the default number of workers is 16. Related code: https://github.com/huggingface/datasets/blob/7bcac0a6a0fc367cc068f184fa132b8de8dfa11d/src/datasets/download/download_manager.py#L299-L302 - Add note that if the number of workers is higher than the number of files to download, it won't use multiprocessing. **Describe alternatives you've considered** maybe it would also be nice to set `num_proc` = `num_files` when `num_proc` > `num_files`. **Additional context** ...
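For anyone landing here, a hedged example of how `num_proc` can be overridden today (the dataset name and worker count are illustrative only):

```python
from datasets import DownloadConfig, load_dataset

# Ask for up to 8 parallel download workers. Per the code linked above,
# multiprocessing is only used when there are enough files to download.
download_config = DownloadConfig(num_proc=8)
ds = load_dataset("oscar", "unshuffled_deduplicated_af", download_config=download_config)
```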
false
1,294,475,931
https://api.github.com/repos/huggingface/datasets/issues/4635
https://github.com/huggingface/datasets/issues/4635
4,635
Dataset Viewer issue for vadis/sv-ident
closed
6
2022-07-05T15:48:13
2022-07-06T07:13:33
2022-07-06T07:12:14
e-tornike
[ "dataset-viewer" ]
### Link https://huggingface.co/datasets/vadis/sv-ident/viewer/default/validation ### Description Error message when loading validation split in the viewer: ``` Status code: 400 Exception: Status400Error Message: The split cache is empty. ``` ### Owner _No response_
false
1,294,405,251
https://api.github.com/repos/huggingface/datasets/issues/4634
https://github.com/huggingface/datasets/issues/4634
4,634
Can't load the Hausa audio dataset
closed
1
2022-07-05T14:47:36
2022-09-13T14:07:32
2022-09-13T14:07:32
moro23
[]
`common_voice_train = load_dataset("common_voice", "ha", split="train+validation")`
false
1,294,367,783
https://api.github.com/repos/huggingface/datasets/issues/4633
https://github.com/huggingface/datasets/pull/4633
4,633
[data_files] Only match separated split names
closed
5
2022-07-05T14:18:11
2022-07-18T13:20:29
2022-07-18T13:07:33
lhoestq
[]
As reported in https://github.com/huggingface/datasets/issues/4477, the current pattern matching to infer which file goes into which split is too permissive. For example a file "contest.py" would be considered part of a test split (it contains "test") and "seqeval.py" as well (it contains "eval"). In this PR I made the pattern matching more robust by only matching split names **between separators**. The supported separators are dots, dashes, spaces and underscores. I updated the docs accordingly. One detail about the tests: I had to update one test because it was using `PurePath.match` as a reference for globbing, but it doesn't support the `[..]` glob pattern. Therefore I added a `mock_fs` context manager that can be used to easily define a dummy filesystem with certain files in it and run pattern matching tests. Its code comes mostly from test_streaming_download_manager.py Close https://github.com/huggingface/datasets/issues/4477
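To make the intended behavior concrete, here is a small illustrative check (plain `re`, not the actual implementation in this PR) of what "matching split names only between separators" means:

```python
import re

# Illustrative check: a split keyword only counts if it is delimited by
# '.', '-', ' ', '_' or the string boundaries.
def mentions_split(filename: str, split: str) -> bool:
    return re.search(rf"(?:^|[._ -]){split}(?:[._ -]|$)", filename) is not None

assert not mentions_split("contest.py", "test")    # "test" is inside "contest"
assert not mentions_split("seqeval.py", "eval")    # "eval" is inside "seqeval"
assert mentions_split("my_test_data.csv", "test")  # separated by underscores
```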
true
1,294,166,880
https://api.github.com/repos/huggingface/datasets/issues/4632
https://github.com/huggingface/datasets/issues/4632
4,632
'sort' method sorts one column only
closed
3
2022-07-05T11:25:26
2023-07-25T15:04:27
2023-07-25T15:04:27
shachardon
[]
The 'sort' method changes the order of one column only (the one defined by the argument 'column'), thus creating a mismatch between a sample's fields. I would expect it to change the order of the samples as a whole, based on the 'column' order.
false
1,293,545,900
https://api.github.com/repos/huggingface/datasets/issues/4631
https://github.com/huggingface/datasets/pull/4631
4,631
Update WinoBias README
closed
1
2022-07-04T20:24:40
2022-07-07T13:23:32
2022-07-07T13:11:47
sashavor
[]
I'm adding some information about Winobias that I got from the paper :smile: I think this makes it a bit clearer!
true
1,293,470,728
https://api.github.com/repos/huggingface/datasets/issues/4630
https://github.com/huggingface/datasets/pull/4630
4,630
fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py.
closed
1
2022-07-04T18:26:55
2022-07-05T15:19:52
2022-07-05T15:08:21
gugarosa
[]
Fix #4612. Apparently, the newest `fsspec` versions do not allow attribute-based access to submodules that have not been imported, such as `fsspec.asyn`. Thus, @mariosasko suggested adding the missing submodule import so that it can be accessed.
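For context, a minimal illustration of the Python behavior behind the fix: a submodule is not reachable as an attribute of its package until it has been imported somewhere, so the wrapper now imports it explicitly. The exact attributes used by `torch_iterable_dataset.py` are not shown here.

```python
# Depending on the fsspec version, this can fail because the submodule was
# never imported:
#
#   import fsspec
#   fsspec.asyn          # may raise AttributeError
#
# Importing the submodule explicitly makes the attribute access safe:
import fsspec.asyn

print(fsspec.asyn)  # <module 'fsspec.asyn' ...>
```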
true
1,293,418,800
https://api.github.com/repos/huggingface/datasets/issues/4629
https://github.com/huggingface/datasets/issues/4629
4,629
Rename repo default branch to main
closed
0
2022-07-04T17:16:10
2022-07-06T15:49:57
2022-07-06T15:49:57
albertvillanova
[ "maintenance" ]
Rename repository default branch to `main` (instead of current `master`). Once renamed, users will have to manually update their local repos: - [ ] Upstream: ``` git branch -m master main git fetch upstream main git branch -u upstream/main main git remote set-head upstream -a ``` - [ ] Origin: Rename fork default branch as well at: https://github.com/USERNAME/lam/settings/branches Then: ``` git fetch origin main git remote set-head origin -a ``` CC: @sgugger
false
1,293,361,308
https://api.github.com/repos/huggingface/datasets/issues/4628
https://github.com/huggingface/datasets/pull/4628
4,628
Fix time type `_arrow_to_datasets_dtype` conversion
closed
1
2022-07-04T16:20:15
2022-07-07T14:08:38
2022-07-07T13:57:12
mariosasko
[]
Fix #4620 The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. Luckily, both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(type))` to convert them both to the `Time64Type(time64[unit])` format. cc @severo
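A small repro sketch of the conversion described above, assuming the PyArrow behavior reported in the issue (on fixed PyArrow versions the first print may already show `Time64Type`):

```python
import datetime
import pyarrow as pa

inferred = pa.array([datetime.time(1, 2, 3)]).type
print(type(inferred).__name__)  # DataType on the affected PyArrow versions

# Round-tripping through the string alias yields the parametrized type,
# which does expose `unit`.
fixed = pa.type_for_alias(str(inferred))
print(type(fixed).__name__, fixed.unit)  # Time64Type us
```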
true
1,293,287,798
https://api.github.com/repos/huggingface/datasets/issues/4627
https://github.com/huggingface/datasets/pull/4627
4,627
fixed duplicate calculation of spearmanr function in metrics wrapper.
closed
3
2022-07-04T15:02:01
2022-07-07T12:41:09
2022-07-07T12:41:09
benlipkin
[]
During _compute, the scipy.stats spearmanr function was called twice, redundantly, once for calculating the score and once for calculating the p-value, under the conditional branch where return_pvalue=True. I adjusted the _compute function to execute the spearmanr function once, store the results tuple in a temporary variable, and then pass the indexed contents to the expected keys of the returned dictionary.
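A simplified sketch of the change (the function name and result keys are illustrative, not the exact metric code): scipy is called once and the resulting `(correlation, pvalue)` tuple is reused for both outputs.

```python
from scipy.stats import spearmanr

def compute_spearman(predictions, references, return_pvalue=False):
    # Call scipy once and index the result tuple instead of calling
    # spearmanr a second time just for the p-value.
    results = spearmanr(references, predictions)
    if return_pvalue:
        return {"spearmanr": results[0], "spearmanr_pvalue": results[1]}
    return {"spearmanr": results[0]}

print(compute_spearman([1, 2, 3, 4], [1, 2, 4, 3], return_pvalue=True))
```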
true
1,293,256,269
https://api.github.com/repos/huggingface/datasets/issues/4626
https://github.com/huggingface/datasets/issues/4626
4,626
Add non-commercial licensing info for datasets for which we removed tags
open
1
2022-07-04T14:32:43
2022-07-08T14:27:29
null
lhoestq
[]
We removed several YAML tags saying that certain datasets can't be used for commercial purposes: https://github.com/huggingface/datasets/pull/4613#discussion_r911919753 Reason for this is that we only allow tags that are part of our [supported list of licenses](https://github.com/huggingface/datasets/blob/84fc3ad73c85de4eda5d152dfede7671491449cb/src/datasets/utils/resources/standard_licenses.tsv) We should update the Licensing Information section of the concerned dataset cards, now that the non-commercial tag doesn't exist anymore for certain datasets
false
1,293,163,744
https://api.github.com/repos/huggingface/datasets/issues/4625
https://github.com/huggingface/datasets/pull/4625
4,625
Unpack `dl_manager.iter_files` to allow parallization
closed
2
2022-07-04T13:16:58
2022-07-05T11:11:54
2022-07-05T11:00:48
mariosasko
[]
Iterate over data files outside `dl_manager.iter_files` to allow parallelization in streaming mode. (The issue reported [here](https://discuss.huggingface.co/t/dataset-only-have-n-shard-1-when-has-multiple-shards-in-repo/19887)) PS: Another option would be to override `FilesIterable.__getitem__` to make it indexable and check for that type in `_shard_kwargs` and `n_shards`, but IMO this solution adds too much unnecessary complexity.
true
1,293,085,058
https://api.github.com/repos/huggingface/datasets/issues/4624
https://github.com/huggingface/datasets/pull/4624
4,624
Remove all paperswithcode_id: null
closed
3
2022-07-04T12:11:32
2023-09-24T10:05:19
2022-07-04T13:10:38
lhoestq
[]
On the Hub there is a validation error on the `paperswithcode_id` tag when the value is `null`: <img width="686" alt="image" src="https://user-images.githubusercontent.com/42851186/177151825-93d341c5-25bd-41ab-96c2-c0b516d51c68.png"> We've been using `null` to specify that we checked on pwc but the dataset doesn't exist there. To have the validation working again we can simply remove all the `paperswithcode_id: null`. cc @julien-c
true
1,293,042,894
https://api.github.com/repos/huggingface/datasets/issues/4623
https://github.com/huggingface/datasets/issues/4623
4,623
Loading MNIST as Pytorch Dataset
open
4
2022-07-04T11:33:10
2022-07-04T14:40:50
null
jameschapman19
[ "bug" ]
## Describe the bug Conversion of MNIST dataset to pytorch fails with bug ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("mnist", split="train") dataset.set_format('torch') dataset[0] print() ``` ## Expected results Expect to see torch tensors image and label ## Actual results Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\pydevd.py", line 1491, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm 2020.3.3\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:/Users/chapm/PycharmProjects/multiviewdata/multiviewdata/huggingface/mnist.py", line 13, in <module> dataset[0] File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2154, in __getitem__ return self._getitem( File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\arrow_dataset.py", line 2139, in _getitem formatted_output = format_table( File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 532, in format_table return formatter(pa_table, query_type=query_type) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\formatting.py", line 281, in __call__ return self.format_row(pa_table) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 58, in format_row return self.recursive_tensorize(row) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 54, in recursive_tensorize return map_nested(self._recursive_tensorize, data_struct, map_list=False) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 356, in map_nested mapped = [ File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 357, in <listcomp> _single_map_nested((function, obj, types, None, True, None)) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in _single_map_nested return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 309, in <dictcomp> return {k: _single_map_nested((function, v, types, None, True, None)) for k, v in pbar} File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\utils\py_utils.py", line 293, in _single_map_nested return function(data_struct) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 51, in _recursive_tensorize return self._tensorize(data_struct) File "C:\Users\chapm\PycharmProjects\multiviewdata\venv\lib\site-packages\datasets\formatting\torch_formatter.py", line 38, in _tensorize if np.issubdtype(value.dtype, np.integer): AttributeError: 'bytes' object has no attribute 'dtype' python-BaseException ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.3.2 - Platform: Windows-10-10.0.22579-SP0 - Python version: 3.9.2 - PyArrow version: 8.0.0 - Pandas version: 1.4.1
false
1,293,031,939
https://api.github.com/repos/huggingface/datasets/issues/4622
https://github.com/huggingface/datasets/pull/4622
4,622
Fix ImageFolder with parameters drop_metadata=True and drop_labels=False (when metadata.jsonl is present)
closed
5
2022-07-04T11:23:20
2022-07-15T14:37:23
2022-07-15T14:24:24
polinaeterna
[]
Will fix #4621

ImageFolder raises `KeyError: 'label'` with params `drop_metadata=True` and `drop_labels=False` (if there is at least one metadata.jsonl file in the data directory). This happens because metadata files are collected inside the `analyze()` function regardless of the `drop_metadata` value, and then the following condition doesn't pass: https://github.com/huggingface/datasets/blob/master/src/datasets/packaged_modules/imagefolder/imagefolder.py#L167

So I suggest double-checking inside `analyze()` so that metadata files are not collected when they are not needed (and labels too, to be consistent).

---

Also, I added a test to check that labels are inferred correctly from directory names in general (because we didn't have one) :)
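A rough sketch of the idea only (the real `analyze()` in `imagefolder.py` has a different signature; the names below are placeholders, not the actual implementation):

```python
import os

def analyze(files, drop_metadata: bool, drop_labels: bool):
    # placeholder sketch: only collect what will actually be used downstream
    metadata_files, labels = [], set()
    for path in files:
        if not drop_metadata and os.path.basename(path) == "metadata.jsonl":
            metadata_files.append(path)
        if not drop_labels:
            labels.add(os.path.basename(os.path.dirname(path)))
    return metadata_files, labels
```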
true
1,293,030,128
https://api.github.com/repos/huggingface/datasets/issues/4621
https://github.com/huggingface/datasets/issues/4621
4,621
ImageFolder raises an error with parameters drop_metadata=True and drop_labels=False when metadata.jsonl is present
closed
0
2022-07-04T11:21:44
2022-07-15T14:24:24
2022-07-15T14:24:24
polinaeterna
[ "bug" ]
## Describe the bug
If you pass `drop_metadata=True` and `drop_labels=False` when a `data_dir` contains at least one `metadata.jsonl` file, you will get a KeyError. This is probably not a very useful case, but we shouldn't get an error anyway. Asking users to move metadata files manually outside `data_dir`, or to pass features manually (when there is a tool that can infer them automatically), doesn't look like a good idea to me either.

## Steps to reproduce the bug
### Clone an example dataset from the Hub
```bash
git clone https://huggingface.co/datasets/nateraw/test-imagefolder-metadata
```
### Try to load it
```python
from datasets import load_dataset

ds = load_dataset("test-imagefolder-metadata", drop_metadata=True, drop_labels=False)
```
or even just
```python
ds = load_dataset("test-imagefolder-metadata", drop_metadata=True)
```
as `drop_labels=False` is the default value.

## Expected results
A DatasetDict object with two features: `"image"` and `"label"`.

## Actual results
```
Traceback (most recent call last):
  File "/home/polina/workspace/datasets/debug.py", line 18, in <module>
    ds = load_dataset(
  File "/home/polina/workspace/datasets/src/datasets/load.py", line 1732, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 704, in download_and_prepare
    self._download_and_prepare(
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1227, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 793, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/polina/workspace/datasets/src/datasets/builder.py", line 1218, in _prepare_split
    example = self.info.features.encode_example(record)
  File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1596, in encode_example
    return encode_nested_example(self, example)
  File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in encode_nested_example
    {
  File "/home/polina/workspace/datasets/src/datasets/features/features.py", line 1165, in <dictcomp>
    {
  File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in zip_dict
    yield key, tuple(d[key] for d in dicts)
  File "/home/polina/workspace/datasets/src/datasets/utils/py_utils.py", line 249, in <genexpr>
    yield key, tuple(d[key] for d in dicts)
KeyError: 'label'
```

## Environment info
`datasets` master branch

- `datasets` version: 2.3.3.dev0
- Platform: Linux-5.14.0-1042-oem-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
- Pandas version: 1.4.1
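The failure can also be reproduced at the features level; a minimal illustration of the mechanism (a simplified stand-in for what the builder does, with made-up label names, not the actual imagefolder code):

```python
from datasets import ClassLabel, Features, Image

features = Features({"image": Image(), "label": ClassLabel(names=["class_a", "class_b"])})
# the generator yields no "label" key because metadata files were still collected
example = {"image": "path/to/img.png"}
features.encode_example(example)  # raises KeyError: 'label'
```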
false