| Column | Type | Range |
| --- | --- | --- |
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
1,090,413,758
https://api.github.com/repos/huggingface/datasets/issues/3501
https://github.com/huggingface/datasets/pull/3501
3,501
Update pib dataset card
closed
0
2021-12-29T10:14:40
2021-12-29T11:13:21
2021-12-29T11:13:21
albertvillanova
[]
Related to #3496
true
1,090,406,133
https://api.github.com/repos/huggingface/datasets/issues/3500
https://github.com/huggingface/datasets/pull/3500
3,500
Docs: Add VCTK dataset description
closed
0
2021-12-29T10:02:05
2022-01-04T10:46:02
2022-01-04T10:25:09
jaketae
[]
This PR is a very minor followup to #1837, with only docs changes (single comment string).
true
1,090,132,618
https://api.github.com/repos/huggingface/datasets/issues/3499
https://github.com/huggingface/datasets/issues/3499
3,499
Adjusting chunk size for streaming datasets
closed
2
2021-12-28T21:17:53
2022-05-06T16:29:05
2022-05-06T16:29:05
JoelNiklaus
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I want to use mc4 which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the frequent decompressing. **Describe the solution you'd like** I would appreciate a parameter in the load_dataset function that allows me to set the chunk size myself (to a value like 100'000 in my case). That way, I hope to improve the processing time.
false
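The following is a minimal sketch related to #3499 above, assuming the English config of mC4; the commented-out `chunk_size` argument is the parameter the issue requests and is not an existing `load_dataset` argument.

```python
from datasets import load_dataset

# Current streaming usage described in the issue: mC4 is streamed and filtered on the fly.
streamed = load_dataset("mc4", "en", split="train", streaming=True)
filtered = streamed.filter(lambda doc: len(doc["text"]) > 1000)

# Requested (hypothetical) API from the issue; `chunk_size` does not exist today.
# streamed = load_dataset("mc4", "en", split="train", streaming=True, chunk_size=100_000)
```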
1,090,096,332
https://api.github.com/repos/huggingface/datasets/issues/3498
https://github.com/huggingface/datasets/pull/3498
3,498
update `pretty_name` for first 200 datasets
closed
0
2021-12-28T19:50:07
2022-07-10T14:36:53
2022-01-05T16:38:21
bhavitvyamalik
[]
I made a script some time back to fetch `pretty_names` from the `papers_with_code` dataset, along with some other rules in case a dataset wasn't available on `papers_with_code`. Updating them in the `README` of `datasets`. I took only the first 200 datasets into consideration and, after some eyeballing, most of them were looking good to me!
true
1,090,050,148
https://api.github.com/repos/huggingface/datasets/issues/3497
https://github.com/huggingface/datasets/issues/3497
3,497
Changing sampling rate in audio dataset and subsequently mapping with `num_proc > 1` leads to weird bug
closed
2
2021-12-28T18:03:49
2022-01-21T13:22:27
2022-01-21T13:22:27
patrickvonplaten
[ "bug" ]
Running: ```python from datasets import load_dataset, DatasetDict import datasets from transformers import AutoFeatureExtractor raw_datasets = DatasetDict() raw_datasets["train"] = load_dataset("common_voice", "ab", split="train") feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base") raw_datasets = raw_datasets.cast_column( "audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate) ) num_workers = 16 def prepare_dataset(batch): sample = batch["audio"] inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"]) batch["input_values"] = inputs.input_values[0] batch["input_length"] = len(batch["input_values"]) return batch raw_datasets.map( prepare_dataset, remove_columns=next(iter(raw_datasets.values())).column_names, num_proc=16, desc="preprocess datasets", ) ``` gives ```bash File "/home/patrick/experiments/run_bug.py", line 25, in <module> raw_datasets.map( File "/home/patrick/python_bin/datasets/dataset_dict.py", line 492, in map { File "/home/patrick/python_bin/datasets/dataset_dict.py", line 493, in <dictcomp> k: dataset.map( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2139, in map shards = [ File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2140, in <listcomp> self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 3164, in shard return self.select( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 485, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/patrick/python_bin/datasets/fingerprint.py", line 411, in wrapper out = func(self, *args, **kwargs) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2756, in select return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint) File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 2667, in _new_dataset_with_indices return Dataset( File "/home/patrick/python_bin/datasets/arrow_dataset.py", line 659, in __init__ raise ValueError( ValueError: External features info don't match the dataset: Got {'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)} with type struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string> but expected something like {'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)} with type struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, 
sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string> ``` Versions: ```python - `datasets` version: 1.16.2.dev0 - Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 6.0.1 ``` and `transformers`: ``` - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.15.8-76051508-generic-x86_64-with-glibc2.33 - Python version: 3.9.7 ```
false
1,089,989,155
https://api.github.com/repos/huggingface/datasets/issues/3496
https://github.com/huggingface/datasets/pull/3496
3,496
Update version of pib dataset and make it streamable
closed
3
2021-12-28T16:01:55
2022-01-03T14:42:28
2021-12-29T08:42:57
albertvillanova
[]
This PR: - Updates version of pib dataset: from 0.0.0 to 1.3.0 - Makes the dataset streamable Fix #3491. CC: @severo
true
1,089,983,632
https://api.github.com/repos/huggingface/datasets/issues/3495
https://github.com/huggingface/datasets/issues/3495
3,495
Add VoxLingua107
open
0
2021-12-28T15:51:43
2021-12-28T15:51:43
null
jaketae
[ "dataset request" ]
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** 107 languages, totaling 6628 hours for the train split. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,089,983,103
https://api.github.com/repos/huggingface/datasets/issues/3494
https://github.com/huggingface/datasets/pull/3494
3,494
Clone full repo to detect new tags when mirroring datasets on the Hub
closed
2
2021-12-28T15:50:47
2021-12-28T16:07:21
2021-12-28T16:07:20
lhoestq
[]
The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags. By cloning the full repository we can properly detect a new release, and tag all the dataset repositories accordingly cc @SBrandeis
true
1,089,967,286
https://api.github.com/repos/huggingface/datasets/issues/3493
https://github.com/huggingface/datasets/pull/3493
3,493
Fix VCTK encoding
closed
0
2021-12-28T15:23:36
2021-12-28T15:48:18
2021-12-28T15:48:17
lhoestq
[]
utf-8 encoding was missing in the VCTK dataset builder added in #3351
true
1,089,952,943
https://api.github.com/repos/huggingface/datasets/issues/3492
https://github.com/huggingface/datasets/pull/3492
3,492
Add `gzip` for `to_json`
closed
0
2021-12-28T15:01:11
2022-07-10T14:36:52
2022-01-05T13:03:36
bhavitvyamalik
[]
(Partially) closes #3480. I have added `gzip` compression for `to_json`. I realised we can run into this compression problem with `to_csv` as well. `IOHandler` can be used for `to_csv` too. Please let me know if any changes are required.
true
1,089,918,018
https://api.github.com/repos/huggingface/datasets/issues/3491
https://github.com/huggingface/datasets/issues/3491
3,491
Update version of pib dataset
closed
0
2021-12-28T14:03:58
2021-12-29T08:42:57
2021-12-29T08:42:57
albertvillanova
[ "dataset request" ]
On the Hub we have v0, while there exists v1.3. Related to bigscience-workshop/data_tooling#130
false
1,089,730,181
https://api.github.com/repos/huggingface/datasets/issues/3490
https://github.com/huggingface/datasets/issues/3490
3,490
Does datasets support load text from HDFS?
open
1
2021-12-28T08:56:02
2022-02-14T14:00:51
null
dancingpipi
[ "enhancement" ]
The raw text data is stored on HDFS because the dataset is too large to store on my development machine, so I wonder: does datasets support reading data from HDFS?
false
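Not an official answer to #3490, just a sketch of one possible workaround, assuming pyarrow's HDFS driver is available to fsspec; the host, port and file path below are placeholders.

```python
import fsspec
from datasets import Dataset

# Read the raw text from HDFS with fsspec, then build an in-memory Dataset from it.
fs = fsspec.filesystem("hdfs", host="namenode", port=8020)  # placeholder connection settings
with fs.open("/data/corpus.txt", "rt") as f:
    lines = [line.rstrip("\n") for line in f]
dataset = Dataset.from_dict({"text": lines})
```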
1,089,401,926
https://api.github.com/repos/huggingface/datasets/issues/3489
https://github.com/huggingface/datasets/pull/3489
3,489
Avoid unnecessary list creations
open
1
2021-12-27T18:20:56
2022-07-06T15:19:49
null
bryant1410
[]
Like in `join([... for s in ...])`. Also changed other things that I saw: * Use a `with` statement for many `open` that missed them, so the files don't remain open. * Remove unused variables. * Many HTTP links converted into HTTPS (verified). * Remove unnecessary "r" mode arg in `open` (double-checked it was actually the default in each case). * Remove Python 2 style of using `super`. * Run `pyupgrade $(find . -name "*.py" -type f) --py36-plus` (which already does some of the previous points). * Run `dos2unix $(find . -name "*.py" -type f)` (CRLF to LF line endings). * Fix typos.
true
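Illustrative before/after examples for the kinds of changes listed in #3489 (sketches, not actual diff hunks from the PR).

```python
lines = ["  alpha  ", "  beta  "]

# Generator expression instead of building an intermediate list inside join():
text = "\n".join(line.strip() for line in lines)  # was: "\n".join([line.strip() for line in lines])

# Context manager so the file handle is always closed, and no redundant "r" mode:
with open("example.txt", "w", encoding="utf-8") as f:
    f.write(text)
with open("example.txt", encoding="utf-8") as f:  # was: f = open("example.txt", "r")
    print(f.read())
```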
1,089,345,653
https://api.github.com/repos/huggingface/datasets/issues/3488
https://github.com/huggingface/datasets/issues/3488
3,488
URL query parameters are set as path in the compression hop for fsspec
open
1
2021-12-27T16:29:00
2022-01-05T15:15:25
null
albertvillanova
[ "bug" ]
## Describe the bug There is an issue with `StreamingDownloadManager._extract`. I don't know how the test `test_streaming_gg_drive_gzipped` passes: For ```python TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz" urlpath = StreamingDownloadManager().download_and_extract(TEST_GG_DRIVE_GZIPPED_URL) ``` gives `urlpath`: ```python 'gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz::https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz' ``` The gzip path makes no sense: `gzip://uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz` ## Steps to reproduce the bug ```python from datasets.utils.streaming_download_manager import StreamingDownloadManager dl_manager = StreamingDownloadManager() urlpath = dl_manager.extract("https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz") print(urlpath) ``` ## Expected results The query parameters should not be set as part of the path.
false
1,089,209,031
https://api.github.com/repos/huggingface/datasets/issues/3487
https://github.com/huggingface/datasets/pull/3487
3,487
Update ADD_NEW_DATASET.md
closed
0
2021-12-27T12:24:51
2021-12-27T15:00:45
2021-12-27T15:00:45
apergo-ai
[]
Fixed the `make style` prompt for Windows Terminal.
true
1,089,171,551
https://api.github.com/repos/huggingface/datasets/issues/3486
https://github.com/huggingface/datasets/pull/3486
3,486
Fix weird spacing in ManualDownloadError message
closed
0
2021-12-27T11:20:36
2021-12-28T09:03:26
2021-12-28T09:00:28
bryant1410
[]
`textwrap.dedent` works based on the spaces at the beginning. Before this change, there wasn't any space.
true
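A small self-contained illustration of the `textwrap.dedent` behavior that #3486 works around: only whitespace common to all lines is removed, so a first line with no leading spaces blocks any dedenting.

```python
import textwrap

not_dedented = textwrap.dedent("first line\n    second line\n")
dedented = textwrap.dedent("    first line\n    second line\n")
print(repr(not_dedented))  # 'first line\n    second line\n' (nothing stripped)
print(repr(dedented))      # 'first line\nsecond line\n' (common indent stripped)
```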
1,089,027,581
https://api.github.com/repos/huggingface/datasets/issues/3485
https://github.com/huggingface/datasets/issues/3485
3,485
skip columns which cannot set to specific format when set_format
closed
2
2021-12-27T07:19:55
2021-12-27T09:07:07
2021-12-27T09:07:07
tshu-w
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns. **Describe the solution you'd like** Skip columns which cannot be set to the specified format when calling set_format, instead of raising an error.
false
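A sketch of the workaround that already exists for the request in #3485 (assumes torch is installed): restrict formatting to the tensor-friendly columns and return the remaining ones as plain Python objects.

```python
from datasets import Dataset

ds = Dataset.from_dict({"input_ids": [[1, 2], [3, 4]], "text": ["a", "b"]})

# Only format the listed columns as torch tensors; keep the rest (e.g. strings) unformatted.
ds.set_format(type="torch", columns=["input_ids"], output_all_columns=True)
print(ds[0])  # {'input_ids': tensor([1, 2]), 'text': 'a'}
```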
1,088,910,402
https://api.github.com/repos/huggingface/datasets/issues/3484
https://github.com/huggingface/datasets/issues/3484
3,484
make shape verification to use ArrayXD instead of nested lists for map
open
1
2021-12-27T02:16:02
2022-01-05T13:54:03
null
tshu-w
[ "enhancement" ]
As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in the [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO making the shape verification use ArrayXD instead of nested lists for map can help users reduce unnecessary casts. I notice datasets has done something special for `input_ids` and `attention_mask`, which would also be unnecessary after this feature is added.
false
1,088,784,157
https://api.github.com/repos/huggingface/datasets/issues/3483
https://github.com/huggingface/datasets/pull/3483
3,483
Remove unused phony rule from Makefile
closed
1
2021-12-26T14:37:13
2022-01-05T19:44:56
2022-01-05T16:34:12
bryant1410
[]
null
true
1,088,317,921
https://api.github.com/repos/huggingface/datasets/issues/3482
https://github.com/huggingface/datasets/pull/3482
3,482
Fix duplicate keys in NewsQA
closed
2
2021-12-24T11:01:59
2022-09-23T12:57:10
2022-09-23T12:57:10
bryant1410
[ "dataset contribution" ]
* Fix duplicate keys in NewsQA when loading from CSV files. * Fix s/narqa/newsqa/ in the manual download error message. * Make the manual download error message show nicely when printed. Otherwise, it is hard to read due to spacing issues. * Fix the format of the license text. * Reformat the code to make it simpler.
true
1,088,308,343
https://api.github.com/repos/huggingface/datasets/issues/3481
https://github.com/huggingface/datasets/pull/3481
3,481
Fix overriding of filesystem info
closed
0
2021-12-24T10:42:31
2021-12-24T11:08:59
2021-12-24T11:08:59
albertvillanova
[]
Previously, `BaseCompressedFileFileSystem.info` was overridden and transformed from function to dict. This generated a bug for filesystem methods that use `self.info()`, like e.g. `fs.isfile()`. This PR: - Adds tests for `fs.isfile` (that use `fs.info`). - Fixes custom `BaseCompressedFileFileSystem.info` by removing its overriding.
true
1,088,267,110
https://api.github.com/repos/huggingface/datasets/issues/3480
https://github.com/huggingface/datasets/issues/3480
3,480
the compression format requested when saving a dataset in json format is not respected
closed
3
2021-12-24T09:23:51
2022-01-05T13:03:35
2022-01-05T13:03:35
SaulLu
[ "bug" ]
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression to be applied? :relaxed: ## Steps to reproduce the bug ```python my_dict = {"a": [1, 2, 3], "b": [1, 2, 3]} ``` ### Result with datasets ```python from datasets import Dataset dataset = Dataset.from_dict(my_dict) dataset.to_json("dic_with_datasets.jsonl.gz", compression="gzip") !cat dic_with_datasets.jsonl.gz ``` output ``` {"a":1,"b":1} {"a":2,"b":2} {"a":3,"b":3} ``` Note: I would expected to see binary data here ### Result with pandas ```python import pandas as pd df = pd.DataFrame(my_dict) df.to_json("dic_with_pandas.jsonl.gz", lines=True, orient="records", compression="gzip") !cat dic_with_pandas.jsonl.gz ``` output ``` 4��a�dic_with_pandas.jsonl��VJT�2�QJ��\� ��g��yƵ���������)��� ``` Note: It looks like binary data ## Expected results I would have expected that the saved result with datasets would also be a binary file ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-4.18.0-193.70.1.el8_2.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.11 - PyArrow version: 5.0.0
false
1,088,232,880
https://api.github.com/repos/huggingface/datasets/issues/3479
https://github.com/huggingface/datasets/issues/3479
3,479
Dataset preview is not available (I think for all Hugging Face datasets)
closed
4
2021-12-24T08:18:48
2021-12-24T14:27:46
2021-12-24T14:27:46
Abirate
[ "bug", "dataset-viewer" ]
## Dataset viewer issue for '*french_book_reviews*' **Link:** https://huggingface.co/datasets/Abirate/french_book_reviews **short description of the issue** For my dataset, the dataset preview is no longer functional (it used to work: The dataset had been added the day before and it was fine...) And, after looking over the datasets, I discovered that this issue affects all Hugging Face datasets (as of yesterday, December 23, 2021, around 10 p.m. (CET)). **Am I the one who added this dataset** : Yes **Note**: here a screenshot showing the issue ![Dataset preview is not available for my dataset](https://user-images.githubusercontent.com/66887439/147333078-60734578-420d-4e91-8691-a90afeaa8948.jpg) **And here for glue dataset :** ![Dataset preview is not available for other Hugging Face datasets(glue)](https://user-images.githubusercontent.com/66887439/147333492-26fa530c-befd-4992-8361-70c51397a25a.jpg)
false
1,087,860,180
https://api.github.com/repos/huggingface/datasets/issues/3478
https://github.com/huggingface/datasets/pull/3478
3,478
Extend support for streaming datasets that use os.walk
closed
1
2021-12-23T16:42:55
2021-12-24T10:50:20
2021-12-24T10:50:19
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `os.walk`, by patching that function. This PR adds support for streaming mode to datasets: 1. autshumato 1. code_x_glue_cd_code_to_text 1. code_x_glue_tc_nl_code_search_adv 1. nchlt CC: @severo
true
1,087,850,253
https://api.github.com/repos/huggingface/datasets/issues/3477
https://github.com/huggingface/datasets/pull/3477
3,477
Use `iter_files` instead of `str(Path(...)` in image dataset
closed
6
2021-12-23T16:26:55
2021-12-28T15:15:02
2021-12-28T15:15:02
mariosasko
[]
Use `iter_files` in the `beans` and the `cats_vs_dogs` dataset scripts as suggested by @albertvillanova. Additional changes: * Fix `iter_files` in `MockDownloadManager` (see this [CI error](https://app.circleci.com/pipelines/github/huggingface/datasets/9247/workflows/2657ff8a-b531-4fd9-a9fc-6541a72e8d83/jobs/57028)) * Add support for `os.path.isdir` and `os.path.isfile` in streaming (`os.path.isfile` is needed in `StreamingDownloadManager`'s `iter_files` to make `cats_vs_dogs` streamable) TODO: - [ ] add tests for `xisdir` and `xisfile`
true
1,087,622,872
https://api.github.com/repos/huggingface/datasets/issues/3476
https://github.com/huggingface/datasets/pull/3476
3,476
Extend support for streaming datasets that use ET.parse
closed
0
2021-12-23T11:18:46
2021-12-23T15:34:30
2021-12-23T15:34:30
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `ET.parse`, by patching the function. This PR adds support for streaming mode to datasets: 1. ami 1. assin 1. assin2 1. counter 1. enriched_web_nlg 1. europarl_bilingual 1. hyperpartisan_news_detection 1. polsum 1. qa4mre 1. quail 1. ted_talks_iwslt 1. udhr 1. web_nlg 1. winograd_wsc CC: @severo
true
1,087,352,041
https://api.github.com/repos/huggingface/datasets/issues/3475
https://github.com/huggingface/datasets/issues/3475
3,475
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
open
2
2021-12-23T03:56:43
2021-12-24T00:23:03
null
puzzler10
[ "bug" ]
## Describe the bug See title. I don't think this is intentional and they probably should be removed. If they stay the dataset description should be at least updated to make it clear to the user. ## Steps to reproduce the bug Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomatoes) for the dataset, set the offset to 4160 for the train dataset, and scroll through the results. I found ones at index 4166 and 4173. There's others too (e.g. index 2888) but those two are easy to find like that. ## Expected results English movie reviews only. ## Actual results Example of a Spanish movie review (4173): > "É uma pena que , mais tarde , o próprio filme abandone o tom de paródia e passe a utilizar os mesmos clichês que havia satirizado "
false
1,086,945,384
https://api.github.com/repos/huggingface/datasets/issues/3474
https://github.com/huggingface/datasets/pull/3474
3,474
Decode images when iterating
closed
0
2021-12-22T15:34:49
2023-09-24T09:54:04
2021-12-28T16:08:10
lhoestq
[]
If I iterate over a vision dataset, the images are not decoded, and the dictionary with the bytes is returned. This PR enables image decoding in `Dataset.__iter__` Close https://github.com/huggingface/datasets/issues/3473
true
1,086,937,610
https://api.github.com/repos/huggingface/datasets/issues/3473
https://github.com/huggingface/datasets/issues/3473
3,473
Iterating over a vision dataset doesn't decode the images
closed
9
2021-12-22T15:26:32
2021-12-27T14:13:21
2021-12-23T15:21:57
lhoestq
[ "bug", "vision" ]
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes first_image = next(iter(mnist))["image"] assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails ``` ## Expected results The image should be decoded, as a PIL Image ## Actual results We get a dictionary ``` {'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None} ``` ## Environment info - `datasets` version: 1.17.1.dev0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.2 - PyArrow version: 6.0.0 The bug also exists in 1.17.0 ## Investigation I think the issue is that decoding is disabled in `__iter__`: https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661 Do you remember why it was disabled in the first place @albertvillanova ? Also cc @mariosasko @NielsRogge
false
1,086,908,508
https://api.github.com/repos/huggingface/datasets/issues/3472
https://github.com/huggingface/datasets/pull/3472
3,472
Fix `str(Path(...))` conversion in streaming on Linux
closed
0
2021-12-22T15:06:03
2021-12-22T16:52:53
2021-12-22T16:52:52
mariosasko
[]
Fix `str(Path(...))` conversion in streaming on Linux. This should fix the streaming of the `beans` and `cats_vs_dogs` datasets.
true
1,086,588,074
https://api.github.com/repos/huggingface/datasets/issues/3471
https://github.com/huggingface/datasets/pull/3471
3,471
Fix Tashkeela dataset to yield stripped text
closed
0
2021-12-22T08:41:30
2021-12-22T10:12:08
2021-12-22T10:12:07
albertvillanova
[]
This PR: - Yields stripped text - Fix path for Windows - Adds license - Adds more info in dataset card Close bigscience-workshop/data_tooling#279
true
1,086,049,888
https://api.github.com/repos/huggingface/datasets/issues/3470
https://github.com/huggingface/datasets/pull/3470
3,470
Fix rendering of docs
closed
0
2021-12-21T17:17:01
2021-12-22T09:23:47
2021-12-22T09:23:47
albertvillanova
[]
Minor fix in docs. Currently, `ClassLabel` docstring rendering is not right.
true
1,085,882,664
https://api.github.com/repos/huggingface/datasets/issues/3469
https://github.com/huggingface/datasets/pull/3469
3,469
Fix METEOR missing NLTK's omw-1.4
closed
1
2021-12-21T14:19:11
2021-12-21T14:52:28
2021-12-21T14:49:28
lhoestq
[]
NLTK 3.6.6 now requires `omw-1.4` to be downloaded for METEOR to work. This should fix the CI on master
true
1,085,871,301
https://api.github.com/repos/huggingface/datasets/issues/3468
https://github.com/huggingface/datasets/pull/3468
3,468
Add COCO dataset
closed
7
2021-12-21T14:07:50
2023-09-24T09:33:31
2022-10-03T09:36:08
mariosasko
[ "dataset contribution" ]
This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection. Some notes: * the data exposed by TFDS is contained in the `2014`, `2015`, `2017` and `2017_panoptic_segmentation` configs here * I've updated `encode_nested_example` for easier handling of missing values (cc @lhoestq @albertvillanova; will add tests if you are OK with the changes in `features.py`) * this implementation should fix https://github.com/huggingface/datasets/pull/3377#issuecomment-985559427 TODOs: - [x] dataset card - [ ] dummy data cc @merveenoyan Closes #2526
true
1,085,870,665
https://api.github.com/repos/huggingface/datasets/issues/3467
https://github.com/huggingface/datasets/pull/3467
3,467
Push dataset infos.json to Hub
closed
1
2021-12-21T14:07:13
2021-12-21T17:00:10
2021-12-21T17:00:09
lhoestq
[]
When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394). This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, that stores the feature types. Other minor changes: - renamed the `___` separator to `--`, since `--` is now disallowed in a name in the back-end. I tested this feature with datasets like conll2003 that has feature types like `ClassLabel` that were previously lost. Close https://github.com/huggingface/datasets/issues/3394 I would like to include this in today's release (though not mandatory), so feel free to comment/suggest changes
true
1,085,722,837
https://api.github.com/repos/huggingface/datasets/issues/3466
https://github.com/huggingface/datasets/pull/3466
3,466
Add CRASS dataset
closed
2
2021-12-21T11:17:22
2022-10-03T09:37:06
2022-10-03T09:37:06
apergo-ai
[ "dataset contribution" ]
Added crass dataset
true
1,085,400,432
https://api.github.com/repos/huggingface/datasets/issues/3465
https://github.com/huggingface/datasets/issues/3465
3,465
Unable to load 'cnn_dailymail' dataset
closed
4
2021-12-21T03:32:21
2024-06-12T14:41:17
2022-02-17T14:13:57
talha1503
[ "bug", "duplicate", "dataset bug" ]
## Describe the bug I wanted to load cnn_dailymail dataset from huggingface datasets on Google Colab, but I am getting an error while loading it. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0', ignore_verifications = True) ``` ## Expected results Expecting to load 'cnn_dailymail' dataset. ## Actual results `NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0
false
1,085,399,097
https://api.github.com/repos/huggingface/datasets/issues/3464
https://github.com/huggingface/datasets/issues/3464
3,464
struct.error: 'i' format requires -2147483648 <= number <= 2147483647
open
2
2021-12-21T03:29:01
2022-11-21T19:55:11
null
koukoulala
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. using latest datasets=datasets-1.16.1-py3-none-any.whl process my own multilingual dataset by following codes, and the number of rows in all dataset is 306000, the max_length of each sentence is 256: ![image](https://user-images.githubusercontent.com/30341159/146865779-3d25d011-1f42-4026-9e1b-76f6e1d172e9.png) then I get this error: ![image](https://user-images.githubusercontent.com/30341159/146865844-e60a404c-5f3a-403c-b2f1-acd943b5cdb8.png) I have seen the issue in #2134 and #2150, so I don't understand why latest repo still can't deal with big dataset. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux docker - Python version: 3.6
false
1,085,078,795
https://api.github.com/repos/huggingface/datasets/issues/3463
https://github.com/huggingface/datasets/pull/3463
3,463
Update swahili_news dataset
closed
0
2021-12-20T18:20:20
2021-12-21T06:24:03
2021-12-21T06:24:02
albertvillanova
[]
Update dataset with latest verion data files. Fix #3462. Close bigscience-workshop/data_tooling#107
true
1,085,049,661
https://api.github.com/repos/huggingface/datasets/issues/3462
https://github.com/huggingface/datasets/issues/3462
3,462
Update swahili_news dataset
closed
0
2021-12-20T17:44:01
2021-12-21T06:24:02
2021-12-21T06:24:01
albertvillanova
[ "dataset request" ]
Please note also: the HuggingFace version at https://huggingface.co/datasets/swahili_news is outdated. An updated version, with deduplicated text and official splits, can be found at https://zenodo.org/record/5514203. ## Adding a Dataset - **Name:** swahili_news Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Related to: - bigscience-workshop/data_tooling#107
false
1,085,007,346
https://api.github.com/repos/huggingface/datasets/issues/3461
https://github.com/huggingface/datasets/pull/3461
3,461
Fix links in metrics description
closed
0
2021-12-20T16:56:19
2021-12-20T17:14:52
2021-12-20T17:14:51
albertvillanova
[]
Remove Markdown syntax for links in metrics description, as it is not properly rendered. Related to #3437.
true
1,085,002,469
https://api.github.com/repos/huggingface/datasets/issues/3460
https://github.com/huggingface/datasets/pull/3460
3,460
Don't encode lists as strings when using `Value("string")`
closed
3
2021-12-20T16:50:49
2023-09-25T10:28:30
2023-09-25T09:20:28
lhoestq
[]
Following https://github.com/huggingface/datasets/pull/3456#event-5792250497 it looks like `datasets` can silently convert lists to strings using `str()`, instead of raising an error. This PR fixes this and should fix the issue with WER showing low values if the input format is not right.
true
1,084,969,672
https://api.github.com/repos/huggingface/datasets/issues/3459
https://github.com/huggingface/datasets/issues/3459
3,459
dataset.filter overwriting previously set dataset._indices values, resulting in the wrong elements being selected.
closed
2
2021-12-20T16:16:49
2021-12-20T16:34:57
2021-12-20T16:34:57
mmajurski
[ "bug" ]
## Describe the bug When using dataset.select to select a subset of a dataset, dataset._indices are set to indicate which elements are now considered in the dataset. The same thing happens when you shuffle the dataset; dataset._indices are set to indicate what the new order of the data is. However, if you then use a dataset.filter, that filter interacts with those dataset._indices values in a non-intuitive manner. https://huggingface.co/docs/datasets/_modules/datasets/arrow_dataset.html#Dataset.filter Effectively, it looks like the original set of _indices were discared and overwritten by the set created during the filter operation. I think this is actually an issue with how the map function handles dataset._indices. Ideally it should use the _indices it gets passed, and then return an updated _indices which reflect the map transformation applied to the starting _indices. ## Steps to reproduce the bug ```python dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print("initial 10 elements") print(dataset['label']) # -> [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) print("filtered 10 elements looking for label 0") print(dataset['label']) # -> [1, 1, 1, 1, 1, 1] ``` ## Actual results ``` $ python indices_bug.py initial 10 elements [1, 1, 0, 1, 0, 0, 0, 1, 0, 0] filtered 10 elements looking for label 0 [1, 1, 1, 1, 1, 1] ``` This code block first shuffles the dataset (to get a mix of label 0 and label 1). Then it selects just the first 10 elements (the number of elements does not matter, 10 is just easy to visualize). The important part is that you select some subset of the dataset. Finally, a filter is applied to pull out just the elements with `label == 0`. The bug is that you cannot combine any dataset operation which sets the dataset._indices with filter. In this case I have 2, shuffle and subset. If you just use a single dataset._indices operation (in this case shuffle) the bug still shows up. The shuffle sets the dataset._indices and then filter uses those indices in the map, then overwrites dataset._indices with the filter results. ```python dataset = load_dataset('imdb', split='train', keep_in_memory=True) dataset = dataset.shuffle(keep_in_memory=True) dataset = dataset.filter(lambda x: x['label'] == 0, keep_in_memory=True) dataset = dataset.select(range(0, 10), keep_in_memory=True) print(dataset['label']) # -> [1, 1, 1, 1, 1, 1, 1, 1, 1, 1] ``` ## Expected results In an ideal world, the dataset filter would respect any dataset._indices values which had previously been set. If you use dataset.filter with the base dataset (where dataset._indices has not been set) then the filter command works as expected. ## Environment info Here are the commands required to rebuild the conda environment from scratch. ``` # create a virtual environment conda create -n dataset_indices python=3.8 -y # activate the virtual environment conda activate dataset_indices # install huggingface datasets conda install datasets ``` <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. 
--> - `datasets` version: 1.12.1 - Platform: Linux-5.11.0-41-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 3.0.0 ### Full Conda Environment ``` $ conda env export name: dasaset_indices channels: - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - abseil-cpp=20210324.2=h2531618_0 - aiohttp=3.8.1=py38h7f8727e_0 - aiosignal=1.2.0=pyhd3eb1b0_0 - arrow-cpp=3.0.0=py38h6b21186_4 - attrs=21.2.0=pyhd3eb1b0_0 - aws-c-common=0.4.57=he6710b0_1 - aws-c-event-stream=0.1.6=h2531618_5 - aws-checksums=0.1.9=he6710b0_0 - aws-sdk-cpp=1.8.185=hce553d0_0 - bcj-cffi=0.5.1=py38h295c915_0 - blas=1.0=mkl - boost-cpp=1.73.0=h27cfd23_11 - bottleneck=1.3.2=py38heb32a55_1 - brotli=1.0.9=he6710b0_2 - brotli-python=1.0.9=py38heb0550a_2 - brotlicffi=1.0.9.2=py38h295c915_0 - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - c-ares=1.17.1=h27cfd23_0 - ca-certificates=2021.10.26=h06a4308_2 - certifi=2021.10.8=py38h06a4308_0 - cffi=1.14.6=py38h400218f_0 - conllu=4.4.1=pyhd3eb1b0_0 - cryptography=36.0.0=py38h9ce1e76_0 - dataclasses=0.8=pyh6d0b6a4_7 - dill=0.3.4=pyhd3eb1b0_0 - double-conversion=3.1.5=he6710b0_1 - et_xmlfile=1.1.0=py38h06a4308_0 - filelock=3.4.0=pyhd3eb1b0_0 - frozenlist=1.2.0=py38h7f8727e_0 - gflags=2.2.2=he6710b0_0 - glog=0.5.0=h2531618_0 - gmp=6.2.1=h2531618_2 - grpc-cpp=1.39.0=hae934f6_5 - huggingface_hub=0.0.17=pyhd3eb1b0_0 - icu=58.2=he6710b0_3 - idna=3.3=pyhd3eb1b0_0 - importlib-metadata=4.8.2=py38h06a4308_0 - importlib_metadata=4.8.2=hd3eb1b0_0 - intel-openmp=2021.4.0=h06a4308_3561 - krb5=1.19.2=hac12032_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libboost=1.73.0=h3ff78a5_11 - libcurl=7.80.0=h0b77cf5_0 - libedit=3.1.20210910=h7f8727e_0 - libev=4.33=h7f8727e_1 - libevent=2.1.8=h1ba5d50_1 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libnghttp2=1.46.0=hce63b2e_0 - libprotobuf=3.17.2=h4ff587b_1 - libssh2=1.9.0=h1ba5d50_1 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libthrift=0.14.2=hcc01f38_0 - libxml2=2.9.12=h03d6c58_0 - libxslt=1.1.34=hc22bd24_0 - lxml=4.6.3=py38h9120a33_0 - lz4-c=1.9.3=h295c915_1 - mkl=2021.4.0=h06a4308_640 - mkl-service=2.4.0=py38h7f8727e_0 - mkl_fft=1.3.1=py38hd3c417c_0 - mkl_random=1.2.2=py38h51133e4_0 - multiprocess=0.70.12.2=py38h7f8727e_0 - multivolumefile=0.2.3=pyhd3eb1b0_0 - ncurses=6.3=h7f8727e_2 - numexpr=2.7.3=py38h22e1b3c_1 - numpy=1.21.2=py38h20f2e39_0 - numpy-base=1.21.2=py38h79a1101_0 - openpyxl=3.0.9=pyhd3eb1b0_0 - openssl=1.1.1l=h7f8727e_0 - orc=1.6.9=ha97a36c_3 - packaging=21.3=pyhd3eb1b0_0 - pip=21.2.4=py38h06a4308_0 - py7zr=0.16.1=pyhd3eb1b0_1 - pycparser=2.21=pyhd3eb1b0_0 - pycryptodomex=3.10.1=py38h27cfd23_1 - pyopenssl=21.0.0=pyhd3eb1b0_1 - pyparsing=3.0.4=pyhd3eb1b0_0 - pyppmd=0.16.1=py38h295c915_0 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.12=h12debd9_0 - python-dateutil=2.8.2=pyhd3eb1b0_0 - python-xxhash=2.0.2=py38h7f8727e_0 - pyzstd=0.14.4=py38h7f8727e_3 - re2=2020.11.01=h2531618_1 - readline=8.1=h27cfd23_0 - requests=2.26.0=pyhd3eb1b0_0 - setuptools=58.0.4=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - snappy=1.1.8=he6710b0_0 - sqlite=3.36.0=hc218d9a_0 - texttable=1.6.4=pyhd3eb1b0_0 - tk=8.6.11=h1ccaba5_0 - typing_extensions=3.10.0.2=pyh06a4308_0 - uriparser=0.9.3=he6710b0_1 - utf8proc=2.6.1=h27cfd23_0 - wheel=0.37.0=pyhd3eb1b0_1 - xxhash=0.8.0=h7f8727e_3 - xz=5.2.5=h7b6447c_0 - zipp=3.6.0=pyhd3eb1b0_0 - zlib=1.2.11=h7f8727e_4 - zstd=1.4.9=haebb681_0 - pip: - async-timeout==4.0.2 - charset-normalizer==2.0.9 - datasets==1.16.1 - fsspec==2021.11.1 - huggingface-hub==0.2.1 - 
multidict==5.2.0 - pandas==1.3.5 - pyarrow==6.0.1 - pytz==2021.3 - pyyaml==6.0 - tqdm==4.62.3 - typing-extensions==4.0.1 - urllib3==1.26.7 - yarl==1.7.2 ```
false
1,084,926,025
https://api.github.com/repos/huggingface/datasets/issues/3458
https://github.com/huggingface/datasets/pull/3458
3,458
Fix duplicated tag in wikicorpus dataset card
closed
1
2021-12-20T15:34:16
2021-12-20T16:03:25
2021-12-20T16:03:24
lhoestq
[]
null
true
1,084,862,121
https://api.github.com/repos/huggingface/datasets/issues/3457
https://github.com/huggingface/datasets/issues/3457
3,457
Add CMU Graphics Lab Motion Capture dataset
open
3
2021-12-20T14:34:39
2022-03-16T16:53:09
null
osanseviero
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** CMU Graphics Lab Motion Capture database - **Description:** The database contains free motions which you can download and use. - **Data:** http://mocap.cs.cmu.edu/ - **Motivation:** Nice motion capture dataset Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,084,687,973
https://api.github.com/repos/huggingface/datasets/issues/3456
https://github.com/huggingface/datasets/pull/3456
3,456
[WER] Better error message for wer
closed
4
2021-12-20T11:38:40
2021-12-20T16:53:37
2021-12-20T16:53:36
patrickvonplaten
[]
Currently we have the following problem when using the WER. When the input format to the WER metric is wrong, instead of throwing an error, an incorrect word error rate is computed. E.g. when doing the following: ```python from datasets import load_metric wer = load_metric("wer") target_str = ["hello this is nice", "hello the weather is bloomy"] pred_str = [["hello it's nice"], ["hello it's the weather"]] print("Wrong:", wer.compute(predictions=pred_str, references=target_str)) print("Correct", wer.compute(predictions=[x[0] for x in pred_str], references=target_str)) ``` We get: ``` Wrong: 1.0 Correct 0.5555555555555556 ``` meaning that we get a word error rate for incorrectly passed input formats. We should raise an error here instead so that people don't spend hours fixing a model when it is actually their incorrect evaluation input format that is the problem for a low WER.
true
1,084,599,650
https://api.github.com/repos/huggingface/datasets/issues/3455
https://github.com/huggingface/datasets/issues/3455
3,455
Easier information editing
closed
2
2021-12-20T10:10:43
2023-07-25T15:36:14
2023-07-25T15:36:14
borgr
[ "enhancement", "generic discussion" ]
**Is your feature request related to a problem? Please describe.** It requires a lot of effort to improve a datasheet. **Describe the solution you'd like** A UI, or at least a link to the place where the code that needs to be edited lives (and an easy way to edit this code directly from the site, without cloning, branching, makefile, etc.). **Describe alternatives you've considered** The current UX is to go through the 8 contribution steps while one just wishes to change a line, a typo, etc.
false
1,084,519,107
https://api.github.com/repos/huggingface/datasets/issues/3454
https://github.com/huggingface/datasets/pull/3454
3,454
Fix iter_archive generator
closed
0
2021-12-20T08:50:15
2021-12-20T10:05:00
2021-12-20T10:04:59
albertvillanova
[]
This PR: - Adds tests to DownloadManager and StreamingDownloadManager `iter_archive` for both path and file inputs - Fixes bugs in `iter_archive` introduced in: - #3443 Fix #3453.
true
1,084,515,911
https://api.github.com/repos/huggingface/datasets/issues/3453
https://github.com/huggingface/datasets/issues/3453
3,453
ValueError while iter_archive
closed
0
2021-12-20T08:46:18
2021-12-20T10:04:59
2021-12-20T10:04:59
albertvillanova
[ "bug" ]
## Describe the bug After the merge of: - #3443 the method `iter_archive` throws a ValueError: ``` ValueError: read of closed file ``` ## Steps to reproduce the bug ```python for path, file in dl_manager.iter_archive(archive_path): pass ```
false
1,083,803,178
https://api.github.com/repos/huggingface/datasets/issues/3452
https://github.com/huggingface/datasets/issues/3452
3,452
why the stratify option is omitted from test_train_split function?
closed
4
2021-12-18T10:37:47
2022-05-25T20:43:51
2022-05-25T20:43:51
j-sieger
[ "enhancement", "good second issue" ]
Why is the stratify option omitted from the train_test_split function? Is there any other way to implement the stratify option while splitting the dataset? It is an important point to consider when splitting the dataset.
false
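A sketch of a workaround for #3452, assuming the column to stratify on is `label`: split the indices with scikit-learn's stratified `train_test_split`, then select them from the dataset.

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

dataset = load_dataset("imdb", split="train")

# Stratify on the label column by splitting indices with scikit-learn.
train_idx, test_idx = train_test_split(
    list(range(len(dataset))),
    test_size=0.2,
    stratify=dataset["label"],
    random_state=42,
)
train_ds = dataset.select(train_idx)
test_ds = dataset.select(test_idx)
```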
1,083,459,137
https://api.github.com/repos/huggingface/datasets/issues/3451
https://github.com/huggingface/datasets/pull/3451
3,451
[Staging] Update dataset repos automatically on the Hub
closed
2
2021-12-17T17:12:11
2021-12-21T10:25:46
2021-12-20T14:09:51
lhoestq
[]
Let's have a script that updates the dataset repositories on staging for now. This way we can make sure it works fine before going in prod. Related to https://github.com/huggingface/datasets/issues/3341 The script runs on each commit on `master`. It checks the datasets that were changed, and it pushes the changes to the corresponding repositories on the Hub. If there's a new dataset, then a new repository is created. If the commit is a new release of `datasets`, it also pushes the tag to all the repositories.
true
1,083,450,158
https://api.github.com/repos/huggingface/datasets/issues/3450
https://github.com/huggingface/datasets/issues/3450
3,450
Unexpected behavior doing Split + Filter
closed
1
2021-12-17T17:00:39
2023-07-25T15:38:47
2023-07-25T15:38:47
jbrachat
[ "bug" ]
## Describe the bug I observed unexpected behavior when applying 'train_test_split' followed by 'filter' on dataset. Elements of the training dataset eventually end up in the test dataset (after applying the 'filter') ## Steps to reproduce the bug ``` from datasets import Dataset import pandas as pd dic = {'x': [1,2,3,4,5,6,7,8,9], 'y':['q','w','e','r','t','y','u','i','o']} df = pd.DataFrame.from_dict(dic) dataset = Dataset.from_pandas(df) split_dataset = dataset.train_test_split(test_size=0.5, shuffle=False, seed=42) train_dataset = split_dataset["train"] eval_dataset = split_dataset["test"] eval_dataset_2 = eval_dataset.filter(lambda example: example['x'] % 2 == 0) print( eval_dataset['x']) print(eval_dataset_2['x']) ``` One observes that elements in eval_dataset2 are actually coming from the training dataset... ## Expected results The expected results would be that the filtered eval dataset would only contain elements from the original eval dataset. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.12.1 - Platform: Windows 10 - Python version: 3.7 - PyArrow version: 5.0.0
false
1,083,373,018
https://api.github.com/repos/huggingface/datasets/issues/3449
https://github.com/huggingface/datasets/issues/3449
3,449
Add `__add__()`, `__iadd__()` and similar to `Dataset` class
closed
2
2021-12-17T15:29:11
2024-02-29T16:47:56
2023-07-25T15:33:56
sgraaf
[ "enhancement", "generic discussion" ]
**Is your feature request related to a problem? Please describe.** No. **Describe the solution you'd like** I would like to be able to concatenate datasets as follows: ```python >>> dataset["train"] += dataset["validation"] ``` ... instead of using `concatenate_datasets()`: ```python >>> raw_datasets["train"] = concatenate_datasets([raw_datasets["train"], raw_datasets["validation"]]) >>> del raw_datasets["validation"] ``` **Describe alternatives you've considered** Well, I have considered `concatenate_datasets()` 😀 **Additional context** N.a.
false
1,083,231,080
https://api.github.com/repos/huggingface/datasets/issues/3448
https://github.com/huggingface/datasets/issues/3448
3,448
JSONDecodeError with HuggingFace dataset viewer
closed
3
2021-12-17T12:52:41
2022-02-24T09:10:26
2022-02-24T09:10:26
kathrynchapman
[ "dataset-viewer" ]
## Dataset viewer issue for 'pubmed_neg' **Link:** https://huggingface.co/datasets/IGESML/pubmed_neg I am getting the error: Status code: 400 Exception: JSONDecodeError Message: Expecting property name enclosed in double quotes: line 61 column 2 (char 1202) I have checked all files - I am not using single quotes anywhere. Not sure what is causing this issue. Am I the one who added this dataset ? Yes
false
1,082,539,790
https://api.github.com/repos/huggingface/datasets/issues/3447
https://github.com/huggingface/datasets/issues/3447
3,447
HF_DATASETS_OFFLINE=1 didn't stop datasets.builder from downloading
closed
3
2021-12-16T18:51:13
2022-02-17T14:16:27
2022-02-17T14:16:27
dunalduck0
[ "bug" ]
## Describe the bug According to https://huggingface.co/docs/datasets/loading_datasets.html#loading-a-dataset-builder, setting HF_DATASETS_OFFLINE to 1 should make datasets to "run in full offline mode". It didn't work for me. At the very beginning, datasets still tried to download "custom data configuration" for JSON, despite I have run the program once and cached all data into the same --cache_dir. "Downloading" is not an issue when running with local disk, but crashes often with cloud storage because (1) multiply GPU processes try to access the same file, AND (2) FileLocker fails to synchronize all processes, due to storage throttling. 99% of times, when the main process releases FileLocker, the file is not actually ready for access in cloud storage and thus triggers "FileNotFound" errors for all other processes. Well, another way to resolve the problem is to investigate super reliable cloud storage, but that's out of scope here. ## Steps to reproduce the bug ``` export HF_DATASETS_OFFLINE=1 python run_clm.py --model_name_or_path=models/gpt-j-6B --train_file=trainpy.v2.train.json --validation_file=trainpy.v2.eval.json --cache_dir=datacache/trainpy.v2 ``` ## Expected results datasets should stop all "downloading" behavior but reuse the cached JSON configuration. I think the problem here is part of the cache directory path, "default-471372bed4b51b53", is randomly generated, and it could change if some parameters changed. And I didn't find a way to use a fixed path to ensure datasets to reuse cached data every time. ## Actual results The logging shows datasets are still downloading into "datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426". ``` 12/16/2021 10:25:59 - WARNING - datasets.builder - Using custom data configuration default-471372bed4b51b53 12/16/2021 10:25:59 - INFO - datasets.builder - Generating dataset json (datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426) Downloading and preparing dataset json/default to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426... 100%|██████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 17623.13it/s] 12/16/2021 10:25:59 - INFO - datasets.utils.download_manager - Downloading took 0.0 min 12/16/2021 10:26:00 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min 100%|███████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1206.99it/s] 12/16/2021 10:26:00 - INFO - datasets.utils.info_utils - Unable to verify checksums. 12/16/2021 10:26:00 - INFO - datasets.builder - Generating split train 12/16/2021 10:26:01 - INFO - datasets.builder - Generating split validation 12/16/2021 10:26:02 - INFO - datasets.utils.info_utils - Unable to verify splits sizes. Dataset json downloaded and prepared to datacache/trainpy.v2/json/default-471372bed4b51b53/0.0.0/c2d554c3377ea79c7664b93dc65d0803b45e3279000f993c7bfd18937fd7f426. Subsequent calls will reuse this data. 100%|█████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 53.54it/s] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux - Python version: 3.8.10 - PyArrow version: 6.0.1
false
1,082,414,229
https://api.github.com/repos/huggingface/datasets/issues/3446
https://github.com/huggingface/datasets/pull/3446
3,446
Remove redundant local path information in audio/image datasets
closed
3
2021-12-16T16:35:15
2023-09-24T10:09:30
2023-09-24T10:09:27
mariosasko
[ "dataset contribution" ]
Remove the redundant path information in the audio/image dataset as discussed in https://github.com/huggingface/datasets/pull/3430#issuecomment-994734828 TODOs: * [ ] merge https://github.com/huggingface/datasets/pull/3430 * [ ] merge https://github.com/huggingface/datasets/pull/3364 * [ ] re-generate the info files of the updated audio datasets cc: @patrickvonplaten @anton-l @nateraw (I expect this to break the audio/vision examples in Transformers; after this change you'll be able to access underlying paths as follows `dset = dset.cast_column("audio", Audio(..., decode=False)); path = dset[0]["audio"]`)
true
1,082,370,968
https://api.github.com/repos/huggingface/datasets/issues/3445
https://github.com/huggingface/datasets/issues/3445
3,445
question
closed
1
2021-12-16T15:57:00
2022-01-03T10:09:00
2022-01-03T10:09:00
BAKAYOKO0232
[ "dataset-viewer" ]
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
false
1,082,078,961
https://api.github.com/repos/huggingface/datasets/issues/3444
https://github.com/huggingface/datasets/issues/3444
3,444
Align the Dataset and IterableDataset processing API
open
11
2021-12-16T11:26:11
2025-01-31T11:07:07
null
lhoestq
[ "enhancement", "generic discussion" ]
## Intro items marked like <s>this</s> are done already :) Currently the two classes have two distinct API for processing: ### The `.map()` method Both have those parameters in common: function, batched, batch_size - IterableDataset is missing those parameters: <s>with_indices</s>, with_rank, <s>input_columns</s>, <s>drop_last_batch</s>, <s>remove_columns</s>, features, disable_nullable, fn_kwargs, num_proc - Dataset also has additional parameters that are exclusive, due to caching: keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, suffix_template, new_fingerprint - <s>There is also an important difference in terms of behavior: **Dataset.map adds new columns** (with dict.update) BUT **IterableDataset discards previous columns** (it overwrites the dict) IMO the two methods should have the same behavior. This would be an important breaking change though.</s> - Dataset.map is eager while IterableDataset.map is lazy ### The `.shuffle()` method - <s>Both have an optional seed parameter, but IterableDataset requires a mandatory parameter buffer_size to control the size of the local buffer used for approximate shuffling.</s> - <s>IterableDataset is missing the parameter generator</s> - Also Dataset has exclusive parameters due to caching: keep_in_memory, load_from_cache_file, indices_cache_file_name, writer_batch_size, new_fingerprint ### The `.with_format()` method - <s>IterableDataset only supports "torch" (it misses tf, jax, pandas, arrow)</s> and is missing the parameters: columns, output_all_columns and format_kwargs - other methods like `set_format`, `reset_format` or `formatted_as` are also missing ### Other methods - Both have the same `remove_columns` method - IterableDataset is missing: <s>cast</s>, <s>cast_column</s>, <s>filter</s>, <s>rename_column</s>, <s>rename_columns</s>, class_encode_column, flatten, train_test_split, <s>shard</s> - Some other methods are missing but we can discuss them: set_transform, formatted_as, with_transform - And others don't really make sense for an iterable dataset: select, sort, <s>add_column</s>, add_item - Dataset is missing skip and take, that IterableDataset implements. ## Questions I think it would be nice to be able to switch between streaming and regular dataset easily, without changing the processing code significantly. 1. What should be aligned and what shouldn't between those two APIs ? IMO the minimum is to align the main processing methods. It would mean aligning breaking the current `Iterable.map` to have the same behavior as `Dataset.map` (add columns with dict.update), and add multiprocessing as well as the missing parameters. DONE ✅ It would also mean implementing the missing methods: cast, cast_column, filter, rename_column, rename_columns, class_encode_column, flatten, prepare_for_task, train_test_split, shard. WIP 🟠 2. What are the breaking changes for IterableDataset ? The main breaking change would be the change of behavior of `IterableDataset.map`, because currently it discards all the previous columns instead of keeping them. DONE ✅ 3. Shall we also do some changes for regular datasets ? I agree the simplest would be to have the exact same methods for both Dataset and IterableDataset. However this is probably not a good idea because it would prevent users from using the best benefits of them. 
That's why we can keep some aspects of regular datasets as they are: - keep the eager Dataset.map with caching - keep the with_transform method for lazy processing - keep Dataset.select (it could also be added to IterableDataset even though it's not recommended) We could have a completely aligned `map` method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that. For information, TFDS does lazy map by default, and has an additional `.cache()` method. ## Opinions ? I'd love to gather some opinions about this here. If the two APIs are more aligned it would be awesome for the examples in `transformers`, and it would create a satisfactory experience for users that want to switch from one mode to the other. cc @mariosasko @albertvillanova @thomwolf @patrickvonplaten @sgugger
false
1,082,052,833
https://api.github.com/repos/huggingface/datasets/issues/3443
https://github.com/huggingface/datasets/pull/3443
3,443
Extend iter_archive to support file object input
closed
0
2021-12-16T10:59:14
2021-12-17T17:53:03
2021-12-17T17:53:02
albertvillanova
[]
This PR adds support for passing a file object to `[Streaming]DownloadManager.iter_archive`. With this feature, we can iterate over a tar file nested inside another tar file.
true
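A rough sketch of what the extension described in the PR above enables inside a loading script. The function name, archive layout and yielded values are illustrative assumptions, not the actual implementation.

```python
def iter_nested_tars(dl_manager, outer_archive_path):
    # Iterate over the outer tar; members that are themselves tar files can now be
    # passed to iter_archive directly as file objects (the new capability).
    for outer_name, outer_file in dl_manager.iter_archive(outer_archive_path):
        if outer_name.endswith(".tar"):
            for inner_name, inner_file in dl_manager.iter_archive(outer_file):
                yield f"{outer_name}/{inner_name}", inner_file.read()
```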
1,081,862,747
https://api.github.com/repos/huggingface/datasets/issues/3442
https://github.com/huggingface/datasets/pull/3442
3,442
Extend text to support yielding lines, paragraphs or documents
closed
5
2021-12-16T07:33:17
2021-12-20T16:59:10
2021-12-20T16:39:18
albertvillanova
[]
Add `config.row` option to `text` module to allow yielding lines (default, current case), paragraphs or documents. Feel free to comment on the name of the config parameter `row`: - Currently, the docs state datasets are made of rows and columns - Other names I considered: `example`, `item`
true
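If the parameter keeps the proposed name `row`, usage could look like the hedged sketch below. `my_corpus.txt` is a placeholder file, and the `"paragraph"`/`"document"` values are assumptions based on the PR description, not a confirmed API.

```python
from datasets import load_dataset

# default: one example per line (current behavior)
lines = load_dataset("text", data_files="my_corpus.txt", split="train")

# hypothetical new options, assuming the config parameter ends up being called `row`
paragraphs = load_dataset("text", data_files="my_corpus.txt", split="train", row="paragraph")
documents = load_dataset("text", data_files="my_corpus.txt", split="train", row="document")
```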
1,081,571,784
https://api.github.com/repos/huggingface/datasets/issues/3441
https://github.com/huggingface/datasets/issues/3441
3,441
Add QuALITY dataset
open
1
2021-12-15T22:26:19
2021-12-28T15:17:05
null
lewtun
[ "dataset request" ]
## Adding a Dataset - **Name:** QuALITY - **Description:** A challenging question answering dataset with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20)) - **Paper:** No arXiv link yet, but the draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.pdf) - **Data:** GitHub repo [here](https://github.com/nyu-mll/quality) - **Motivation:** This dataset would serve as a nice way to benchmark long-range Transformer models like BigBird, Longformer and their descendants. In particular, it would be very interesting to see how the S4 model fares on this, given its impressive performance on the Long Range Arena. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,081,528,426
https://api.github.com/repos/huggingface/datasets/issues/3440
https://github.com/huggingface/datasets/issues/3440
3,440
datasets keeps reading from cached files, although I disabled it
closed
1
2021-12-15T21:26:22
2022-02-24T09:12:22
2022-02-24T09:12:22
dorost1234
[ "bug" ]
## Describe the bug Hi, I am trying to avoid dataset library using cached files, I get the following bug when this tried to read the cached files. I tried to do the followings: ``` from datasets import set_caching_enabled set_caching_enabled(False) ``` also force redownlaod: ``` download_mode='force_redownload' ``` but none worked so far, this is on a cluster and on some of the machines this reads from the cached files, I really appreciate any idea on how to fully remove caching @lhoestq many thanks ``` File "run_clm.py", line 496, in <module> main() File "run_clm.py", line 419, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 943, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/transformers/trainer.py", line 1445, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 172, in evaluate output = self.eval_loop( File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 241, in eval_loop metrics = self.compute_pet_metrics(eval_datasets, model, self.extra_info[metric_key_prefix], task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 268, in compute_pet_metrics centroids = self._compute_per_token_train_centroids(model, task=task) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 353, in _compute_per_token_train_centroids data = get_label_samples(self.get_per_task_train_dataset(task), label) File "/users/dara/codes/fewshot/debug/fewshot/third_party/trainers/trainer.py", line 350, in get_label_samples return dataset.filter(lambda example: int(example['labels']) == label) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2519, in filter indices = self.map( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2036, in map return self._map_single( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 503, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 470, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/fingerprint.py", line 406, in wrapper out = func(self, *args, **kwargs) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2248, in _map_single return Dataset.from_file(cache_file_name, info=info, split=self.split) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 654, in from_file return cls( File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 593, in __init__ self.info.features = 
self.info.features.reorder_fields_as(inferred_features) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1092, in reorder_fields_as return Features(recursive_reorder(self, other)) File "/users/dara/conda/envs/multisuccess/lib/python3.8/site-packages/datasets/features/features.py", line 1081, in recursive_reorder raise ValueError(f"Keys mismatch: between {source} and {target}" + stack_position) ValueError: Keys mismatch: between {'indices': Value(dtype='uint64', id=None)} and {'candidates_ids': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None), 'labels': Value(dtype='int64', id=None), 'attention_mask': Sequence(feature=Value(dtype='int8', id=None), length=-1, id=None), 'input_ids': Sequence(feature=Value(dtype='int32', id=None), length=-1, id=None), 'extra_fields': {}, 'task': Value(dtype='string', id=None)} ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: linux - Python version: 3.8.12 - PyArrow version: 6.0.1
false
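In addition to the global switch tried in the issue above, a per-call cache bypass exists. The sketch below is only an illustration of that lever (the dataset and label are placeholders), not a confirmed fix for the reported key mismatch.

```python
from datasets import load_dataset

dataset = load_dataset("glue", "sst2", split="train")  # placeholder dataset
label = 1                                              # placeholder label

# `load_from_cache_file=False` makes Dataset.filter / Dataset.map recompute the result
# instead of reading a previously written cache file.
filtered = dataset.filter(lambda example: int(example["label"]) == label, load_from_cache_file=False)
```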
1,081,389,723
https://api.github.com/repos/huggingface/datasets/issues/3439
https://github.com/huggingface/datasets/pull/3439
3,439
Add `cast_column` to `IterableDataset`
closed
1
2021-12-15T19:00:45
2021-12-16T15:55:20
2021-12-16T15:55:19
mariosasko
[]
Closes #3369. cc: @patrickvonplaten
true
1,081,302,203
https://api.github.com/repos/huggingface/datasets/issues/3438
https://github.com/huggingface/datasets/pull/3438
3,438
Update supported versions of Python in setup.py
closed
0
2021-12-15T17:30:12
2021-12-20T14:22:13
2021-12-20T14:22:12
mariosasko
[]
Update the list of supported versions of Python in `setup.py` to keep the PyPI project description updated.
true
1,081,247,889
https://api.github.com/repos/huggingface/datasets/issues/3437
https://github.com/huggingface/datasets/pull/3437
3,437
Update BLEURT hyperlink
closed
2
2021-12-15T16:34:47
2021-12-17T13:28:26
2021-12-17T13:28:25
lewtun
[]
The description of BLEURT on the hf.co website uses URL hyperlinking in a strange way. This PR attempts to fix this, although I am not 100% sure whether Markdown syntax is allowed on the frontend. ![Screen Shot 2021-12-15 at 17 31 27](https://user-images.githubusercontent.com/26859204/146226432-c83cbdaf-f57d-4999-b53c-85da718ff7fb.png)
true
1,081,068,139
https://api.github.com/repos/huggingface/datasets/issues/3436
https://github.com/huggingface/datasets/pull/3436
3,436
Add the OneStopQa dataset
closed
0
2021-12-15T13:53:31
2021-12-17T14:32:00
2021-12-17T13:25:29
OmerShubi
[]
Adding OneStopQA, a multiple choice reading comprehension dataset annotated according to the STARC (Structured Annotations for Reading Comprehension) scheme.
true
1,081,043,756
https://api.github.com/repos/huggingface/datasets/issues/3435
https://github.com/huggingface/datasets/pull/3435
3,435
Improve Wikipedia Loading Script
closed
9
2021-12-15T13:30:06
2022-03-04T08:16:00
2022-03-04T08:16:00
geohci
[]
* More structured approach to detecting redirects * Remove redundant template filter code (covered by strip_code) * Add language-specific lists of additional media namespace aliases for filtering * Add language-specific lists of category namespace aliases for the new link-text cleaning step * Remove magic words (parser directives like __TOC__ that occasionally occur in text) Fix #3400 With support from @albertvillanova CC @yjernite
true
1,080,917,446
https://api.github.com/repos/huggingface/datasets/issues/3434
https://github.com/huggingface/datasets/issues/3434
3,434
Add The People's Speech
closed
1
2021-12-15T11:21:21
2023-02-28T16:22:29
2023-02-28T16:22:28
mariosasko
[ "dataset request", "speech" ]
## Adding a Dataset - **Name:** The People's Speech - **Description:** a massive English-language dataset of audio transcriptions of full sentences. - **Paper:** https://openreview.net/pdf?id=R8CwidgJ0yT - **Data:** https://mlcommons.org/en/peoples-speech/ - **Motivation:** With over 30,000 hours of speech, this dataset is the largest and most diverse freely available English speech recognition corpus today. [This article](https://thegradient.pub/new-datasets-to-democratize-speech-recognition-technology-2/) may be useful when working on the dataset. cc: @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,080,910,724
https://api.github.com/repos/huggingface/datasets/issues/3433
https://github.com/huggingface/datasets/issues/3433
3,433
Add Multilingual Spoken Words dataset
closed
0
2021-12-15T11:14:44
2022-02-22T10:03:53
2022-02-22T10:03:53
albertvillanova
[ "dataset request", "speech" ]
## Adding a Dataset - **Name:** Multilingual Spoken Words - **Description:** Multilingual Spoken Words Corpus is a large and growing audio dataset of spoken words in 50 languages for academic research and commercial applications in keyword spotting and spoken term search, licensed under CC-BY 4.0. The dataset contains more than 340,000 keywords, totaling 23.4 million 1-second spoken examples (over 6,000 hours). Read more: https://mlcommons.org/en/news/spoken-words-blog/ - **Paper:** https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/file/fe131d7f5a6b38b23cc967316c13dae2-Paper-round2.pdf - **Data:** https://mlcommons.org/en/multilingual-spoken-words/ - **Motivation:** Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,079,910,769
https://api.github.com/repos/huggingface/datasets/issues/3432
https://github.com/huggingface/datasets/pull/3432
3,432
Correctly indent builder config in dataset script docs
closed
0
2021-12-14T15:39:47
2021-12-14T17:35:17
2021-12-14T17:35:17
mariosasko
[]
null
true
1,079,866,083
https://api.github.com/repos/huggingface/datasets/issues/3431
https://github.com/huggingface/datasets/issues/3431
3,431
Unable to resolve any data file after loading once
closed
2
2021-12-14T15:02:15
2022-12-11T10:53:04
2022-02-24T09:13:52
LzyFischer
[]
When I rerun my program, it raises this error: "Unable to resolve any data file that matches '['**train*']' at /data2/whr/lzy/open_domain_data/retrieval/wiki_dpr with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'zip']". How can I deal with this problem? Thanks. My code is below. ![image](https://user-images.githubusercontent.com/84694183/146023446-d75fdec8-65c1-484f-80d8-6c20ff5e994b.png)
false
1,079,811,124
https://api.github.com/repos/huggingface/datasets/issues/3430
https://github.com/huggingface/datasets/pull/3430
3,430
Make decoding of Audio and Image feature optional
closed
7
2021-12-14T14:15:08
2022-01-25T18:57:52
2022-01-25T18:57:52
mariosasko
[]
Add the `decode` argument (`True` by default) to the `Audio` and `Image` features to make it possible to toggle decoding of these features on/off. Even though we've discussed it on Slack, I'm not removing the `_storage_dtype` argument of the Audio feature in this PR, to avoid breaking the Audio feature tests.
true
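Assuming the argument lands as described in the PR above, usage might look like this sketch; the dataset is just an example.

```python
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "ab", split="train")  # example audio dataset

# decoding on (default): accessing ds[0]["audio"] returns the decoded array and sampling rate
ds = ds.cast_column("audio", Audio(sampling_rate=16_000, decode=True))

# decoding off: accessing ds[0]["audio"] returns the raw path/bytes, skipping the costly decoding step
ds = ds.cast_column("audio", Audio(sampling_rate=16_000, decode=False))
```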
1,078,902,390
https://api.github.com/repos/huggingface/datasets/issues/3429
https://github.com/huggingface/datasets/pull/3429
3,429
Make cast cacheable (again) on Windows
closed
0
2021-12-13T19:32:02
2021-12-14T14:39:51
2021-12-14T14:39:50
mariosasko
[]
`cast` currently emits the following warning when called on Windows: ``` Parameter 'function'=<function Dataset.cast.<locals>.<lambda> at 0x000001C930571EA0> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed. ``` It seems like the issue stems from the `config.PYARROW_VERSION` object not being serializable on Windows (tested with `dumps(lambda: config.PYARROW_VERSION)`), so I'm fixing this by capturing `config.PYARROW_VERSION.major` before the lambda definition.
true
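The design choice in the PR above can be illustrated with a simplified, hedged sketch: `do_something` is a stand-in helper, not code from the library. The point is that referencing `config.PYARROW_VERSION` inside the lambda makes the hashing machinery try to serialize the version object (which reportedly fails on Windows), whereas capturing the plain integer first keeps the lambda picklable.

```python
from datasets import config

def do_something(major_version):  # stand-in helper for illustration only
    return major_version >= 4

# problematic on Windows (per the PR): the version object is referenced inside the lambda
# transform = lambda: do_something(config.PYARROW_VERSION.major)

# fix described in the PR: capture the plain int before defining the lambda,
# so only an int needs to be serialized when the transform is hashed
pyarrow_major = config.PYARROW_VERSION.major
transform = lambda: do_something(pyarrow_major)
```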
1,078,863,468
https://api.github.com/repos/huggingface/datasets/issues/3428
https://github.com/huggingface/datasets/pull/3428
3,428
Clean squad dummy data
closed
0
2021-12-13T18:46:29
2021-12-13T18:57:50
2021-12-13T18:57:50
lhoestq
[]
Some unused files were remaining; this PR removes them. We only need to keep the dummy_data.zip file.
true
1,078,782,159
https://api.github.com/repos/huggingface/datasets/issues/3427
https://github.com/huggingface/datasets/pull/3427
3,427
Add The Pile Enron Emails subset
closed
0
2021-12-13T17:14:16
2021-12-14T17:30:59
2021-12-14T17:30:57
albertvillanova
[]
Add: - Enron Emails subset of The Pile: "enron_emails" config Close bigscience-workshop/data_tooling#310. CC: @StellaAthena
true
1,078,670,031
https://api.github.com/repos/huggingface/datasets/issues/3426
https://github.com/huggingface/datasets/pull/3426
3,426
Update disaster_response_messages download urls (+ add validation split)
closed
0
2021-12-13T15:30:12
2021-12-14T14:38:30
2021-12-14T14:38:29
mariosasko
[]
Fixes #3240, fixes #3416
true
1,078,598,140
https://api.github.com/repos/huggingface/datasets/issues/3425
https://github.com/huggingface/datasets/issues/3425
3,425
Getting configs names takes too long
open
3
2021-12-13T14:27:57
2021-12-13T14:53:33
null
severo
[ "bug" ]
## Steps to reproduce the bug ```python from datasets import get_dataset_config_names get_dataset_config_names("allenai/c4") ``` ## Expected results I would expect to get the answer quickly, at least in less than 10s ## Actual results It takes about 45s on my environment ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-5.11.0-1022-aws-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
false
1,078,543,625
https://api.github.com/repos/huggingface/datasets/issues/3424
https://github.com/huggingface/datasets/pull/3424
3,424
Add RedCaps dataset
closed
2
2021-12-13T13:38:13
2022-01-12T14:13:16
2022-01-12T14:13:15
mariosasko
[]
Add the RedCaps dataset. I'm not adding the generated `dataset_infos.json` file for now due to its size (11 MB). TODOs: - [x] dummy data - [x] dataset card Close #3316
true
1,078,049,638
https://api.github.com/repos/huggingface/datasets/issues/3423
https://github.com/huggingface/datasets/issues/3423
3,423
Data duplicated when setting num_workers > 1 with streaming data
closed
14
2021-12-13T03:43:17
2022-12-14T16:04:22
2022-12-14T16:04:22
cloudyuyuyu
[ "bug", "streaming" ]
## Describe the bug The data is repeated num_works times when we load_dataset with streaming and set num_works > 1 when construct dataloader ## Steps to reproduce the bug ```python # Sample code to reproduce the bug import pandas as pd import numpy as np import os from datasets import load_dataset from torch.utils.data import DataLoader from tqdm import tqdm import shutil NUM_OF_USER = 1000000 NUM_OF_ACTION = 50000 NUM_OF_SEQUENCE = 10000 NUM_OF_FILES = 32 NUM_OF_WORKERS = 16 if __name__ == "__main__": shutil.rmtree("./dataset") for i in range(NUM_OF_FILES): sequence_data = pd.DataFrame( { "imei": np.random.randint(1, NUM_OF_USER, size=NUM_OF_SEQUENCE), "sequence": np.random.randint(1, NUM_OF_ACTION, size=NUM_OF_SEQUENCE) } ) if not os.path.exists("./dataset"): os.makedirs("./dataset") sequence_data.to_csv(f"./dataset/sequence_data_{i}.csv", index=False) dataset = load_dataset("csv", data_files=[os.path.join("./dataset",file) for file in os.listdir("./dataset") if file.endswith(".csv")], split="train", streaming=True).with_format("torch") data_loader = DataLoader(dataset, batch_size=1024, num_workers=NUM_OF_WORKERS) result = pd.DataFrame() for i, batch in tqdm(enumerate(data_loader)): result = pd.concat([result, pd.DataFrame(batch)], axis=0) result.to_csv(f"num_work_{NUM_OF_WORKERS}.csv", index=False) ``` ## Expected results data do not duplicate ## Actual results data duplicate NUM_OF_WORKERS = 16 ![image](https://user-images.githubusercontent.com/16486492/145748707-9d2df25b-2f4f-4d7b-a83e-242be4fc8934.png) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:datasets==1.14.0 - Platform:transformers==4.11.3 - Python version:3.8 - PyArrow version:
false
1,078,022,619
https://api.github.com/repos/huggingface/datasets/issues/3422
https://github.com/huggingface/datasets/issues/3422
3,422
Error about load_metric
closed
1
2021-12-13T02:49:51
2022-01-07T14:06:47
2022-01-07T14:06:47
jiacheng-ye
[ "bug" ]
## Describe the bug File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1371, in load_metric metric = metric_cls( TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python metric = load_metric("glue", "sst2") ``` ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 6.0.1
false
1,077,966,571
https://api.github.com/repos/huggingface/datasets/issues/3421
https://github.com/huggingface/datasets/pull/3421
3,421
Adding mMARCO dataset
closed
7
2021-12-13T00:56:43
2022-10-03T09:37:15
2022-10-03T09:37:15
lhbonifacio
[ "dataset contribution" ]
Adding mMARCO (v1.1) to HF datasets.
true
1,077,913,468
https://api.github.com/repos/huggingface/datasets/issues/3420
https://github.com/huggingface/datasets/pull/3420
3,420
Add eli5_category dataset
closed
1
2021-12-12T21:30:45
2021-12-14T17:53:03
2021-12-14T17:53:02
jingshenSN2
[]
This pull request adds a categorized long-form question answering dataset, `ELI5_Category`. It's a new variant of the [ELI5](https://huggingface.co/datasets/eli5) dataset that uses Reddit tags to alleviate the training/validation overlap present in the original ELI5 dataset. A [report](https://celeritasml.netlify.app/posts/2021-12-01-eli5c/) (Section 2) on this dataset is available.
true
1,077,350,974
https://api.github.com/repos/huggingface/datasets/issues/3419
https://github.com/huggingface/datasets/issues/3419
3,419
`.to_json` is extremely slow after `.select`
open
6
2021-12-11T01:36:31
2021-12-21T15:49:07
null
eladsegal
[ "bug" ]
## Describe the bug Saving a dataset to JSON with `to_json` is extremely slow after using `.select` on the original dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset original = load_dataset("squad", split="train") original.to_json("from_original.json") # Takes 0 seconds selected_subset1 = original.select([i for i in range(len(original))]) selected_subset1.to_json("from_select1.json") # Takes 212 seconds selected_subset2 = original.select([i for i in range(int(len(original) / 2))]) selected_subset2.to_json("from_select2.json") # Takes 90 seconds ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: master (https://github.com/huggingface/datasets/commit/6090f3cfb5c819f441dd4a4bb635e037c875b044) - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyArrow version: 6.0.0
false
1,077,053,296
https://api.github.com/repos/huggingface/datasets/issues/3418
https://github.com/huggingface/datasets/pull/3418
3,418
Add Wikisource dataset
closed
1
2021-12-10T17:04:44
2022-10-04T09:35:56
2022-10-03T09:37:20
albertvillanova
[ "dataset contribution" ]
Add loading script for Wikisource dataset. Fix #3399. CC: @geohci, @yjernite
true
1,076,943,343
https://api.github.com/repos/huggingface/datasets/issues/3417
https://github.com/huggingface/datasets/pull/3417
3,417
Fix type of bridge field in QED
closed
0
2021-12-10T15:07:21
2021-12-14T14:39:06
2021-12-14T14:39:05
mariosasko
[]
Use `Value("string")` instead of `Value("bool")` for the feature type of the `"bridge"` field in the QED dataset. If the value is `False`, set to `None`. The following paragraph in the QED repo explains the purpose of this field: >Each annotation in referential_equalities is a pair of spans, the question_reference and the sentence_reference, corresponding to an entity mention in the question and the selected_sentence respectively. As described in the paper, sentence_references can be "bridged in", in which case they do not correspond with any actual span in the selected_sentence. Hence, sentence_reference spans contain an additional field, bridge, which is a prepositional phrase when a reference is bridged, and is False otherwise. Prepositional phrases serve to link bridged references to an anchoring phrase in the selected_sentence. In the case a sentence_reference is bridged, the start and end, as well as the span string, map to such an anchoring phrase in the selected_sentence. Fix #3346 cc @VictorSanh
true
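A tiny sketch of the conversion described in the PR above, for illustration only: the `bridge` field becomes a nullable string, with `False` mapped to `None` when examples are generated.

```python
from datasets import Features, Value

# feature type after the fix (it was Value("bool") before)
features = Features({"bridge": Value("string")})

def normalize_bridge(bridge):
    # prepositional phrase (str) when the reference is bridged, None otherwise
    return None if bridge is False else bridge
```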
1,076,868,771
https://api.github.com/repos/huggingface/datasets/issues/3416
https://github.com/huggingface/datasets/issues/3416
3,416
disaster_response_messages unavailable
closed
1
2021-12-10T13:49:17
2021-12-14T14:38:29
2021-12-14T14:38:29
sacdallago
[ "dataset-viewer" ]
## Dataset viewer issue for 'disaster_response_messages' **Link:** https://huggingface.co/datasets/disaster_response_messages The dataset is unavailable. The link is dead: https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv Am I the one who added this dataset? No
false
1,076,472,534
https://api.github.com/repos/huggingface/datasets/issues/3415
https://github.com/huggingface/datasets/issues/3415
3,415
Non-deterministic tests: CI tests randomly fail
closed
2
2021-12-10T06:08:59
2022-03-31T16:38:51
2022-03-31T16:38:51
albertvillanova
[ "bug" ]
## Describe the bug Some CI tests fail randomly. 1. In https://github.com/huggingface/datasets/pull/3375/commits/c10275fe36085601cb7bdb9daee9a8f1fc734f48, there were 3 failing tests, only on Linux: ``` =========================== short test summary info ============================ FAILED tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol[https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh-zip] FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive - Fi... FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped = 3 failed, 3553 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 192.79s (0:03:12) = ``` 2. After re-running the CI (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/57bfe1f342cd3c59d2510b992d5f06a0761eb147, there was only 1 failing test (one on Linux and a different one on Windows): - On Linux: ``` =========================== short test summary info ============================ FAILED tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped = 1 failed, 3555 passed, 2950 skipped, 2 xfailed, 1 xpassed, 125 warnings in 199.76s (0:03:19) = ``` - On Windows: ``` =========================== short test summary info =========================== FAILED tests/test_load.py::test_load_dataset_builder_for_community_dataset_without_script = 1 failed, 3551 passed, 2954 skipped, 2 xfailed, 1 xpassed, 121 warnings in 478.58s (0:07:58) = ``` The test `tests/test_streaming_download_manager.py::test_streaming_gg_drive_zipped` passes locally. 3. After re-running again the CI (without any change in the code) in https://github.com/huggingface/datasets/pull/3375/commits/39f32f2119cf91b86867216bb5c356c586503c6a, ALL the tests passed.
false
1,076,028,998
https://api.github.com/repos/huggingface/datasets/issues/3414
https://github.com/huggingface/datasets/pull/3414
3,414
Skip None encoding (line deleted by accident in #3195)
closed
0
2021-12-09T21:17:33
2021-12-10T11:00:03
2021-12-10T11:00:02
mariosasko
[]
Return the line deleted by accident in #3195 while [resolving merge conflicts](https://github.com/huggingface/datasets/pull/3195/commits/8b0ed15be08559056b817836a07d47acda0c4510). Fix #3181 (finally :))
true
1,075,854,325
https://api.github.com/repos/huggingface/datasets/issues/3413
https://github.com/huggingface/datasets/pull/3413
3,413
Add WIDER FACE dataset
closed
0
2021-12-09T18:03:38
2022-01-12T14:13:47
2022-01-12T14:13:47
mariosasko
[]
Adds the WIDER FACE face detection benchmark. TODOs: * [x] dataset card * [x] dummy data
true
1,075,846,368
https://api.github.com/repos/huggingface/datasets/issues/3412
https://github.com/huggingface/datasets/pull/3412
3,412
Fix flaky test again for s3 serialization
closed
0
2021-12-09T17:54:41
2021-12-09T18:00:52
2021-12-09T18:00:52
lhoestq
[]
Following https://github.com/huggingface/datasets/pull/3388 that wasn't enough (see CI error [here](https://app.circleci.com/pipelines/github/huggingface/datasets/9080/workflows/b971fb27-ff20-4220-9416-c19acdfdf6f4/jobs/55985))
true
1,075,846,272
https://api.github.com/repos/huggingface/datasets/issues/3411
https://github.com/huggingface/datasets/issues/3411
3,411
[chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script
open
2
2021-12-09T17:54:35
2021-12-22T11:21:33
null
hyusterr
[ "bug" ]
## Describe the bug Model I am using (Bert, XLNet ...): bert-base-chinese The problem arises when using: * [https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py] the official example script: `run_mlm_wwm.py` The task I am working on is: pretraining with whole word masking on my own dataset and ref.json file. I tried to follow the run_mlm_wwm.py procedure to do whole word masking for the pretraining task. My file is in .txt form, where one line represents one sample, with `9,264,784` Chinese lines in total. The ref.json file also contains 9,264,784 lines of whole word masking reference data for my Chinese corpus. But when I try to adapt the run_mlm_wwm.py script, somehow after `datasets["train"] = load_dataset(...` `len(datasets["train"])` returns `9,265,365`; then, after `tokenized_datasets = datasets.map(...` `len(tokenized_datasets["train"])` returns `9,265,279`. I'm really confused and have tried to trace the code myself, but after a week of trying I still can't figure out what happened. I want to know what happens inside the `load_dataset()` function and `datasets.map` here, and how I ended up with more lines of data than I put in. So I'm here to ask. ## To reproduce Sorry, I can't provide my data here since it doesn't belong to me, but I'm sure I removed the blank lines. ## Expected behavior I expect the code to run as it should, but the AssertionError in line 167 keeps being raised because the number of lines in the reference json and in datasets['train'] differ. Thanks for your patient reading! ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 3.0.0
false
1,075,815,415
https://api.github.com/repos/huggingface/datasets/issues/3410
https://github.com/huggingface/datasets/pull/3410
3,410
Fix dependencies conflicts in Windows CI after conda update to 4.11
closed
0
2021-12-09T17:19:11
2021-12-09T17:36:20
2021-12-09T17:36:19
lhoestq
[]
For some reason the CI wasn't using python 3.6 but python 3.7 after the update to conda 4.11
true
1,075,684,593
https://api.github.com/repos/huggingface/datasets/issues/3409
https://github.com/huggingface/datasets/pull/3409
3,409
Pass new_fingerprint in multiprocessing
closed
2
2021-12-09T15:12:00
2022-08-19T10:41:04
2021-12-09T17:38:43
lhoestq
[]
Following https://github.com/huggingface/datasets/pull/3045 Currently one can pass `new_fingerprint` to `.map()` to use a custom fingerprint instead of the one computed by hashing the map transform. However it's ignored if `num_proc>1`. In this PR I fixed that by passing `new_fingerprint` to `._map_single()` when `num_proc>1`. More specifically, `new_fingerprint` with a suffix based on the process `rank` is passed, so that each process has a different `new_fingerprint` cc @TevenLeScao @vlievin
true
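A rough illustration of the idea in the PR above; the names and suffix format here are made up for the sketch and are not taken from the actual implementation. Each worker process gets its own fingerprint derived from the user-provided one.

```python
num_proc = 4
new_fingerprint = "my_custom_fingerprint"  # value the user passed to .map()

# one distinct fingerprint per process, derived from the user-provided one
rank_fingerprints = [f"{new_fingerprint}_{rank:05d}_of_{num_proc:05d}" for rank in range(num_proc)]
# process `rank` would then receive new_fingerprint=rank_fingerprints[rank] in its ._map_single() call
print(rank_fingerprints)
```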
1,075,642,915
https://api.github.com/repos/huggingface/datasets/issues/3408
https://github.com/huggingface/datasets/issues/3408
3,408
Typo in Dataset viewer error message
closed
1
2021-12-09T14:34:02
2021-12-22T11:02:53
2021-12-22T11:02:53
lewtun
[ "dataset-viewer" ]
## Dataset viewer issue for '*name of the dataset*' **Link:** *link to the dataset viewer page* *short description of the issue* When creating an empty dataset repo, the Dataset Preview provides a helpful message that no files were found. There is a tiny typo in that message: "ressource" should be "resource" ![Screen Shot 2021-12-09 at 15 31 31](https://user-images.githubusercontent.com/26859204/145415725-9cd728f0-c2c8-4b4e-a8e1-4f4d7841c94a.png) Am I the one who added this dataset ? N/A
false
1,074,502,225
https://api.github.com/repos/huggingface/datasets/issues/3407
https://github.com/huggingface/datasets/pull/3407
3,407
Use max number of data files to infer module
closed
1
2021-12-08T14:58:43
2021-12-14T17:08:42
2021-12-14T17:08:42
albertvillanova
[]
When inferring the module for datasets without a script, set a maximum number of iterations over data files. This PR fixes the issue of module inference taking too long when hundreds of data files are present. Please, feel free to agree on both numbers: ``` # Datasets without script DATA_FILES_MAX_NUMBER = 10 ARCHIVED_DATA_FILES_MAX_NUMBER = 5 ``` Fix #3404.
true
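An illustrative sketch (not the actual implementation) of the capping logic described in the PR above, reusing the constant it proposes: only the first few data files are inspected when guessing the format.

```python
from collections import Counter

DATA_FILES_MAX_NUMBER = 10  # value proposed in the PR

def infer_extension(data_files):
    # look at a bounded number of files instead of iterating over all of them
    suffixes = [str(path).rsplit(".", 1)[-1].lower() for path in data_files[:DATA_FILES_MAX_NUMBER]]
    return Counter(suffixes).most_common(1)[0][0] if suffixes else None
```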
1,074,366,050
https://api.github.com/repos/huggingface/datasets/issues/3406
https://github.com/huggingface/datasets/pull/3406
3,406
Fix module inference for archive with a directory
closed
0
2021-12-08T12:39:12
2021-12-08T13:03:30
2021-12-08T13:03:29
albertvillanova
[]
Fix module inference for an archive file that contains files within a directory. Fix #3405.
true
1,074,360,362
https://api.github.com/repos/huggingface/datasets/issues/3405
https://github.com/huggingface/datasets/issues/3405
3,405
ZIP format inference does not work when files located in a dir inside the archive
closed
0
2021-12-08T12:32:15
2021-12-08T13:03:29
2021-12-08T13:03:29
albertvillanova
[ "bug" ]
## Describe the bug When a zipped file contains archived files within a directory, the function `infer_module_for_data_files_in_archives` does not work. It only works for files located in the root directory of the ZIP file. ## Steps to reproduce the bug ```python infer_module_for_data_files_in_archives(["path/to/zip/file.zip"], False) ```
false
1,073,657,561
https://api.github.com/repos/huggingface/datasets/issues/3404
https://github.com/huggingface/datasets/issues/3404
3,404
Optimize ZIP format inference
closed
0
2021-12-07T18:44:49
2021-12-14T17:08:41
2021-12-14T17:08:41
albertvillanova
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** When hundreds of ZIP files are present in a dataset, format inference takes too long. See: https://github.com/bigscience-workshop/data_tooling/issues/232#issuecomment-986685497 **Describe the solution you'd like** Iterate over a maximum number of files. CC: @lhoestq
false
1,073,622,120
https://api.github.com/repos/huggingface/datasets/issues/3403
https://github.com/huggingface/datasets/issues/3403
3,403
Cannot import name 'maybe_sync'
closed
4
2021-12-07T17:57:59
2021-12-17T07:00:35
2021-12-17T07:00:35
KMFODA
[ "bug" ]
## Describe the bug Cannot seem to import datasets when running run_summarizer.py script on a VM set up on ovhcloud ## Steps to reproduce the bug ```python from datasets import load_dataset ``` ## Expected results No error ## Actual results Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/conda/lib/python3.7/site-packages/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 48, in <module> from .arrow_writer import ArrowWriter, OptimizedTypedSequence File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py", line 27, in <module> from .features import ( File "/opt/conda/lib/python3.7/site-packages/datasets/features/__init__.py", line 2, in <module> from .audio import Audio File "/opt/conda/lib/python3.7/site-packages/datasets/features/audio.py", line 8, in <module> from ..utils.streaming_download_manager import xopen File "/opt/conda/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 16, in <module> from ..filesystems import COMPRESSION_FILESYSTEMS File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/__init__.py", line 13, in <module> from .s3filesystem import S3FileSystem # noqa: F401 File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/s3filesystem.py", line 1, in <module> import s3fs File "/opt/conda/lib/python3.7/site-packages/s3fs/__init__.py", line 1, in <module> from .core import S3FileSystem, S3File File "/opt/conda/lib/python3.7/site-packages/s3fs/core.py", line 11, in <module> from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn' (/opt/conda/lib/python3.7/site-packages/fsspec/asyn.py) ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.0 - Platform: OVH Cloud Tesla V100 Machine - Python version: 3.7.9 - PyArrow version: 6.0.1
false
1,073,614,815
https://api.github.com/repos/huggingface/datasets/issues/3402
https://github.com/huggingface/datasets/pull/3402
3,402
More robust first elem check in encode/cast example
closed
0
2021-12-07T17:48:16
2021-12-08T13:02:16
2021-12-08T13:02:15
mariosasko
[]
Fix #3306
true