Column schema:

| Column | Dtype | Range / classes |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
1,058,718,957
https://api.github.com/repos/huggingface/datasets/issues/3301
https://github.com/huggingface/datasets/pull/3301
3,301
Add wikipedia tags
closed
0
2021-11-19T16:39:25
2021-11-19T16:49:30
2021-11-19T16:49:29
lhoestq
[]
Add the missing tags to the wikipedia dataset card. I also added the missing language codes to our language codes list. This should also fix the code snippet that is presented on the Hub to load the dataset: fix https://github.com/huggingface/datasets/issues/3292
true
1,058,644,459
https://api.github.com/repos/huggingface/datasets/issues/3300
https://github.com/huggingface/datasets/issues/3300
3,300
❓ Dataset loading script from Hugging Face Hub
closed
8
2021-11-19T15:20:52
2021-12-22T10:57:56
2021-12-22T10:57:56
pietrolesci
[ "dataset request", "dataset-viewer" ]
Hi there, I am trying to add my custom `ag_news` with its own loading script on the Hugging Face datasets hub. In particular, I would like to test the addition of a second configuration to the existing `ag_news` dataset. Once it works in my hub, I plan to make a PR to the original dataset. However, in trying to do so I have encountered certain problems as detailed below. Issues I have encountered: - Without a loading script, the train and test files are loaded together into a unique `dataset.Dataset` -> so I wrote a loading script. Also, I need a loading script otherwise I cannot specify multiple configurations - Once my loading script is working locally, I do not manage to make it work on the hub. In particular, I would like to be able to load the dataset like this ```python load_dataset("pietrolesci/ag_news", name="my_configuration") ``` Apparently, the `load_dataset` is able to pick up the loading script from the hub and run it. However, it errors because it is unable to find the files. The structure of my hub repo is the following ``` ag_news.py train.csv test.csv ``` and the loading script I specify `data_dir=Path(__file__).parent` and `data_files=DataFilesDict({"train": "train.csv", "test": "test.csv"})`. In the documentation I could not find info regarding loading a dataset from the hub using a loading script present on the hub. Any suggestion is very much appreciated. Best, Pietro Link to the hub repo: https://huggingface.co/datasets/pietrolesci/ag_news BONUS: how can I make the data viewer work in this specific case? :)
false
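A possible shape for the loading script asked about in #3300 above, offered as a hedged sketch: it assumes a `datasets` version whose download manager resolves relative file names against the dataset repository on the Hub (the usual way Hub-hosted scripts reference their sibling `train.csv`/`test.csv`), and the class name, config handling and CSV column names are illustrative, not taken from the actual `pietrolesci/ag_news` repo.

```python
import csv

import datasets


class AgNewsCustom(datasets.GeneratorBasedBuilder):
    """Illustrative sketch only; names, configs and columns are assumptions."""

    BUILDER_CONFIGS = [datasets.BuilderConfig(name="my_configuration", version=datasets.Version("1.0.0"))]

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string"), "label": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Relative file names are resolved against the dataset repository,
        # so Path(__file__).parent is not needed.
        files = dl_manager.download({"train": "train.csv", "test": "test.csv"})
        return [
            datasets.SplitGenerator(datasets.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
            datasets.SplitGenerator(datasets.Split.TEST, gen_kwargs={"filepath": files["test"]}),
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f)):
                yield idx, {"text": row["text"], "label": row["label"]}
```

The intent of such a layout is that `load_dataset("pietrolesci/ag_news", name="my_configuration")` selects the config by name.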
1,058,518,213
https://api.github.com/repos/huggingface/datasets/issues/3299
https://github.com/huggingface/datasets/issues/3299
3,299
Add option to find unique elements in nested sequences when calling `Dataset.unique`
open
4
2021-11-19T13:16:06
2023-05-19T14:45:40
null
mariosasko
[ "enhancement" ]
It would be nice to have an option to flatten nested sequences to find unique elements stored in them when calling `Dataset.unique`. ~~Currently, `Dataset.unique` only supports finding unique sequences and not unique elements in that situation.~~
false
1,058,420,201
https://api.github.com/repos/huggingface/datasets/issues/3298
https://github.com/huggingface/datasets/issues/3298
3,298
Agnews dataset viewer is not working
closed
3
2021-11-19T11:18:59
2021-12-21T16:24:05
2021-12-21T16:24:05
pietrolesci
[ "dataset-viewer" ]
## Dataset viewer issue for 'ag_news' **Link:** https://huggingface.co/datasets/ag_news Hi there, the `ag_news` dataset viewer is not working. Am I the one who added this dataset? No
false
1,058,263,859
https://api.github.com/repos/huggingface/datasets/issues/3297
https://github.com/huggingface/datasets/issues/3297
3,297
.map() cache is wrongfully reused - only happens when the mapping function is imported
open
5
2021-11-19T08:18:36
2025-07-31T16:29:29
null
eladsegal
[ "bug" ]
## Describe the bug When `.map` is used with a mapping function that is imported, the cache is reused even if the mapping function has been modified. The reason for this is that `dill` that is used for creating the fingerprint [pickles imported functions by reference](https://stackoverflow.com/a/67851411). I guess it is not a widespread case, but it can still lead to unwanted results unnoticeably. ## Steps to reproduce the bug Create files `a.py` and `b.py`: ```python # a.py from datasets import load_dataset def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples if __name__ == "__main__": main() ``` ```python # b.py from datasets import load_dataset from a import mapping_func def main(): squad = load_dataset("squad") squad.map(mapping_func, batched=True) if __name__ == "__main__": main() ``` Run `python b.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python b.py` again. You'll see that `.map` loads from the cache the result of the previous mapping function. ## Expected results Run `python a.py` twice: In the first run you will see tqdm bars showing that the data is processed, and in the second run you will see "Loading cached processed dataset at...". Now change `ID_LENGTH` to another number in order to change the mapping function, and run `python a.py` again. You'll see that the dataset is being processed and that there's no reuse of the previous mapping function result. ## Workaround Put the mapping function inside a dummy class as a static method: ```python # a.py class MappingFuncClass: @staticmethod def mapping_func(examples): ID_LENGTH = 4 examples["id"] = [id_[:ID_LENGTH] for id_ in examples["id"]] return examples ``` ```python # b.py from datasets import load_dataset from a import MappingFuncClass def main(): squad = load_dataset("squad") squad.map(MappingFuncClass.mapping_func, batched=True) if __name__ == "__main__": main() ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17 - Python version: 3.8.10 - PyArrow version: 4.0.1
false
1,057,970,638
https://api.github.com/repos/huggingface/datasets/issues/3296
https://github.com/huggingface/datasets/pull/3296
3,296
Fix temporary dataset_path creation for URIs related to remote fs
closed
2
2021-11-18T23:32:45
2021-12-06T10:45:04
2021-12-06T10:45:04
francisco-perez-sorrosal
[]
This aims to close #3295
true
1,057,954,892
https://api.github.com/repos/huggingface/datasets/issues/3295
https://github.com/huggingface/datasets/issues/3295
3,295
Temporary dataset_path for remote fs URIs not built properly in arrow_dataset.py::load_from_disk
closed
1
2021-11-18T23:24:02
2021-12-06T10:45:04
2021-12-06T10:45:04
francisco-perez-sorrosal
[ "bug" ]
## Describe the bug When trying to build a temporary dataset path from a remote URI in this block of code: https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1038-L1042 the result is not the expected when passing an absolute path in an URI like `hdfs:///absolute/path`. ## Steps to reproduce the bug ```python dataset_path = "hdfs:///absolute/path" src_dataset_path = extract_path_from_uri(dataset_path) tmp_dir = get_temporary_cache_files_directory() dataset_path = Path(tmp_dir, src_dataset_path) print(dataset_path) ``` ## Expected results With the code above, we would expect a value in `dataset_path` similar to: `/tmp/tmpnwxyvao5/absolute/path` ## Actual results However, we get a `dataset_path` value like: `/absolute/path` This is because this line here: https://github.com/huggingface/datasets/blob/42f6b1d18a4a1b6009b6e62d115491be16dfca22/src/datasets/arrow_dataset.py#L1041 returns the last absolute path when two absolute paths (the one in `tmp_dir` and the one extracted from the URI in `src_dataset_path`) are passed as arguments. ## Environment info - `datasets` version: 1.13.3 - Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.33 - Python version: 3.9.7 - PyArrow version: 5.0.0
false
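The `Path` behaviour described in #3295 above is easy to check in isolation. A minimal sketch using plain `pathlib` with the values from the issue, including the kind of workaround of stripping the leading separator before joining (not necessarily the exact fix in #3296):

```python
from pathlib import Path

tmp_dir = "/tmp/tmpnwxyvao5"
src_dataset_path = "/absolute/path"  # extracted from "hdfs:///absolute/path"

# Path keeps only the last absolute component, so the temporary directory is dropped.
print(Path(tmp_dir, src_dataset_path))              # -> /absolute/path

# Stripping the leading separator restores the intended nesting.
print(Path(tmp_dir, src_dataset_path.lstrip("/")))  # -> /tmp/tmpnwxyvao5/absolute/path
```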
1,057,495,473
https://api.github.com/repos/huggingface/datasets/issues/3294
https://github.com/huggingface/datasets/issues/3294
3,294
Add Natural Adversarial Objects dataset
open
0
2021-11-18T15:34:44
2021-12-08T12:00:02
null
osanseviero
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** Natural Adversarial Objects (NAO) - **Description:** Natural Adversarial Objects (NAO) is a new dataset to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. - **Paper:** https://arxiv.org/abs/2111.04204v1 - **Data:** https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8 - **Motivation:** interesting object detection dataset useful for studying misclassifications cc @NielsRogge Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,057,004,431
https://api.github.com/repos/huggingface/datasets/issues/3293
https://github.com/huggingface/datasets/pull/3293
3,293
Pin version exclusion for Markdown
closed
0
2021-11-18T06:56:01
2021-11-18T10:28:05
2021-11-18T10:28:04
albertvillanova
[]
As Markdown version 3.3.5 has a bug, it is better to exclude it in case the users have it previously installed in their environment. Related to #3289, #3286.
true
1,056,962,554
https://api.github.com/repos/huggingface/datasets/issues/3292
https://github.com/huggingface/datasets/issues/3292
3,292
Not able to load 'wikipedia' dataset
closed
1
2021-11-18T05:41:18
2021-11-19T16:49:29
2021-11-19T16:49:29
abhibisht89
[ "bug" ]
## Describe the bug I am following the instruction for loading the wikipedia dataset using datasets. However getting the below error. ## Steps to reproduce the bug from datasets import load_dataset dataset = load_dataset("wikipedia") ``` ## Expected results A clear and concise description of the expected results. ## Actual results ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs) 339 "Config name is missing." 340 "\nPlease pick one among the available configs: %s" % list(self.builder_configs.keys()) --> 341 + "\nExample of usage:\n\t`{}`".format(example_of_usage) 342 ) 343 builder_config = self.BUILDER_CONFIGS[0] ValueError: Config name is missing. Please pick one among the available configs: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', 
'20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] Example of usage: `load_dataset('wikipedia', '20200501.aa')` I think the other parameter is missing in the load_dataset function that is not shown in the instruction.
false
1,056,689,876
https://api.github.com/repos/huggingface/datasets/issues/3291
https://github.com/huggingface/datasets/pull/3291
3,291
Use f-strings in the dataset scripts
closed
0
2021-11-17T22:20:19
2021-11-22T16:40:16
2021-11-22T16:40:16
Carlosbogo
[]
Uses f-strings to format the .py files in the dataset folder
true
1,056,414,856
https://api.github.com/repos/huggingface/datasets/issues/3290
https://github.com/huggingface/datasets/pull/3290
3,290
Make several audio datasets streamable
closed
4
2021-11-17T17:43:41
2022-02-01T21:00:52
2021-11-19T15:08:57
lhoestq
[]
<s>Needs https://github.com/huggingface/datasets/pull/3129 to be merged first</s> Make those audio datasets streamable: - [x] common_voice - [x] openslr - [x] vivos - [x] librispeech_asr <s>(still has some issues to read FLAC)</s> *actually it's ok* - [ ] <s>multilingual_librispeech (yet to be converted)</S> *TODO in a separate PR*
true
1,056,323,715
https://api.github.com/repos/huggingface/datasets/issues/3289
https://github.com/huggingface/datasets/pull/3289
3,289
Unpin markdown for build_docs now that it's fixed
closed
0
2021-11-17T16:22:53
2021-11-17T16:23:09
2021-11-17T16:23:08
lhoestq
[]
`markdown`'s bug has been fixed, so this PR reverts #3286
true
1,056,145,703
https://api.github.com/repos/huggingface/datasets/issues/3288
https://github.com/huggingface/datasets/pull/3288
3,288
Allow datasets with indices table when concatenating along axis=1
closed
0
2021-11-17T13:41:28
2021-11-17T15:41:12
2021-11-17T15:41:11
mariosasko
[]
Calls `flatten_indices` on the datasets with indices table in `concatenate_datasets` to fix issues when concatenating along `axis=1`. cc @lhoestq: I decided to flatten all the datasets instead of flattening all the datasets except the largest one in the end. The latter approach fails on the following example: ```python a = Dataset.from_dict({"a": [10, 20, 30, 40]}) b = Dataset.from_dict({"b": [10, 20, 30, 40, 50, 60]}) # largest dataset a = a.select([1, 2, 3]) b = b.select([1, 2, 3]) concatenate_datasets([a, b], axis=1) # fails at line concat_tables(...) because the real length of b's data is 6 and a's length is 3 after flattening (was 4 before flattening) ``` Also, it requires additional re-ordering of indices to prepare them for working with the indices table of the largest dataset. IMO not worth when we save only one `flatten_indices` call. (feel free to check the code of that approach at https://github.com/huggingface/datasets/commit/6acd10481c70950dcfdbfd2bab0bf0c74ad80bcb if you are interested) Fixes #3273
true
1,056,079,724
https://api.github.com/repos/huggingface/datasets/issues/3287
https://github.com/huggingface/datasets/pull/3287
3,287
Add The Pile dataset and PubMed Central subset
closed
0
2021-11-17T12:35:58
2021-12-01T15:29:08
2021-12-01T15:29:07
albertvillanova
[]
Add: - The complete final version of The Pile dataset: "all" config - PubMed Central subset of The Pile: "pubmed_central" config Close #1675, close bigscience-workshop/data_tooling#74. CC: @StellaAthena, @lewtun
true
1,056,008,586
https://api.github.com/repos/huggingface/datasets/issues/3286
https://github.com/huggingface/datasets/pull/3286
3,286
Fix build_docs CI
closed
0
2021-11-17T11:18:56
2021-11-17T11:19:20
2021-11-17T11:19:19
lhoestq
[]
Because of https://github.com/Python-Markdown/markdown/issues/1196 we have to temporarily pin `markdown` to 3.3.4 for the docs to build without issues
true
1,055,506,730
https://api.github.com/repos/huggingface/datasets/issues/3285
https://github.com/huggingface/datasets/issues/3285
3,285
Add IEMOCAP dataset
open
8
2021-11-16T22:47:20
2023-06-10T08:14:52
null
osanseviero
[ "dataset request", "speech", "vision" ]
## Adding a Dataset - **Name:** IEMOCAP - **Description:** acted, multimodal and multispeaker database - **Paper:** https://sail.usc.edu/iemocap/Busso_2008_iemocap.pdf - **Data:** https://sail.usc.edu/iemocap/index.html - **Motivation:** Useful multimodal dataset cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,055,502,909
https://api.github.com/repos/huggingface/datasets/issues/3284
https://github.com/huggingface/datasets/issues/3284
3,284
Add VoxLingua107 dataset
open
1
2021-11-16T22:44:08
2021-12-06T09:49:45
null
osanseviero
[ "dataset request", "speech" ]
## Adding a Dataset - **Name:** VoxLingua107 - **Description:** VoxLingua107 is a speech dataset for training spoken language identification models. The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according the language of the video title and description, with some post-processing steps to filter out false positives. - **Paper:** https://arxiv.org/abs/2011.12998 - **Data:** http://bark.phon.ioc.ee/voxlingua107/ - **Motivation:** Nice audio classification dataset cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,055,495,874
https://api.github.com/repos/huggingface/datasets/issues/3283
https://github.com/huggingface/datasets/issues/3283
3,283
Add Speech Commands dataset
closed
1
2021-11-16T22:39:56
2021-12-10T10:30:15
2021-12-10T10:30:15
osanseviero
[ "dataset request", "speech" ]
## Adding a Dataset - **Name:** Speech commands - **Description:** A Dataset for Limited-Vocabulary Speech Recognition - **Paper:** https://arxiv.org/abs/1804.03209 - **Data:** https://www.tensorflow.org/datasets/catalog/speech_commands, Available: http://download.tensorflow.org/data/speech_commands_v0.02.tar.gz - **Motivation:** Nice dataset for audio classification training cc @anton-l Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,055,054,898
https://api.github.com/repos/huggingface/datasets/issues/3282
https://github.com/huggingface/datasets/issues/3282
3,282
ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py
closed
7
2021-11-16T16:05:19
2022-04-12T11:57:43
2022-04-12T11:57:43
MinionAttack
[ "dataset-viewer" ]
## Dataset viewer issue for '*oscar-corpus/OSCAR-2109*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)* *The datasets library cannot download any language from the oscar-corpus/OSCAR-2109 dataset. By entering the URL in your browser I can access the file.* ``` raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://huggingface.co/datasets/oscar-corpus/OSCAR-2109/resolve/main/OSCAR-2109.py ``` Am I the one who added this dataset? No Using the older version of [OSCAR](https://huggingface.co/datasets/oscar) I don't have any issues downloading languages with the datasets library.
false
1,055,018,876
https://api.github.com/repos/huggingface/datasets/issues/3281
https://github.com/huggingface/datasets/pull/3281
3,281
[Datasets] Improve Covost 2
closed
2
2021-11-16T15:32:19
2022-01-26T16:17:06
2021-11-18T10:44:04
patrickvonplaten
[]
The manual data download instructions of Covost are currently quite confusing and not very user-friendly. Currently the user has to: 1. Go on the Common Voice website 2. Find the correct dataset, which is **not** mentioned in the error message 3. Download it 4. Untar it 5. Create a language id folder (why? this folder does not exist in the downloaded `.tar` file) 6. Pass the folder containing the created language id folder This PR improves this to: 1. Go on the Common Voice website 2. Find the correct dataset, which **is** mentioned in the error message 3. Download it 4. Untar it 5. Pass the untarred folder **Note**: This PR is not at all time-critical
true
1,054,766,828
https://api.github.com/repos/huggingface/datasets/issues/3280
https://github.com/huggingface/datasets/pull/3280
3,280
Fix bookcorpusopen RAM usage
closed
0
2021-11-16T11:27:52
2021-11-17T15:53:28
2021-11-16T13:34:30
lhoestq
[]
Each document is a full book, so the default arrow writer batch size of 10,000 is too big, and it can fill up RAM quickly before flushing the first batch on disk. I changed its batch size to 256 to use at most around 100MB of memory. Fix #3167.
true
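A hedged sketch of the kind of change described in #3280 above: a `GeneratorBasedBuilder` can override `DEFAULT_WRITER_BATCH_SIZE` to control how many examples the Arrow writer buffers before flushing to disk. The builder below is illustrative, not the actual `bookcorpusopen` script.

```python
import datasets


class BookCorpusLike(datasets.GeneratorBasedBuilder):
    """Illustrative builder; features and examples are made up."""

    # Each example is a full book, so flush every 256 examples instead of the
    # default 10,000 to keep the Arrow writer's RAM usage low.
    DEFAULT_WRITER_BATCH_SIZE = 256

    def _info(self):
        return datasets.DatasetInfo(features=datasets.Features({"text": datasets.Value("string")}))

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(datasets.Split.TRAIN, gen_kwargs={})]

    def _generate_examples(self):
        for i in range(1000):
            yield i, {"text": "a very long book ... " * 1000}
```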
1,054,711,852
https://api.github.com/repos/huggingface/datasets/issues/3279
https://github.com/huggingface/datasets/pull/3279
3,279
Minor Typo Fix - Precision to Recall
closed
0
2021-11-16T10:32:22
2021-11-16T11:18:03
2021-11-16T11:18:02
SebastinSanty
[]
null
true
1,054,249,463
https://api.github.com/repos/huggingface/datasets/issues/3278
https://github.com/huggingface/datasets/pull/3278
3,278
Proposed update to the documentation for WER
closed
0
2021-11-15T23:28:31
2021-11-16T11:19:37
2021-11-16T11:19:37
wooters
[]
I wanted to submit a minor update to the description of WER for your consideration. Because of the possibility of insertions, the numerator in the WER formula can be larger than N, so the value of WER can be greater than 1.0: ``` >>> from datasets import load_metric >>> metric = load_metric("wer") >>> metric.compute(predictions=["hello how are you"], references=["hello"]) 3.0 ``` and similarly from the underlying jiwer module's `wer` function: ``` >>> from jiwer import wer >>> wer("hello", "hello how are you") 3.0 ```
true
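For reference, the standard definition behind the observation in #3278 above, with S, D and I the substitution, deletion and insertion counts and N the number of words in the reference:

```latex
\mathrm{WER} = \frac{S + D + I}{N}
```

In the quoted example the reference is a single word, so N = 1, and the three inserted words give WER = (0 + 0 + 3) / 1 = 3.0, i.e. values above 1.0 are expected.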
1,054,122,656
https://api.github.com/repos/huggingface/datasets/issues/3277
https://github.com/huggingface/datasets/pull/3277
3,277
f-string formatting
closed
1
2021-11-15T21:37:05
2021-11-19T20:40:08
2021-11-17T16:18:38
Mehdi2402
[]
**Fix #3257** Replaced _.format()_ and _%_ by f-strings in the following modules : - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** - [x] **src/Datasets/\*.py** Modules in **_src/Datasets/_**: - [x] **commands** - [x] **features** - [x] **formatting** - [x] **io** - [x] **tasks** - [x] **utils** Module **datasets** will not be edited as asked by @mariosasko -A correction of the first PR (#3267)-
true
1,053,793,063
https://api.github.com/repos/huggingface/datasets/issues/3276
https://github.com/huggingface/datasets/pull/3276
3,276
Update KILT metadata JSON
closed
0
2021-11-15T15:25:25
2021-11-16T11:21:59
2021-11-16T11:21:58
albertvillanova
[]
Fix #3265.
true
1,053,698,898
https://api.github.com/repos/huggingface/datasets/issues/3275
https://github.com/huggingface/datasets/pull/3275
3,275
Force data files extraction if download_mode='force_redownload'
closed
0
2021-11-15T14:00:24
2021-11-15T14:45:23
2021-11-15T14:45:23
mariosasko
[]
Avoids weird issues when redownloading a dataset due to cached data not being fully updated. With this change, issues #3122 and https://github.com/huggingface/datasets/issues/2956 (not a fix, but a workaround) can be fixed as follows: ```python dset = load_dataset(..., download_mode="force_redownload") ```
true
1,053,689,140
https://api.github.com/repos/huggingface/datasets/issues/3274
https://github.com/huggingface/datasets/pull/3274
3,274
Fix some contact information formats
closed
1
2021-11-15T13:50:34
2021-11-15T14:43:55
2021-11-15T14:43:54
lhoestq
[]
As reported in https://github.com/huggingface/datasets/issues/3188 some contact information are not displayed correctly. This PR fixes this for CoNLL-2002 and some other datasets with the same issue
true
1,053,554,038
https://api.github.com/repos/huggingface/datasets/issues/3273
https://github.com/huggingface/datasets/issues/3273
3,273
Respect row ordering when concatenating datasets along axis=1
closed
0
2021-11-15T11:27:14
2021-11-17T15:41:11
2021-11-17T15:41:11
mariosasko
[ "bug" ]
Currently, there is a bug when concatenating datasets along `axis=1` if more than one dataset has the `_indices` attribute defined. In that scenario, all indices mappings except the first one get ignored. A minimal reproducible example: ```python >>> from datasets import Dataset, concatenate_datasets >>> a = Dataset.from_dict({"a": [30, 20, 10]}) >>> b = Dataset.from_dict({"b": [2, 1, 3]}) >>> d = concatenate_datasets([a.sort("a"), b.sort("b")], axis=1) >>> print(d[:3]) # expected: {'a': [10, 20, 30], 'b': [1, 2, 3]} {'a': [10, 20, 30], 'b': [3, 1, 2]} ``` I've noticed the bug while working on #3195.
false
1,053,516,479
https://api.github.com/repos/huggingface/datasets/issues/3272
https://github.com/huggingface/datasets/issues/3272
3,272
Make iter_archive work with ZIP files
open
4
2021-11-15T10:50:42
2021-11-25T00:08:47
null
lhoestq
[ "enhancement" ]
Currently users can use `dl_manager.iter_archive` in their dataset script to iterate over all the files of a TAR archive. It would be nice if it could work with ZIP files too!
false
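For readers unfamiliar with the API discussed in #3272 above: `dl_manager.iter_archive` yields `(path_inside_archive, file_object)` pairs, which is what makes TAR-based scripts streamable. A minimal sketch of how a `_generate_examples` typically consumes it (the `.txt` filter and field names are assumptions for illustration):

```python
def _generate_examples(self, files):
    """`files` is the iterator returned by dl_manager.iter_archive(archive_path)."""
    key = 0
    for path_in_archive, fileobj in files:
        # Keep only the members we care about; the extension is an assumption.
        if path_in_archive.endswith(".txt"):
            yield key, {"file": path_in_archive, "text": fileobj.read().decode("utf-8")}
            key += 1
```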
1,053,482,919
https://api.github.com/repos/huggingface/datasets/issues/3271
https://github.com/huggingface/datasets/pull/3271
3,271
Decode audio from remote
closed
0
2021-11-15T10:25:56
2021-11-16T11:35:58
2021-11-16T11:35:58
lhoestq
[]
Currently the Audio feature type can only decode local audio files, not remote files. To fix this I replaced `open` with our `xopen` function, which is compatible with remote files, in audio.py. cc @albertvillanova @mariosasko
true
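A rough illustration of the `open` to `xopen` swap described in #3271 above; `xopen` is an internal helper (its import path below matches the one used elsewhere in these issues), so treat this as a sketch of the pattern rather than the actual diff:

```python
from datasets.utils.streaming_download_manager import xopen


def read_audio_bytes(path_or_url: str) -> bytes:
    # xopen behaves like open() for local paths but also accepts remote URLs,
    # so the same decoding code path can serve both cases.
    with xopen(path_or_url, "rb") as f:
        return f.read()
```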
1,053,465,662
https://api.github.com/repos/huggingface/datasets/issues/3270
https://github.com/huggingface/datasets/pull/3270
3,270
Add os.listdir for streaming
closed
0
2021-11-15T10:14:04
2021-11-15T10:27:03
2021-11-15T10:27:03
lhoestq
[]
Extend `os.listdir` to support streaming data from remote files. This is often used to navigate in remote ZIP files for example
true
1,053,218,769
https://api.github.com/repos/huggingface/datasets/issues/3269
https://github.com/huggingface/datasets/issues/3269
3,269
coqa NonMatchingChecksumError
closed
18
2021-11-15T05:04:07
2022-01-19T13:58:19
2022-01-19T13:58:19
ZhaofengWu
[ "bug" ]
``` >>> from datasets import load_dataset >>> dataset = load_dataset("coqa") Downloading: 3.82kB [00:00, 1.26MB/s] Downloading: 1.79kB [00:00, 733kB/s] Using custom data configuration default Downloading and preparing dataset coqa/default (download: 55.40 MiB, generated: 18.35 MiB, post-processed: Unknown size, total: 73.75 MiB) to /Users/zhaofengw/.cache/huggingface/datasets/coqa/default/1.0.0/553ce70bfdcd15ff4b5f4abc4fc2f37137139cde1f58f4f60384a53a327716f0... Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.38MB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 222/222 [00:00<00:00, 1.32MB/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.91it/s] 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1117.44it/s] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/builder.py", line 679, in _download_and_prepare verify_checksums( File "/Users/zhaofengw/miniconda3/lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://nlp.stanford.edu/data/coqa/coqa-train-v1.0.json', 'https://nlp.stanford.edu/data/coqa/coqa-dev-v1.0.json'] ```
false
1,052,992,681
https://api.github.com/repos/huggingface/datasets/issues/3268
https://github.com/huggingface/datasets/issues/3268
3,268
Dataset viewer issue for 'liweili/c4_200m'
closed
5
2021-11-14T17:18:46
2021-12-21T10:25:20
2021-12-21T10:24:51
liliwei25
[ "dataset-viewer" ]
## Dataset viewer issue for '*liweili/c4_200m*' **Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/liweili/c4_200m)* *Server Error* ``` Status code: 404 Exception: Status404Error Message: Not found. Maybe the cache is missing, or maybe the ressource does not exist. ``` Am I the one who added this dataset ? Yes
false
1,052,750,084
https://api.github.com/repos/huggingface/datasets/issues/3267
https://github.com/huggingface/datasets/pull/3267
3,267
Replacing .format() and % by f-strings
closed
4
2021-11-13T19:12:02
2021-11-16T21:00:26
2021-11-16T14:55:43
Mehdi2402
[]
**Fix #3257** Replaced _.format()_ and _%_ by f-strings in the following modules : - [x] **tests** - [x] **metrics** - [x] **benchmarks** - [x] **utils** - [x] **templates** Will follow in the next PR the modules left : - [ ] **src** Module **datasets** will not be edited as asked by @mariosasko PS : black and isort applied to files
true
1,052,700,155
https://api.github.com/repos/huggingface/datasets/issues/3266
https://github.com/huggingface/datasets/pull/3266
3,266
Fix URLs for WikiAuto Manual, jeopardy and definite_pronoun_resolution
closed
10
2021-11-13T15:01:34
2021-12-06T11:16:31
2021-12-06T11:16:31
LashaO
[]
[#3264](https://github.com/huggingface/datasets/issues/3264)
true
1,052,666,558
https://api.github.com/repos/huggingface/datasets/issues/3265
https://github.com/huggingface/datasets/issues/3265
3,265
Checksum error for kilt_task_wow
closed
2
2021-11-13T12:04:17
2021-11-16T11:23:53
2021-11-16T11:21:58
slyviacassell
[ "bug" ]
## Describe the bug Checksum failed when downloads kilt_tasks_wow. See error output for details. ## Steps to reproduce the bug ```python import datasets datasets.load_datasets('kilt_tasks','wow') ``` ## Expected results Download successful ## Actual results ``` Downloading and preparing dataset kilt_tasks/wow (download: 72.07 MiB, generated: 61.82 MiB, post-processed: Unknown size, total: 133.89 MiB) to /root/.cache/huggingface/datasets/kilt_tasks/wow/1.0.0/57dc8b2431e76637e0c6ef79689ca4af61ed3a330e2e0cd62c8971465a35db3a... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 5121.25it/s] 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1527.42it/s] Traceback (most recent call last): File "kilt_wow.py", line 30, in <module> main() File "kilt_wow.py", line 27, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "kilt_wow.py", line 21, in load_dataset return datasets.load_dataset('kilt_tasks','wow') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 679, in _download_and_prepare verify_checksums( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://dl.fbaipublicfiles.com/KILT/wow-train-kilt.jsonl', 'http://dl.fbaipublicfiles.com/KILT/wow-dev-kilt.jsonl'] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
false
1,052,663,513
https://api.github.com/repos/huggingface/datasets/issues/3264
https://github.com/huggingface/datasets/issues/3264
3,264
Downloading URL change for WikiAuto Manual, jeopardy and definite_pronoun_resolution
closed
3
2021-11-13T11:47:12
2022-06-01T17:38:16
2022-06-01T17:38:16
slyviacassell
[ "bug" ]
## Describe the bug - WikiAuto Manual The original manual datasets with the following downloading URL in this [repository](https://github.com/chaojiang06/wiki-auto) was [deleted](https://github.com/chaojiang06/wiki-auto/commit/0af9b066f2b4e02726fb8a9be49283c0ad25367f) by the author. ``` https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy The downloading URL for jeopardy may move from ``` http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` to ``` https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?resourcekey=0-1abK4cJq-mqxFoSg86ieIg ``` - definite_pronoun_resolution The following downloading URL for definite_pronoun_resolution cannot be reached for some reasons. ``` http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Steps to reproduce the bug ```python import datasets datasets.load_datasets('wiki_auto','manual') datasets.load_datasets('jeopardy') datasets.load_datasets('definite_pronoun_resolution') ``` ## Expected results Download successfully ## Actual results - WikiAuto Manual ``` Downloading and preparing dataset wiki_auto/manual (download: 151.65 MiB, generated: 155.97 MiB, post-processed: Unknown size, total: 307.61 MiB) to /root/.cache/huggingface/datasets/wiki_auto/manual/1.0.0/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8... 0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last): File "wiki_auto.py", line 43, in <module> main() File "wiki_auto.py", line 40, in main train, dev, test = dataset.generate_k_shot_data(k=16, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 24, in generate_k_shot_data dataset = self.load_dataset() File "wiki_auto.py", line 34, in load_dataset return datasets.load_dataset('wiki_auto', 'manual') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wiki_auto/5ffdd9fc62422d29bd02675fb9606f77c1251ee17169ac10b143ce07ef2f4db8/wiki_auto.py", line 193, in _split_generators data_dir = dl_manager.download_and_extract(my_urls) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File 
"/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 592, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://github.com/chaojiang06/wiki-auto/raw/master/wiki-manual/train.tsv ``` - jeopardy ``` Using custom data configuration default Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /root/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810... Traceback (most recent call last): File "jeopardy.py", line 45, in <module> main() File "jeopardy.py", line 42, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "jeopardy.py", line 36, in load_dataset return datasets.load_dataset("jeopardy") File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators filepath = dl_manager.download_and_extract(_DATA_URL) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` - definite_pronoun_resolution ``` Downloading and preparing dataset definite_pronoun_resolution/plain_text (download: 222.12 KiB, generated: 239.12 KiB, post-processed: Unknown size, total: 461.24 KiB) to /root/.cache/huggingface/datasets/definite_pronoun_resolution/plain_text/1.0.0/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff... 
0%| | 0/2 [00:00<?, ?it/s]Traceback (most recent call last): File "definite_pronoun_resolution.py", line 37, in <module> main() File "definite_pronoun_resolution.py", line 34, in main train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/") File "/workspace/projects/CrossFit/tasks/fewshot_gym_dataset.py", line 79, in generate_k_shot_data dataset = self.load_dataset() File "definite_pronoun_resolution.py", line 28, in load_dataset return datasets.load_dataset('definite_pronoun_resolution') File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/definite_pronoun_resolution/35a1dfd4fba4afb8ba226cbbb65ac7cef0dd3cf9302d8f803740f05d2f16ceff/definite_pronoun_resolution.py", line 76, in _split_generators files = dl_manager.download_and_extract( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 216, in map_nested mapped = [ File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 217, in <listcomp> _single_map_nested((function, obj, types, None, True)) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 152, in _single_map_nested return function(data_struct) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.hlt.utdallas.edu/~vince/data/emnlp12/train.c.txt ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-4.15.0-161-generic-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyArrow version: 4.0.1
false
1,052,552,516
https://api.github.com/repos/huggingface/datasets/issues/3263
https://github.com/huggingface/datasets/issues/3263
3,263
FET DATA
closed
0
2021-11-13T05:46:06
2021-11-13T13:31:47
2021-11-13T13:31:47
FStell01
[ "dataset request" ]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,052,455,082
https://api.github.com/repos/huggingface/datasets/issues/3262
https://github.com/huggingface/datasets/pull/3262
3,262
asserts replaced with exception for image classification task, csv, json
closed
0
2021-11-12T22:34:59
2021-11-15T11:08:37
2021-11-15T11:08:37
manisnesan
[]
Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171
true
1,052,346,381
https://api.github.com/repos/huggingface/datasets/issues/3261
https://github.com/huggingface/datasets/issues/3261
3,261
Scifi_TV_Shows: Having trouble getting viewer to find appropriate files
closed
2
2021-11-12T19:25:19
2021-12-21T10:24:10
2021-12-21T10:24:10
lara-martin
[ "dataset-viewer" ]
## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*' **Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows) I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance! Am I the one who added this dataset? Yes
false
1,052,247,373
https://api.github.com/repos/huggingface/datasets/issues/3260
https://github.com/huggingface/datasets/pull/3260
3,260
Fix ConnectionError in Scielo dataset
closed
1
2021-11-12T18:02:37
2021-11-16T18:18:17
2021-11-16T17:55:22
mariosasko
[]
This PR: * allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint) * makes the Scielo dataset streamable Fixes #3255.
true
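The `response.url` detail mentioned in #3260 above can be illustrated with plain `requests` (the figshare URL comes from issue #3255 below); the point is that after following redirects, the final endpoint rather than the original URL is what should be inspected:

```python
import requests

url = "https://ndownloader.figstatic.com/files/14019287"
response = requests.head(url, allow_redirects=True, timeout=10)

# response.url points at the final endpoint after redirects (an S3 bucket here),
# which may answer HEAD with 403 even though a GET download works fine.
print(response.status_code, response.url)
```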
1,052,189,775
https://api.github.com/repos/huggingface/datasets/issues/3259
https://github.com/huggingface/datasets/pull/3259
3,259
Updating details of IRC disentanglement data
closed
1
2021-11-12T17:16:58
2021-11-18T17:19:33
2021-11-18T17:19:33
jkkummerfeld
[]
I was pleasantly surprised to find that someone had already added my dataset to the huggingface library, but some details were missing or incorrect. This PR fixes the documentation.
true
1,052,188,195
https://api.github.com/repos/huggingface/datasets/issues/3258
https://github.com/huggingface/datasets/issues/3258
3,258
Reload dataset that was already downloaded with `load_from_disk` from cloud storage
open
0
2021-11-12T17:14:59
2021-11-12T17:14:59
null
lhoestq
[ "enhancement" ]
`load_from_disk` downloads the dataset to a temporary directory without checking if the dataset has already been downloaded once. It would be nice to have some sort of caching for datasets downloaded this way. This could leverage the fingerprint of the dataset that was saved in the `state.json` file.
false
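Purely as an illustration of the enhancement requested in #3258 above, not an existing `datasets` feature: a caller-side cache could avoid re-downloading by keeping a local `save_to_disk` copy. The sketch below keys the cache on the remote path for simplicity; the issue itself suggests the stronger option of keying on the `_fingerprint` stored in `state.json`.

```python
import hashlib
import os

from datasets import load_from_disk


def cached_load_from_disk(dataset_path, fs=None, cache_root="~/.cache/remote_dataset_copies"):
    """Reuse a local copy of a remotely stored dataset instead of re-downloading it."""
    local_dir = os.path.join(
        os.path.expanduser(cache_root), hashlib.sha256(dataset_path.encode()).hexdigest()
    )
    if os.path.isdir(local_dir):
        return load_from_disk(local_dir)
    ds = load_from_disk(dataset_path, fs=fs)  # downloads to a temporary directory
    ds.save_to_disk(local_dir)
    return ds
```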
1,052,118,365
https://api.github.com/repos/huggingface/datasets/issues/3257
https://github.com/huggingface/datasets/issues/3257
3,257
Use f-strings for string formatting
closed
5
2021-11-12T16:02:15
2021-11-17T16:18:38
2021-11-17T16:18:38
mariosasko
[ "good first issue" ]
f-strings offer better readability/performance than `str.format` and `%`, so we should use them in all places in our codebase unless there is good reason to keep the older syntax. > **NOTE FOR CONTRIBUTORS**: To avoid large PRs and possible merge conflicts, do 1-3 modules per PR. Also, feel free to ignore the files located under `datasets/*`.
false
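A tiny before/after for the conversion requested in #3257 above, matching the patterns the linked PRs (#3267, #3277) replace:

```python
name, n_splits = "squad", 3

# Older formatting styles being phased out:
msg_format = "Loaded {} with {} splits".format(name, n_splits)
msg_percent = "Loaded %s with %d splits" % (name, n_splits)

# Equivalent f-string:
msg_fstring = f"Loaded {name} with {n_splits} splits"

assert msg_format == msg_percent == msg_fstring
```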
1,052,000,613
https://api.github.com/repos/huggingface/datasets/issues/3256
https://github.com/huggingface/datasets/pull/3256
3,256
asserts replaced by exception for text classification task with test.
closed
2
2021-11-12T14:05:36
2021-11-12T15:09:33
2021-11-12T14:59:32
manisnesan
[]
I have replaced only a single assert in text_classification.py along with a unit test to verify an exception is raised based on https://github.com/huggingface/datasets/issues/3171 . I would like to first understand the code contribution workflow. So keeping the change to a single file rather than making too many changes. Once this gets approved, I will look into the rest. Thanks.
true
1,051,783,129
https://api.github.com/repos/huggingface/datasets/issues/3255
https://github.com/huggingface/datasets/issues/3255
3,255
SciELO dataset ConnectionError
closed
0
2021-11-12T09:57:14
2021-11-16T17:55:22
2021-11-16T17:55:22
WojciechKusa
[ "bug" ]
## Describe the bug I get `ConnectionError` when I am trying to load the SciELO dataset. When I try the URL with `requests` I get: ``` >>> requests.head("https://ndownloader.figstatic.com/files/14019287") <Response [302]> ``` And as far as I understand redirections in `datasets` are not supported for downloads. https://github.com/huggingface/datasets/blob/807341d0db0728073ab605c812c67f927d148f38/datasets/scielo/scielo.py#L45 ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("scielo", "en-es") ``` ## Expected results Download SciELO dataset and load Dataset object ## Actual results ``` Downloading and preparing dataset scielo/en-es (download: 21.90 MiB, generated: 68.45 MiB, post-processed: Unknown size, total: 90.35 MiB) to /Users/test/.cache/huggingface/datasets/scielo/en-es/1.0.0/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e... Traceback (most recent call last): File "scielo.py", line 3, in <module> dataset = load_dataset("scielo", "en-es") File "../lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "../lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "../lib/python3.8/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/test/.cache/huggingface/modules/datasets_modules/datasets/scielo/7e05d55a20257efeb9925ff5de65bd4884fc6ddb6d765f1ea3e8860449d90e0e/scielo.py", line 77, in _split_generators data_dir = dl_manager.download_and_extract(_URLS[self.config.name]) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 196, in download downloaded_path_or_paths = map_nested( File "../lib/python3.8/site-packages/datasets/utils/py_utils.py", line 206, in map_nested return function(data_struct) File "../lib/python3.8/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 295, in cached_path output_path = get_from_cache( File "../lib/python3.8/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://ndownloader.figstatic.com/files/14019287 ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.12 - PyArrow version: 6.0.0
false
1,051,351,172
https://api.github.com/repos/huggingface/datasets/issues/3254
https://github.com/huggingface/datasets/pull/3254
3,254
Update xcopa dataset (fix checksum issues + add translated data)
closed
1
2021-11-11T20:51:33
2021-11-12T10:30:58
2021-11-12T10:30:57
mariosasko
[]
This PR updates the checksums (as reported [here](https://discuss.huggingface.co/t/how-to-load-dataset-locally/11601/2)) of the `xcopa` dataset. Additionally, it adds new configs that hold the translated data of the original set of configs. This data was not available at the time of adding this dataset to the lib.
true
1,051,308,972
https://api.github.com/repos/huggingface/datasets/issues/3253
https://github.com/huggingface/datasets/issues/3253
3,253
`GeneratorBasedBuilder` does not support `None` values
closed
1
2021-11-11T19:51:21
2021-12-09T14:26:58
2021-12-09T14:26:58
pavel-lexyr
[ "bug" ]
## Describe the bug `GeneratorBasedBuilder` does not support `None` values. ## Steps to reproduce the bug See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction. ## Expected results Dataset is initialized with a `None` value in the `value` column. ## Actual results ``` Traceback (most recent call last): File "main.py", line 3, in <module> datasets.load_dataset("./bad-data") File ".../datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File ".../datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File ".../datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File ".../datasets/builder.py", line 1103, in _prepare_split example = self.info.features.encode_example(record) File ".../datasets/features/features.py", line 1033, in encode_example return encode_nested_example(self, example) File ".../datasets/features/features.py", line 808, in encode_nested_example return { File ".../datasets/features/features.py", line 809, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File ".../datasets/features/features.py", line 855, in encode_nested_example return schema.encode_example(obj) File ".../datasets/features/features.py", line 299, in encode_example return float(value) TypeError: float() argument must be a string or a number, not 'NoneType' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 6.0.0
false
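The failure in #3253 above can be reproduced without a loading script by encoding an example through a float feature directly. A minimal sketch (the column name is made up), noting that newer `datasets` releases handle `None` here:

```python
from datasets import Features, Value

features = Features({"value": Value("float32")})

# On datasets 1.15.1 this raised:
#   TypeError: float() argument must be a string or a number, not 'NoneType'
# as in the traceback above; later releases keep the None as a null value.
print(features.encode_example({"value": None}))
```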
1,051,124,749
https://api.github.com/repos/huggingface/datasets/issues/3252
https://github.com/huggingface/datasets/pull/3252
3,252
Fix failing CER metric test in CI after update
closed
0
2021-11-11T15:57:16
2021-11-12T14:06:44
2021-11-12T14:06:43
mariosasko
[]
Fixes the [failing CER metric test](https://app.circleci.com/pipelines/github/huggingface/datasets/8644/workflows/79816553-fa2f-4756-b022-d5937f00bf7b/jobs/53298) in CI by adding support for `jiwer==2.3.0`, which was released yesterday. Also, I verified that all the tests in `metrics/cer/test_cer.py` pass after the change, so the results should be the same irrespective of the `jiwer` version.
true
1,050,541,348
https://api.github.com/repos/huggingface/datasets/issues/3250
https://github.com/huggingface/datasets/pull/3250
3,250
Add ETHICS dataset
closed
1
2021-11-11T03:45:34
2022-10-03T09:37:25
2022-10-03T09:37:25
ssss1029
[ "dataset contribution" ]
This PR adds the ETHICS dataset, including all 5 sub-datasets. From https://arxiv.org/abs/2008.02275
true
1,050,193,138
https://api.github.com/repos/huggingface/datasets/issues/3249
https://github.com/huggingface/datasets/pull/3249
3,249
Fix streaming for id_newspapers_2018
closed
0
2021-11-10T18:55:30
2021-11-12T14:01:32
2021-11-12T14:01:31
lhoestq
[]
To be compatible with streaming, this dataset must use `dl_manager.iter_archive`, since the data are in a .tgz file.
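For readers unfamiliar with the pattern, here is a minimal sketch (not the actual loading script; the URL, feature names and split setup are placeholders) of how `dl_manager.iter_archive` is typically used so that archive members can be iterated lazily in both regular and streaming mode:

```python
import datasets

_URL = "https://example.com/id_newspapers_2018.tgz"  # placeholder, not the real URL


class IdNewspapers2018Sketch(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"path": datasets.Value("string"), "text": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        # download() returns the archive path (or URL in streaming mode);
        # iter_archive yields (path_inside_archive, file_object) pairs lazily
        archive = dl_manager.download(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        for key, (path, f) in enumerate(files):
            yield key, {"path": path, "text": f.read().decode("utf-8")}
```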
true
1,050,171,082
https://api.github.com/repos/huggingface/datasets/issues/3248
https://github.com/huggingface/datasets/pull/3248
3,248
Stream from Google Drive and other hosts
closed
3
2021-11-10T18:32:32
2021-11-30T16:03:43
2021-11-12T17:18:11
lhoestq
[]
Streaming from Google Drive is a bit more challenging than the other hosts we've been supporting: - the download URL must be updated to add the confirm token obtained by a HEAD request - it requires using cookies to keep the connection alive - the URL doesn't give any information about whether the file is compressed or not Therefore I did two things: - I added a step for URL and headers/cookies preparation in the StreamingDownloadManager - I added automatic compression type inference by reading the [magic number](https://en.wikipedia.org/wiki/List_of_file_signatures) This allows us to do fancy things like ```python from datasets.utils.streaming_download_manager import StreamingDownloadManager, xopen, xjoin, xglob # zip file containing a train.tsv file url = "https://drive.google.com/uc?export=download&id=1k92sUfpHxKq8PXWRr7Y5aNHXwOCNUmqh" extracted = StreamingDownloadManager().download_and_extract(url) for inner_file in xglob(xjoin(extracted, "*.tsv")): with xopen(inner_file) as f: # streaming starts here for line in f: print(line) ``` This should make around 80 datasets streamable. It concerns those hosted on Google Drive but also any dataset for which the URL doesn't give any information about compression. Here is the full list: ``` amazon_polarity, ami, arabic_billion_words, ascent_kb, asset, big_patent, billsum, capes, cmrc2018, cnn_dailymail, code_x_glue_cc_code_completion_token, code_x_glue_cc_code_refinement, code_x_glue_cc_code_to_code_trans, code_x_glue_tt_text_to_text, conll2002, craigslist_bargains, dbpedia_14, docred, ehealth_kd, emo, euronews, germeval_14, gigaword, grail_qa, great_code, has_part, head_qa, health_fact, hope_edi, id_newspapers_2018, igbo_english_machine_translation, irc_disentangle, jfleg, jnlpba, journalists_questions, kor_ner, linnaeus, med_hop, mrqa, mt_eng_vietnamese, multi_news, norwegian_ner, offcombr, offenseval_dravidian, para_pat, peoples_daily_ner, pn_summary, poleval2019_mt, pubmed_qa, qangaroo, reddit_tifu, refresd, ro_sts_parallel, russian_super_glue, samsum, sberquad, scielo, search_qa, species_800, spider, squad_adversarial, tamilmixsentiment, tashkeela, ted_talks_iwslt, trec, turk, turkish_ner, twi_text_c3, universal_morphologies, web_of_science, weibo_ner, wiki_bio, wiki_hop, wiki_lingua, wiki_summary, wili_2018, wisesight1000, wnut_17, yahoo_answers_topics, yelp_review_full, yoruba_text_c3 ``` Some of them may not work if the host doesn't support HTTP range requests, for example. Fix https://github.com/huggingface/datasets/issues/2742 Fix https://github.com/huggingface/datasets/issues/3188
true
1,049,699,088
https://api.github.com/repos/huggingface/datasets/issues/3247
https://github.com/huggingface/datasets/issues/3247
3,247
Loading big json dataset raises pyarrow.lib.ArrowNotImplementedError
closed
3
2021-11-10T11:17:59
2022-04-10T14:05:57
2022-04-10T14:05:57
maxzirps
[ "bug" ]
## Describe the bug When trying to create a dataset from a json file with around 25MB, the following error is raised `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` Splitting the big file into smaller ones and then loading it with the `load_dataset` method did also not work. Creating a pandas dataframe from it and then loading it with `Dataset.from_pandas` works ## Steps to reproduce the bug ```python load_dataset("json", data_files="test.json") ``` test.json ~25MB ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ... ``` working.json ~160bytes ```json {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} {"a": {"b": 7, "c": 6}} {"a": {"c": 8, "b": 5}} ``` ## Expected results It should load the dataset from the json file without error. ## Actual results It raises Exception `pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct` ``` Traceback (most recent call last): File "/Users/m/workspace/xxx/project/main.py", line 60, in <module> dataset = load_dataset("json", data_files="result.json") File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/load.py", line 1627, in load_dataset builder_instance.download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/builder.py", line 1159, in _prepare_split writer.write_table(table) File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/datasets/arrow_writer.py", line 428, in write_table pa_table = pa.Table.from_arrays([pa_table[name] for name in self._schema.names], schema=self._schema) File "pyarrow/table.pxi", line 1685, in pyarrow.lib.Table.from_arrays File "pyarrow/table.pxi", line 630, in pyarrow.lib._sanitize_arrays File "pyarrow/array.pxi", line 338, in pyarrow.lib.asarray File "pyarrow/table.pxi", line 304, in pyarrow.lib.ChunkedArray.cast File "/opt/homebrew/Caskroom/miniforge/base/envs/xxx/lib/python3.9/site-packages/pyarrow/compute.py", line 309, in cast return call_function("cast", [arr], options) File "pyarrow/_compute.pyx", line 528, in pyarrow._compute.call_function File "pyarrow/_compute.pyx", line 327, in pyarrow._compute.Function.call File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 120, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Unsupported cast from struct<b: int64, c: int64> to struct using function cast_struct ``` ## Environment info - `datasets` version: 1.14.0 - Platform: macOS-12.0.1-arm64-arm-64bit - Python version: 3.9.7 - PyArrow version: 6.0.0
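For reference, the pandas workaround mentioned above can be sketched as follows; this only sidesteps the issue and is not a fix for the underlying cast error.

```python
import pandas as pd
from datasets import Dataset

# Read the JSON Lines file with pandas, then build the Dataset from the
# resulting dataframe, as described above. "test.json" refers to the file
# from the reproduction.
df = pd.read_json("test.json", lines=True)
ds = Dataset.from_pandas(df)
print(ds.features)
```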
false
1,049,662,746
https://api.github.com/repos/huggingface/datasets/issues/3246
https://github.com/huggingface/datasets/pull/3246
3,246
[tiny] fix typo in stream docs
closed
0
2021-11-10T10:40:02
2021-11-10T11:10:39
2021-11-10T11:10:39
verbiiyo
[]
null
true
1,048,726,062
https://api.github.com/repos/huggingface/datasets/issues/3245
https://github.com/huggingface/datasets/pull/3245
3,245
Fix load_from_disk temporary directory
closed
0
2021-11-09T15:15:15
2021-11-09T15:30:52
2021-11-09T15:30:51
lhoestq
[]
`load_from_disk` uses `tempfile.TemporaryDirectory()` instead of our `get_temporary_cache_files_directory()` function. This can cause the temporary directory to be deleted before the dataset object is garbage collected. In practice, it prevents anyone from using methods like `shuffle` on a dataset loaded this way, because it can't write the shuffled indices in a directory that doesn't exist anymore. In this PR I switch to using `get_temporary_cache_files_directory()` and I update the tests. cc @mariosasko since you worked on `get_temporary_cache_files_directory()`
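For intuition about the failure mode, here is a generic Python sketch (not `datasets` code) showing why a `tempfile.TemporaryDirectory` can disappear while paths into it are still being held:

```python
import os
import tempfile

def make_path():
    # The TemporaryDirectory object is local to this function, so its cleanup
    # runs as soon as the function returns and the object is garbage collected
    tmp = tempfile.TemporaryDirectory()
    path = os.path.join(tmp.name, "data.bin")
    with open(path, "wb") as f:
        f.write(b"\x00" * 16)
    return path

path = make_path()
print(os.path.exists(path))  # False on CPython: the directory is already gone
```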
true
1,048,675,741
https://api.github.com/repos/huggingface/datasets/issues/3244
https://github.com/huggingface/datasets/pull/3244
3,244
Fix filter method for batched=True
closed
0
2021-11-09T14:30:59
2021-11-09T15:52:58
2021-11-09T15:52:57
thomasw21
[]
null
true
1,048,630,754
https://api.github.com/repos/huggingface/datasets/issues/3243
https://github.com/huggingface/datasets/pull/3243
3,243
Remove redundant isort module placement
closed
0
2021-11-09T13:50:30
2021-11-12T14:02:45
2021-11-12T14:02:45
mariosasko
[]
`isort` can place modules by itself from [version 5.0.0](https://pycqa.github.io/isort/docs/upgrade_guides/5.0.0.html#module-placement-changes-known_third_party-known_first_party-default_section-etc) onwards, making the `known_first_party` and `known_third_party` fields in `setup.cfg` redundant (this is why our CI works, even though we haven't touched these options in a while).
true
1,048,527,232
https://api.github.com/repos/huggingface/datasets/issues/3242
https://github.com/huggingface/datasets/issues/3242
3,242
Adding ANERcorp-CAMeLLab dataset
open
1
2021-11-09T12:04:04
2021-11-09T12:41:15
null
vitalyshalumov
[ "dataset request" ]
null
false
1,048,461,852
https://api.github.com/repos/huggingface/datasets/issues/3241
https://github.com/huggingface/datasets/pull/3241
3,241
Swap descriptions of v1 and raw-v1 configs of WikiText dataset and fix metadata
closed
0
2021-11-09T10:54:15
2022-02-14T15:46:00
2021-11-09T13:49:28
albertvillanova
[]
Fix #3237, fix #795.
true
1,048,376,021
https://api.github.com/repos/huggingface/datasets/issues/3240
https://github.com/huggingface/datasets/issues/3240
3,240
Couldn't reach data file for disaster_response_messages
closed
1
2021-11-09T09:26:42
2021-12-14T14:38:29
2021-12-14T14:38:29
pandya6988
[ "dataset bug" ]
## Describe the bug The following command gives a ConnectionError. ## Steps to reproduce the bug ```python disaster = load_dataset('disaster_response_messages') ``` ## Error ``` ConnectionError: Couldn't reach https://datasets.appen.com/appen_datasets/disaster_response_data/disaster_response_messages_training.csv ``` ## Expected results It should load the dataset without an error. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Google Colab - Python version: 3.7 - PyArrow version:
false
1,048,360,232
https://api.github.com/repos/huggingface/datasets/issues/3239
https://github.com/huggingface/datasets/issues/3239
3,239
Inconsistent performance of the "arabic_billion_words" dataset
open
0
2021-11-09T09:11:00
2021-11-09T09:11:00
null
vitalyshalumov
[ "bug" ]
## Describe the bug When downloaded from macine 1 the dataset is downloaded and parsed correctly. When downloaded from machine two (which has a different cache directory), the following script: import datasets from datasets import load_dataset raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train",download_mode='force_redownload') gives the following error: **Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17... Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 348M/348M [00:24<00:00, 14.0MB/s] Traceback (most recent call last): File ".../why_mismatch.py", line 3, in <module> File "/opt/conda/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/opt/conda/lib/python3.8/site-packages/datasets/builder.py", line 709, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/opt/conda/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words')}]** Note that the package versions of datasets (1.15.1) and rarfile (4.0) are identical. ## Steps to reproduce the bug import datasets from datasets import load_dataset raw_dataset_elkhair_1 = load_dataset('arabic_billion_words', 'Alittihad', split="train",download_mode='force_redownload') # Sample code to reproduce the bug ## Expected results Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 1.49 GiB, post-processed: Unknown size, total: 1.82 GiB) to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17... Downloading: 100%|███████████████████████████| 348M/348M [00:22<00:00, 15.8MB/s] Dataset arabic_billion_words downloaded and prepared to .../.cache/huggingface/datasets/arabic_billion_words/Alittihad/1.1.0/687a1f963284c8a766558661375ea8f7ab3fa3633f8cd9c9f42a53ebe83bfe17. Subsequent calls will reuse this data. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> Machine 1: - `datasets` version: 1.15.1 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1 Machine 2 (the bugged one) - `datasets` version: 1.15.1 - Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyArrow version: 6.0.0
false
1,048,226,086
https://api.github.com/repos/huggingface/datasets/issues/3238
https://github.com/huggingface/datasets/issues/3238
3,238
Reuters21578 Couldn't reach
closed
2
2021-11-09T06:08:56
2021-11-11T00:02:57
2021-11-11T00:02:57
TingNLP
[ "dataset bug" ]
## Adding a Dataset - **Name:** *Reuters21578* - **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz* - **Data:** *https://huggingface.co/datasets/reuters21578* `from datasets import load_dataset` `dataset = load_dataset("reuters21578", 'ModLewis')` ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz And when I try to request the link as follows: `import requests` `requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')` I get: SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),)) This problem is similar to #575. What should I do?
false
1,048,165,525
https://api.github.com/repos/huggingface/datasets/issues/3237
https://github.com/huggingface/datasets/issues/3237
3,237
wikitext description wrong
closed
2
2021-11-09T04:06:52
2022-02-14T15:45:11
2021-11-09T13:49:28
hongyuanmei
[ "bug" ]
## Describe the bug Descriptions of the wikitext datasets are wrong. ## Steps to reproduce the bug Please see: https://github.com/huggingface/datasets/blob/f6dcafce996f39b6a4bbe3a9833287346f4a4b68/datasets/wikitext/wikitext.py#L50 ## Expected results The descriptions for raw-v1 and v1 should be switched.
false
1,048,026,358
https://api.github.com/repos/huggingface/datasets/issues/3236
https://github.com/huggingface/datasets/issues/3236
3,236
Loading of datasets changed in #3110 returns no examples
closed
7
2021-11-08T23:29:46
2021-11-09T16:46:05
2021-11-09T16:45:47
eladsegal
[ "bug" ]
## Describe the bug Loading any of the datasets changed in https://github.com/huggingface/datasets/pull/3110 returns no examples: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 0 }) }) ``` ## Steps to reproduce the bug Load any of the datasets that were changed in https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper") # The problem only started with the commit of #3110 load_dataset("qasper", revision="b6469baa22c174b3906c631802a7016fedea6780") ``` ## Expected results ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 888 }) validation: Dataset({ features: ['id', 'title', 'abstract', 'full_text', 'qas'], num_rows: 281 }) }) ``` These results can be obtained by specifying the revision of the commit before https://github.com/huggingface/datasets/pull/3110: ```python from datasets import load_dataset load_dataset("qasper", revision="acfe2abda1ca79f0ce5c1896aa83b4b78af76b7d") ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.2.dev0 (master) - Python version: 3.8.10 - PyArrow version: 3.0.0
false
1,047,808,263
https://api.github.com/repos/huggingface/datasets/issues/3235
https://github.com/huggingface/datasets/pull/3235
3,235
Add options to use updated bleurt checkpoints
closed
0
2021-11-08T18:53:54
2021-11-12T14:05:28
2021-11-12T14:05:28
jaehlee
[]
Adds options to use the newer recommended checkpoint (as of 2021/10/8), bleurt-20, and its distilled versions. Updated checkpoints are described in https://github.com/google-research/bleurt/blob/master/checkpoints.md#the-recommended-checkpoint-bleurt-20 This change won't affect the default behavior of metrics/bleurt. It only adds the option to load the newer checkpoints as `datasets.load_metric('bleurt', 'bleurt-20')` `bleurt-20` generates scores roughly between 0 and 1, which wasn't the case for the previous checkpoints.
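For reference, usage with the new checkpoint would look something like the sketch below, assuming the metric's usual `predictions`/`references` interface and its `scores` output field:

```python
from datasets import load_metric

# Load the updated BLEURT-20 checkpoint instead of the default one
bleurt = load_metric("bleurt", "bleurt-20")

predictions = ["hello there", "general kenobi"]
references = ["hello there", "general kenobi"]

results = bleurt.compute(predictions=predictions, references=references)
# BLEURT-20 scores fall roughly in [0, 1]; higher means closer to the reference
print(results["scores"])
```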
true
1,047,634,236
https://api.github.com/repos/huggingface/datasets/issues/3234
https://github.com/huggingface/datasets/pull/3234
3,234
Avoid PyArrow type optimization if it fails
closed
5
2021-11-08T16:10:27
2021-11-10T12:04:29
2021-11-10T12:04:28
mariosasko
[]
Adds a new variable, `DISABLE_PYARROW_TYPES_OPTIMIZATION`, to `config.py` for easier control of the Arrow type optimization. Fix #2206
true
1,047,474,931
https://api.github.com/repos/huggingface/datasets/issues/3233
https://github.com/huggingface/datasets/pull/3233
3,233
Improve repository structure docs
closed
0
2021-11-08T13:51:35
2021-11-09T10:02:18
2021-11-09T10:02:17
lhoestq
[]
Continuation of the documentation started in https://github.com/huggingface/datasets/pull/3221, taking into account @stevhliu 's comments
true
1,047,361,573
https://api.github.com/repos/huggingface/datasets/issues/3232
https://github.com/huggingface/datasets/issues/3232
3,232
The Xsum dataset cannot be downloaded.
closed
4
2021-11-08T11:58:54
2021-11-09T15:07:16
2021-11-09T15:07:16
FYYFU
[ "bug" ]
## Describe the bug The download link of the Xsum dataset provided in the repository is [Link](http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz). It does not seem to be reachable. ## Steps to reproduce the bug ```python load_dataset('xsum') ``` ## Actual results ```python raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://bollin.inf.ed.ac.uk/public/direct/XSUM-EMNLP18-Summary-Data-Original.tar.gz ```
false
1,047,170,906
https://api.github.com/repos/huggingface/datasets/issues/3231
https://github.com/huggingface/datasets/pull/3231
3,231
Group tests in multiprocessing workers by test file
closed
0
2021-11-08T08:46:03
2021-11-08T13:19:18
2021-11-08T08:59:44
albertvillanova
[]
By grouping tests by test file, we make sure that all the tests in `test_load.py` are sent to the same worker. Therefore, the fixture `hf_token` will be called only once (and from the same worker). Related to: #3200. Fix #3219.
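For context, pytest-xdist already ships a distribution mode with this behavior; whether the PR uses this exact flag or a custom grouping hook is an assumption, but the effect described above can be reproduced with:

```python
import pytest

# Equivalent to running: pytest -n 2 --dist loadfile tests/test_load.py
# "--dist loadfile" (from pytest-xdist) keeps all tests of a given file on the
# same worker, so a session fixture like hf_token is requested by one worker only
pytest.main(["-n", "2", "--dist", "loadfile", "tests/test_load.py"])
```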
true
1,047,135,583
https://api.github.com/repos/huggingface/datasets/issues/3230
https://github.com/huggingface/datasets/pull/3230
3,230
Add full tagset to conll2003 README
closed
1
2021-11-08T08:06:04
2021-11-09T10:48:38
2021-11-09T10:40:58
BramVanroy
[]
Even though it is possible to manually get the tagset list with ```python dset.features[field_name].feature.names ``` I think it is useful to have an overview of the tagset used on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately obvious what they are for a given sample. Adding a label-int mapping should make it easier for visitors to get a grasp of what they mean. From a user-experience perspective, I would urge that the full tagsets always be available in the READMEs, but I understand that that would probably take a lot of work. Perhaps it can be automated (see the sketch below)? closes #3189
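Regarding the automation question, here is a small sketch that builds a README-ready label table from the encoded feature; the column name below is conll2003's NER column, and the POS and chunk columns work the same way:

```python
from datasets import load_dataset

dset = load_dataset("conll2003", split="train")

# For ClassLabel-encoded sequence columns, .feature.names holds the tagset
names = dset.features["ner_tags"].feature.names

# Print a markdown table that could be pasted into the dataset card
print("| id | tag |")
print("|---:|-----|")
for i, name in enumerate(names):
    print(f"| {i} | {name} |")
```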
true
1,046,706,425
https://api.github.com/repos/huggingface/datasets/issues/3229
https://github.com/huggingface/datasets/pull/3229
3,229
Fix URL in CITATION file
closed
0
2021-11-07T10:04:35
2021-11-07T10:04:46
2021-11-07T10:04:45
albertvillanova
[]
Currently the BibTeX citation parsed from the CITATION file has wrong URL (it shows the repo URL instead of the proceedings paper URL): ``` @inproceedings{Lhoest_Datasets_A_Community_2021, author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaško, Mario and Jernite, Yacine and Thakur, Abhishek and Tunstall, Lewis and Patil, Suraj and Drame, Mariama and Chaumond, Julien and Plu, Julien and Davison, Joe and Brandeis, Simon and Sanh, Victor and Le Scao, Teven and Canwen Xu, Kevin and Patry, Nicolas and Liu, Steven and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Raw, Nathan and Lesage, Sylvain and Lozhkov, Anton and Carrigan, Matthew and Matussière, Théo and von Werra, Leandro and Debut, Lysandre and Bekman, Stas and Delangue, Clément}, booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations}, month = {11}, pages = {175--184}, publisher = {Association for Computational Linguistics}, title = {{Datasets: A Community Library for Natural Language Processing}}, url = {https://github.com/huggingface/datasets}, year = {2021} } ```
true
1,046,702,143
https://api.github.com/repos/huggingface/datasets/issues/3228
https://github.com/huggingface/datasets/pull/3228
3,228
Add CITATION file
closed
0
2021-11-07T09:40:19
2021-11-07T09:51:47
2021-11-07T09:51:46
albertvillanova
[]
Add CITATION file.
true
1,046,667,845
https://api.github.com/repos/huggingface/datasets/issues/3227
https://github.com/huggingface/datasets/issues/3227
3,227
Error in `Json(datasets.ArrowBasedBuilder)` class
closed
3
2021-11-07T05:50:32
2021-11-09T19:09:15
2021-11-09T19:09:15
JunShern
[ "bug" ]
## Describe the bug When a json file contains a `text` field that is larger than the block_size, the JSON dataset builder fails. ## Steps to reproduce the bug Create a folder that contains the following: ``` . ├── testdata │   └── mydata.json └── test.py ``` Please download [this file](https://github.com/huggingface/datasets/files/7491797/mydata.txt) as `mydata.json`. (The error does not occur in JSON files with shorter text, but it is reproducible when the text is long as in the file I provide) :exclamation: :exclamation: GitHub doesn't allow me to upload JSON so this file is a TXT, and you should rename it to `.json`! `test.py` simply contains: ```python from datasets import load_dataset my_dataset = load_dataset("testdata") ``` To reproduce the error, simply run ``` python test.py ``` ## Expected results The data should load correctly without error. ## Actual results The dataset builder fails with: ``` Using custom data configuration testdata-d490389b8ab4fd82 Downloading and preparing dataset json/testdata to /home/junshern.chan/.cache/huggingface/datasets/json/testdata-d490389b8ab4fd82/0.0.0/3333a8af0db9764dfcff43a42ff26228f0f2e267f0d8a0a294452d188beadb34... 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 2264.74it/s] 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 447.01it/s] Failed to read file '/home/junshern.chan/hf-json-bug/testdata/mydata.json' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Missing a name for object member. in row 0 Traceback (most recent call last): File "test.py", line 28, in <module> my_dataset = load_dataset("testdata") File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/load.py", line 1632, in load_dataset builder_instance.download_and_prepare( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/builder.py", line 1156, in _prepare_split for key, table in utils.tqdm( File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/tqdm/std.py", line 1168, in __iter__ for obj in iterable: File "/home/junshern.chan/.casio/miniconda/envs/hf-json-bug/lib/python3.8/site-packages/datasets/packaged_modules/json/json.py", line 146, in _generate_tables raise ValueError( ValueError: Not able to read records in the JSON file at /home/junshern.chan/hf-json-bug/testdata/mydata.json. You should probably indicate the field of the JSON file containing your records. This JSON file contain the following fields: ['text']. Select the correct one and provide it as `field='XXX'` to the dataset loading method. ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyArrow version: 6.0.0
false
1,046,584,518
https://api.github.com/repos/huggingface/datasets/issues/3226
https://github.com/huggingface/datasets/pull/3226
3,226
Fix paper BibTeX citation with proceedings reference
closed
0
2021-11-06T19:52:59
2021-11-07T07:05:28
2021-11-07T07:05:27
albertvillanova
[]
Fix paper BibTeX citation with proceedings reference.
true
1,046,530,493
https://api.github.com/repos/huggingface/datasets/issues/3225
https://github.com/huggingface/datasets/pull/3225
3,225
Update tatoeba to v2021-07-22
closed
4
2021-11-06T15:14:31
2021-11-12T11:13:13
2021-11-12T11:13:13
KoichiYasuoka
[]
Tatoeba's latest version is v2021-07-22
true
1,046,495,831
https://api.github.com/repos/huggingface/datasets/issues/3224
https://github.com/huggingface/datasets/pull/3224
3,224
User-pickling with dynamic sub-classing
closed
18
2021-11-06T12:08:24
2025-03-26T19:45:37
2025-03-26T19:45:36
BramVanroy
[]
This is a continuation of the now closed PR in https://github.com/huggingface/datasets/pull/3206. The discussion there has shaped a new approach to do this. In this PR, behavior of `pklregister` and `Pickler` is extended. Earlier, users were already able to register custom pickle functions. That is useful if they have objects that are not easily picklable with default methods. When one registers a custom function to a type, an object of that type will be pickled with the given function by `Pickler` which looks up the type in its `dispatch` table. The downside of this method, and of `pickle` in general, is that it is limited to direct type-matching and does not allow sub-classes. In many, default, cases that is not an issue. But when you are using external libraries where classes (e.g. parsers, models) are sub-classed this is not ideal. ```python from datasets.fingerprint import Hasher from datasets.utils.py_utils import pklregister class BaseParser: pass class EnglishParser(BaseParser): pass @pklregister(BaseParser) def custom_pkl_func(pickler, obj): print(f"Called the custom pickle function for type {type(obj)}!") # do something with the obj and ultimately save with the pickler base = BaseParser() en = EnglishParser() # Hasher.hash uses the Pickler behind the scenes # `custom_pkl_func` called for base Hasher.hash(base) # `custom_pkl_func` not called for en :-( Hasher.hash(en) ``` In the example above we'd want to sub-class `EnglishParser` to be handled in the same way as its super-class `BaseParser`. This PR solves that by allowing for a keyword-argument `allow_subclasses` in `pklregister` (default: `False`). ```python @pklregister(BaseParser, allow_subclasses=True) ``` When this option is enabled, we not only save the function in `Pickler.dispatch` but also save it in a custom table `Pickler.subclass_dispatch` **which allows us to dynamically add sub-classes of that class to the real dispatch table**. Then, if we want to pickle an object `obj` with `Pickler.dump()` (which ultimately will call `Pickler.save()`) we _first_ check whether any of the object's super-classes exist in `Pickler.sublcass_dispatch` and get the related custom pickle function. If we find one, we add the type of `obj` alongside the function to `Pickler.dispatch`. All of this happens at the start of the call to `Pickler.save()`. _Only then_ dill.Pickler's `save` will be called, which in turn will call `pickle._Pickler.save` which handles everything. Here, the `Pickler.dispatch` table will be used to look up custom pickler functions - and it now also includes the function for `obj`, which was copied from its super-class, which we added at the very start of our custom `Pickler.save()`. For edge cases and, especially, for testing, a contextmanager class `TempPickleRegistry` is included that resets the pickle registry on exit to its previous state. ```python with TempPickleRegistry(): @pklregister(MyObjClass) def pickle_registry_test_false(pickler, obj): pickler.save(obj.fancy_method()) some_obj = MyObjClass() dumps(some_obj) # `MyObjClass` is in Pickler.dispatch # ... `MyObjClass` is _not_ in Pickler.dispatch anymore ``` closes https://github.com/huggingface/datasets/issues/3178 To Do ==== - [x] Write tests - [ ] Write documentation/examples?
true
1,046,445,507
https://api.github.com/repos/huggingface/datasets/issues/3223
https://github.com/huggingface/datasets/pull/3223
3,223
Update BibTeX entry
closed
0
2021-11-06T06:41:52
2021-11-06T07:06:38
2021-11-06T07:06:38
albertvillanova
[]
Update BibTeX entry.
true
1,046,299,725
https://api.github.com/repos/huggingface/datasets/issues/3222
https://github.com/huggingface/datasets/pull/3222
3,222
Add docs for audio processing
closed
2
2021-11-05T23:07:59
2021-11-24T16:32:08
2021-11-24T15:35:52
stevhliu
[ "documentation" ]
This PR adds documentation for the `Audio` feature. It describes: - The difference between loading `path` and `audio`, as well as use-cases/best practices for each of them. - Resampling audio files with `cast_column`, and then calling `ds[0]["audio"]` to automatically decode and resample to the desired sampling rate. - Resampling with `map`. Preview [here](https://52969-250213286-gh.circle-artifacts.com/0/docs/_build/html/audio_process.html), let me know if I'm missing anything!
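For a sense of what the new docs cover, the resampling workflow described above looks roughly like the sketch below; the dataset name and config are placeholders, not taken from the docs themselves.

```python
from datasets import load_dataset, Audio

ds = load_dataset("common_voice", "tr", split="train")  # placeholder dataset/config

# Re-declare the column with a different sampling rate; decoding and
# resampling then happen when an example is accessed
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

sample = ds[0]["audio"]
print(sample["sampling_rate"])  # 16000
print(sample["array"][:10])     # decoded, resampled waveform
```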
true
1,045,890,512
https://api.github.com/repos/huggingface/datasets/issues/3221
https://github.com/huggingface/datasets/pull/3221
3,221
Resolve data_files by split name
closed
4
2021-11-05T14:07:35
2021-11-08T13:52:20
2021-11-05T17:49:58
lhoestq
[]
As discussed in https://github.com/huggingface/datasets/issues/3027 we should automatically infer what file is supposed to go to what split automatically, based on filenames. I added the support for different kinds of patterns, for both dataset repositories and local directories: ``` Input structure: my_dataset_repository/ ├── README.md └── dataset.csv Output patterns: {"train": ["*"]} ``` ``` Input structure: my_dataset_repository/ ├── README.md ├── train.csv └── test.csv my_dataset_repository/ ├── README.md └── data/ ├── train.csv └── test.csv my_dataset_repository/ ├── README.md ├── train_0.csv ├── train_1.csv ├── train_2.csv ├── train_3.csv ├── test_0.csv └── test_1.csv Output patterns: {"train": ["*train*"], "test": ["*test*"]} ``` ``` Input structure: my_dataset_repository/ ├── README.md └── data/ ├── train/ │ ├── shard_0.csv │ ├── shard_1.csv │ ├── shard_2.csv │ └── shard_3.csv └── test/ ├── shard_0.csv └── shard_1.csv Output patterns: {"train": ["*train*/*", "*train*/**/*"], "test": ["*test*/*", "*test*/**/*"]} ``` and also this pattern that allows to have custom split names, and that is the structure used by #3098 for `push_to_hub` (cc @LysandreJik ): ``` Input structure: my_dataset_repository/ ├── README.md └── data/ ├── train-00000-of-00003.csv ├── train-00001-of-00003.csv ├── train-00002-of-00003.csv ├── test-00000-of-00001.csv ├── random-00000-of-00003.csv ├── random-00001-of-00003.csv └── random-00002-of-00003.csv Output patterns: { "train": ["data/train-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], "test": ["data/test-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], "random": ["data/random-[0-9][0-9][0-9][0-9][0-9]-of-[0-9][0-9][0-9][0-9][0-9].*"], } ``` You can check the documentation about structuring your repository [here](https://52640-250213286-gh.circle-artifacts.com/0/docs/_build/html/repository_structure.html). cc @stevhliu Fix https://github.com/huggingface/datasets/issues/3027 Fix https://github.com/huggingface/datasets/issues/3212 In the future we can also add support for dataset configurations.
true
1,045,549,029
https://api.github.com/repos/huggingface/datasets/issues/3220
https://github.com/huggingface/datasets/issues/3220
3,220
Add documentation about dataset viewer feature
open
1
2021-11-05T08:11:19
2023-09-25T11:48:38
null
albertvillanova
[ "enhancement", "dataset-viewer" ]
Add to the docs more details about the dataset viewer feature in the Hub. CC: @julien-c
false
1,045,095,000
https://api.github.com/repos/huggingface/datasets/issues/3219
https://github.com/huggingface/datasets/issues/3219
3,219
Eventual Invalid Token Error at setup of private datasets
closed
0
2021-11-04T18:50:45
2021-11-08T13:23:06
2021-11-08T08:59:43
albertvillanova
[ "bug" ]
## Describe the bug From time to time, there appear Invalid Token errors with private datasets: - https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534 ``` ____________ ERROR at setup of test_load_streaming_private_dataset _____________ ValueError: Invalid token passed! ____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____ ValueError: Invalid token passed! =========================== short test summary info ============================ ERROR tests/test_load.py::test_load_streaming_private_dataset - ValueError: I... ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data ``` - https://app.circleci.com/pipelines/github/huggingface/datasets/8557/workflows/a8383181-ba6d-4487-9d0a-f750b6dcb936/jobs/52763 ``` ____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____ [gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6 hf_api = <huggingface_hub.hf_api.HfApi object at 0x7f4899bab908> hf_token = 'vgNbyuaLNEBuGbgCEtSBCOcPjZnngJufHkTaZvHwkXKGkHpjBPwmLQuJVXRxBuaRzNlGjlMpYRPbthfHPFWXaaEDTLiqTTecYENxukRYVAAdpeApIUPxcgsowadkTkPj' zip_csv_path = PosixPath('/tmp/pytest-of-circleci/pytest-0/popen-gw1/data16/dataset.csv.zip') @pytest.fixture(scope="session") def hf_private_dataset_repo_zipped_txt_data_(hf_api: HfApi, hf_token, zip_csv_path): repo_name = "repo_zipped_txt_data-{}".format(int(time.time() * 10e3)) hf_api.create_repo(token=hf_token, name=repo_name, repo_type="dataset", private=True) repo_id = f"{USER}/{repo_name}" hf_api.upload_file( token=hf_token, path_or_fileobj=str(zip_csv_path), path_in_repo="data.zip", repo_id=repo_id, > repo_type="dataset", ) tests/hub_fixtures.py:68: ... ValueError: Invalid token passed! =========================== short test summary info ============================ ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data ```
false
1,045,032,313
https://api.github.com/repos/huggingface/datasets/issues/3218
https://github.com/huggingface/datasets/pull/3218
3,218
Fix code quality in riddle_sense dataset
closed
0
2021-11-04T17:43:20
2021-11-04T17:50:03
2021-11-04T17:50:02
albertvillanova
[]
Fix trailing whitespace. Fix #3217.
true
1,045,029,710
https://api.github.com/repos/huggingface/datasets/issues/3217
https://github.com/huggingface/datasets/issues/3217
3,217
Fix code quality bug in riddle_sense dataset
closed
1
2021-11-04T17:40:32
2021-11-04T17:50:02
2021-11-04T17:50:02
albertvillanova
[ "bug" ]
## Describe the bug ``` datasets/riddle_sense/riddle_sense.py:36:21: W291 trailing whitespace ```
false
1,045,027,733
https://api.github.com/repos/huggingface/datasets/issues/3216
https://github.com/huggingface/datasets/pull/3216
3,216
Pin version exclusion for tensorflow incompatible with keras
closed
0
2021-11-04T17:38:06
2021-11-05T10:57:38
2021-11-05T10:57:37
albertvillanova
[]
Once `tensorflow` version 2.6.2 is released: - https://github.com/tensorflow/tensorflow/commit/c1867f3bfdd1042f694df7a9870be51ba80543cb - https://pypi.org/project/tensorflow/2.6.2/ with the patch: - tensorflow/tensorflow#52927 we can remove the temporary fix we introduced in: - #3208 Fix #3209.
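For readers unfamiliar with the mechanism, a version-exclusion pin looks roughly like the sketch below; the package bounds shown are illustrative only and are not taken from this PR.

```python
# Hypothetical excerpt of a setup.py dependency list; the specifiers below
# are illustrative, not the ones used by the repository.
TESTS_REQUIRE = [
    # accept recent TensorFlow releases but exclude the ones that pull in an
    # incompatible keras; the exclusion can be dropped once a patched release is out
    "tensorflow>=2.3,!=2.6.0,!=2.6.1",
]
```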
true
1,045,011,207
https://api.github.com/repos/huggingface/datasets/issues/3215
https://github.com/huggingface/datasets/pull/3215
3,215
Small updates to to_tf_dataset documentation
closed
1
2021-11-04T17:22:01
2021-11-04T18:55:38
2021-11-04T18:55:37
Rocketknight1
[]
I added a little more description about `to_tf_dataset` compared to just setting the format
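For readers who haven't used the method, here is a minimal sketch of the difference being documented, assuming the `to_tf_dataset` signature from this release (`columns`, `label_cols`, `batch_size`, `shuffle`, `collate_fn`); the toy data is made up:

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict(
    {"input_values": [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]], "label": [0, 1, 0]}
)

def collate_fn(examples):
    # examples is a list of dicts; stack them into dense numpy arrays
    return {
        "input_values": np.array([ex["input_values"] for ex in examples], dtype=np.float32),
        "label": np.array([ex["label"] for ex in examples], dtype=np.int64),
    }

# Instead of ds.set_format("tf"), build a batched tf.data.Dataset directly
tf_ds = ds.to_tf_dataset(
    columns=["input_values"],
    label_cols=["label"],
    batch_size=2,
    shuffle=True,
    collate_fn=collate_fn,
)

print(tf_ds.element_spec)
```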
true
1,044,924,050
https://api.github.com/repos/huggingface/datasets/issues/3214
https://github.com/huggingface/datasets/issues/3214
3,214
Add ACAV100M Dataset
open
0
2021-11-04T15:59:58
2021-12-08T12:00:30
null
nateraw
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** *ACAV100M* - **Description:** *contains 100 million videos with high audio-visual correspondence, ideal for self-supervised video representation learning.* - **Paper:** *https://arxiv.org/abs/2101.10803* - **Data:** *https://github.com/sangho-vision/acav100m* - **Motivation:** *The largest dataset (to date) for audio-visual learning.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,044,745,313
https://api.github.com/repos/huggingface/datasets/issues/3213
https://github.com/huggingface/datasets/pull/3213
3,213
Fix tuple_ie download url
closed
0
2021-11-04T13:09:07
2021-11-05T14:16:06
2021-11-05T14:16:05
mariosasko
[]
Fix #3204
true
1,044,640,967
https://api.github.com/repos/huggingface/datasets/issues/3212
https://github.com/huggingface/datasets/issues/3212
3,212
Sort files before loading
closed
1
2021-11-04T11:08:31
2021-11-05T17:49:58
2021-11-05T17:49:58
lvwerra
[ "enhancement" ]
When loading a dataset that consists of several files (e.g. `my_data/data_001.json`, `my_data/data_002.json` etc.) they are not loaded in order when using `load_dataset("my_data")`. This could lead to counter-intuitive results if, for example, the data files are sorted by date or similar since they would appear in different order in the `Dataset`. The straightforward solution is to sort the list of files alphabetically before loading them. cc @lhoestq
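As a stop-gap on the user side, and as an illustration of the proposed fix, the files can be passed explicitly in sorted order; the paths below are placeholders:

```python
import glob
from datasets import load_dataset

# Pass the shards explicitly, in sorted order, instead of relying on
# whatever order the filesystem returns them in
files = sorted(glob.glob("my_data/data_*.json"))  # my_data/ is a placeholder path
ds = load_dataset("json", data_files={"train": files}, split="train")
```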
false
1,044,617,913
https://api.github.com/repos/huggingface/datasets/issues/3211
https://github.com/huggingface/datasets/pull/3211
3,211
Fix disable_nullable default value to False
closed
0
2021-11-04T10:52:06
2021-11-04T11:08:21
2021-11-04T11:08:20
lhoestq
[]
Currently the `disable_nullable` parameter is not consistent across all dataset transforms. For example it is `False` in `map` but `True` in `flatten_indices`. This creates unexpected behaviors like this ```python from datasets import Dataset, concatenate_datasets d1 = Dataset.from_dict({"a": [0, 1, 2, 3]}) d2 = d1.filter(lambda x: x["a"] < 2).flatten_indices() d1.data.schema == d2.data.schema # False ``` This can cause issues when concatenating datasets for example. For consistency I set `disable_nullable` to `False` in `flatten_indices` and I fixed some docstrings cc @SBrandeis
true
1,044,611,471
https://api.github.com/repos/huggingface/datasets/issues/3210
https://github.com/huggingface/datasets/issues/3210
3,210
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py
closed
3
2021-11-04T10:47:26
2022-03-30T08:26:35
2022-03-30T08:26:35
xiuzhilu
[ "dataset bug" ]
When I run `python examples/pytorch/translation/run_translation.py --model_name_or_path examples/pytorch/translation/opus-mt-en-ro --do_train --do_eval --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config_name ro-en --output_dir /tmp/tst-translation --per_device_train_batch_size=4 --per_device_eval_batch_size=4 --overwrite_output_dir --predict_with_generate` to fine-tune a translation model with Hugging Face, I get the error "ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py". However, I can open https://raw.githubusercontent.com/huggingface/datasets/1.15.1/datasets/wmt16/wmt16.py in a web browser. What should I do to solve this issue?
false
1,044,505,771
https://api.github.com/repos/huggingface/datasets/issues/3209
https://github.com/huggingface/datasets/issues/3209
3,209
Unpin keras once TF fixes its release
closed
0
2021-11-04T09:15:32
2021-11-05T10:57:37
2021-11-05T10:57:37
albertvillanova
[]
Related to: - #3208
false
1,044,504,093
https://api.github.com/repos/huggingface/datasets/issues/3208
https://github.com/huggingface/datasets/pull/3208
3,208
Pin keras version until TF fixes its release
closed
0
2021-11-04T09:13:32
2021-11-04T09:30:55
2021-11-04T09:30:54
albertvillanova
[]
Fix #3207.
true
1,044,496,389
https://api.github.com/repos/huggingface/datasets/issues/3207
https://github.com/huggingface/datasets/issues/3207
3,207
CI error: Another metric with the same name already exists in Keras 2.7.0
closed
0
2021-11-04T09:04:11
2021-11-04T09:30:54
2021-11-04T09:30:54
albertvillanova
[ "bug" ]
## Describe the bug Release of TensorFlow 2.7.0 contains an incompatibility with Keras. See: - keras-team/keras#15579 This breaks our CI test suite: https://app.circleci.com/pipelines/github/huggingface/datasets/8493/workflows/055c7ae2-43bc-49b4-9f11-8fc71f35a25c/jobs/52363
false
1,044,216,270
https://api.github.com/repos/huggingface/datasets/issues/3206
https://github.com/huggingface/datasets/pull/3206
3,206
[WIP] Allow user-defined hash functions via a registry
closed
13
2021-11-03T23:25:42
2021-11-05T12:38:11
2021-11-05T12:38:04
BramVanroy
[]
Inspired by the discussion on hashing in https://github.com/huggingface/datasets/issues/3178#issuecomment-959016329, @lhoestq suggested that it would be neat to allow users more control over the hashing process. Specifically, it would be great if users can specify specific hashing functions depending on the **class** of the object. As an example, we found in the linked topic that loaded spaCy models (`Language` objects) have different hashes when `dump`'d, but their byte representation with `Language.to_bytes()` _is_ deterministic. It would therefore be great if we could specify that for `Language` objects, the hasher should hash the objects `to_bytes()` return value instead of the object itself. This PR adds a new, but tiny, dependency to manage the registry, namely [`catalogue`](https://github.com/explosion/catalogue). Two files have been changed (apart from the added dependency in `setup.py`) and one file has been added. **utils.registry** (added) This file defines our custom Registry and builds a registry called "hashers". A Registry is basically dictionary from names (str) to functions. A function can be added to the registry by a decorator, e.g. ```python @hashers.register(spacy.Language) def hash_spacy_language(nlp): return Hasher.hash(nlp.to_bytes()) ``` You'll notice that `spacy.Language` is not a string, even though the registry holds a str->func mapping. To accomplish this with classes in a dynamic way, catalogue.Registry needed to be subclassed and modified as `DatasetsRegistry`. All methods that use a name as an input are now modified so that classes are deterministically converted in strings in such a way that we can later retrieve the actual class from the string (below). **utils.py_utils** (modified) Added two functions to deal with classes and their qualified names, that is, their full descriptive name including the module. On the one hand it allows us to retrieve a string from a given class, e.g. given `Module` class, return `torch.nn.Module` str. Conversly, a function is added to convert such a full qualified name into a class. For instance, given the string `torch.nn.Module`, return the `Module` class. These straightforward methods allow us to interchangeably use classes and strings without any needed user interaction - they can just register a class, and behind the scenes `DatasetsRegistry` converts these to deterministic strings. **fingerprint** (modified) Updated Hasher.hash so that if the object to hash is an instance of a class in the registry, the registered function is used to hash the object instead of the default behavior. To do so we iterate over the registry `hashers` and convert its keys (strings) into classes, and then we can use `isinstance`. ```python # Check if the current object is an instance that is # applicable to the user-defined hashers. If so, hash # with the user-defined function for full_module_name, func in hashers.get_all().items(): registered_cls = get_cls_from_qualname(full_module_name) if isinstance(value, registered_cls): return func(value) ``` **Putting it all together** To test this, you can try the following example with spaCy. First install spaCy from source and checkout a specific commit. ```shell git clone https://github.com/explosion/spaCy.git cd spaCy/ git checkout cab9209c3dfcd1b75dfe5657f10e52c4d847a3cf cd .. git clone https://github.com/BramVanroy/datasets.git cd datasets git checkout registry pip install -e . pip install ../spaCy spacy download en_core_web_sm ``` Now you can run the following script. 
By default it will use the custom hasher function for the Language object. You can enable the default behavior by commenting out `@hashers.register...`. ```python import spacy from datasets.fingerprint import Hasher from datasets.utils.registry import hashers # Register a function so that when the Hasher encounters a spacy.Language object # it uses this custom function to hash instead of the default @hashers.register(spacy.Language) def hash_spacy_language(nlp): return Hasher.hash(nlp.to_bytes()) def main(): print(hashers.get_all()) nlp = spacy.load("en_core_web_sm") dump1 = Hasher.hash(nlp) nlp = spacy.load("en_core_web_sm") dump2 = Hasher.hash(nlp) print(dump1) # succeeds when using the registered custom function # fails if using the default assert dump1 == dump2 if __name__ == '__main__': main() ``` To do ==== - The above is just a proof-of-concept. I am open to changes/suggestions - Tests still need to be written - We should consider whether we can make `DatasetsRegistry` very restrictive and ONLY allowing classes. That would make testing easier - otherwise we also need to test for other sorts of objects. - Maybe the `hashers` definition is better suited in `fingerprint`? - Documentation/examples need to be updated - Not sure why the logger is not working in `hash()` - `get_cls_from_qualname` might need a fail-safe: is it possible for a full_qualname to not have a module, and if so how do we deal with that?
true
1,044,099,561
https://api.github.com/repos/huggingface/datasets/issues/3205
https://github.com/huggingface/datasets/pull/3205
3,205
Add MultiDoc2Dial Dataset
closed
4
2021-11-03T20:48:31
2021-11-24T17:32:49
2021-11-24T16:55:08
sivasankalpp
[]
This PR adds the MultiDoc2Dial dataset introduced in this [paper](https://arxiv.org/pdf/2109.12595v1.pdf)
true
1,043,707,307
https://api.github.com/repos/huggingface/datasets/issues/3204
https://github.com/huggingface/datasets/issues/3204
3,204
FileNotFoundError for TupleIE dataset
closed
3
2021-11-03T14:56:55
2021-11-05T15:51:15
2021-11-05T14:16:05
arda-vianai
[ "bug" ]
Hi, `dataset = datasets.load_dataset('tuple_ie', 'all')` returns a `FileNotFoundError`. Is the data not available? Many thanks.
false
1,043,552,766
https://api.github.com/repos/huggingface/datasets/issues/3203
https://github.com/huggingface/datasets/pull/3203
3,203
Updated: DaNE - updated URL for download
closed
3
2021-11-03T12:55:13
2021-11-04T13:14:36
2021-11-04T11:46:43
MalteHB
[]
It seems that DaNLP has updated their download URLs, so the URL also needs to be updated here...
true
1,043,213,660
https://api.github.com/repos/huggingface/datasets/issues/3202
https://github.com/huggingface/datasets/issues/3202
3,202
Add mIoU metric
closed
1
2021-11-03T08:42:32
2022-06-01T17:39:05
2022-06-01T17:39:04
NielsRogge
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Recently, some semantic segmentation models were added to HuggingFace Transformers, including [SegFormer](https://huggingface.co/transformers/model_doc/segformer.html) and [BEiT](https://huggingface.co/transformers/model_doc/beit.html). Semantic segmentation (which is the task of labeling every pixel of an image with a corresponding class) is typically evaluated using the mean Intersection over Union (mIoU). Together with the upcoming Image Feature, adding this metric could be very handy when creating example scripts to fine-tune any Transformer-based model on a semantic segmentation dataset. An implementation can be found [here](https://github.com/open-mmlab/mmsegmentation/blob/504965184c3e6bc9ec43af54237129ef21981a5f/mmseg/core/evaluation/metrics.py#L132), for instance.
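For reference, the metric reduces to a per-class intersection-over-union averaged over the classes present; below is a minimal NumPy sketch that ignores `ignore_index` and the other options handled by the mmsegmentation implementation.

```python
import numpy as np

def mean_iou(pred, label, num_classes):
    """Compute mean IoU between two integer label maps of the same shape."""
    ious = []
    for cls in range(num_classes):
        pred_mask = pred == cls
        label_mask = label == cls
        union = np.logical_or(pred_mask, label_mask).sum()
        if union == 0:
            # Class absent from both prediction and ground truth: skip it
            continue
        intersection = np.logical_and(pred_mask, label_mask).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

pred = np.array([[0, 0, 1], [1, 2, 2]])
label = np.array([[0, 1, 1], [1, 2, 2]])
print(mean_iou(pred, label, num_classes=3))  # ~0.722
```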
false
1,043,209,142
https://api.github.com/repos/huggingface/datasets/issues/3201
https://github.com/huggingface/datasets/issues/3201
3,201
Add GSM8K dataset
closed
1
2021-11-03T08:36:44
2022-04-13T11:56:12
2022-04-13T11:56:11
NielsRogge
[ "dataset request" ]
## Adding a Dataset - **Name:** GSM8K (short for Grade School Math 8k) - **Description:** GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. - **Paper:** https://openai.com/blog/grade-school-math/ - **Data:** https://github.com/openai/grade-school-math - **Motivation:** The dataset is useful to investigate the reasoning abilities of large Transformer models, such as GPT-3. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false