Column summary (for string and list columns the min/max values refer to lengths; for numeric and timestamp columns they refer to values):

| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.29B |
| url | string (length) | 58 | 61 |
| html_url | string (length) | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | string (length) | 1 | 290 |
| state | string (2 classes) | n/a | n/a |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string (length) | 3 | 26 |
| labels | list (length) | 0 | 4 |
| body | string (length) | 0 | 228k |
| is_pull_request | bool (2 classes) | n/a | n/a |
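The records listed below follow this schema, one field per line in the column order above. For quick programmatic inspection, here is a minimal sketch using the `datasets` library; note that the repository id is a placeholder, since the actual Hub location of this data is not stated here:

```python
from datasets import load_dataset

# Hypothetical repository id -- replace with the dataset's actual location on the Hub.
issues = load_dataset("user/github-issues", split="train")

# The features mirror the schema above: issue/PR metadata plus the full markdown body.
print(issues.features)

# Example: separate pull requests from plain issues and count the closed PRs.
closed_prs = issues.filter(lambda ex: ex["is_pull_request"] and ex["state"] == "closed")
print(len(closed_prs))
```

The `is_pull_request` flag is what distinguishes pull requests from plain issues in the records that follow.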
936,771,339
https://api.github.com/repos/huggingface/datasets/issues/2587
https://github.com/huggingface/datasets/pull/2587
2,587
Add aiohttp to tests extras require
closed
0
2021-07-05T07:14:01
2021-07-05T09:04:38
2021-07-05T09:04:38
albertvillanova
[]
Currently, none of the streaming tests are run within our CI test suite, because the streaming tests require aiohttp and this is missing from our tests extras require dependencies. Our CI test suite should be exhaustive and test all the library functionalities.
true
936,747,588
https://api.github.com/repos/huggingface/datasets/issues/2586
https://github.com/huggingface/datasets/pull/2586
2,586
Fix misalignment in SQuAD
closed
0
2021-07-05T06:42:20
2021-07-12T14:11:10
2021-07-07T13:18:51
albertvillanova
[]
Fix misalignment between:
- the answer text and
- the answer_start within the context

by keeping original leading blank spaces in the context.

Fix #2585.
true
936,484,419
https://api.github.com/repos/huggingface/datasets/issues/2585
https://github.com/huggingface/datasets/issues/2585
2,585
sqaud_v2 dataset contains misalignment between the answer text and the context value at the answer index
closed
2
2021-07-04T15:39:49
2021-07-07T13:18:51
2021-07-07T13:18:51
mmajurski
[ "bug" ]
## Describe the bug

The built in huggingface squad_v2 dataset that you can access via datasets.load_dataset contains mis-alignment between the answers['text'] and the characters in the context at the location specified by answers['answer_start'].

For example:
id = '56d1f453e7d4791d009025bd'
answers = {'text': ['Pure Land'], 'answer_start': [146]}

However the actual text in the context at location 146 is 'ure Land,' which is an off-by-one error from the correct answer.

## Steps to reproduce the bug

```python
import datasets

def check_context_answer_alignment(example):
    for a_idx in range(len(example['answers']['text'])):
        # check raw dataset for answer consistency between context and answer
        answer_text = example['answers']['text'][a_idx]
        a_st_idx = example['answers']['answer_start'][a_idx]
        a_end_idx = a_st_idx + len(example['answers']['text'][a_idx])
        answer_text_from_context = example['context'][a_st_idx:a_end_idx]
        if answer_text != answer_text_from_context:
            # print(example['id'])
            return False
    return True

dataset = datasets.load_dataset('squad_v2', split='train', keep_in_memory=True)
start_len = len(dataset)
dataset = dataset.filter(check_context_answer_alignment, num_proc=1, keep_in_memory=True)
end_len = len(dataset)
print('{} instances contain mis-alignment between the answer text and answer index.'.format(start_len - end_len))
```

## Expected results

This code should result in 0 rows being filtered out from the dataset.

## Actual results

This filter command results in 258 rows being flagged as containing a discrepancy between the text contained within answers['text'] and the text in example['context'] at the answers['answer_start'] location. This code will reproduce the problem and produce the following count: "258 instances contain mis-alignment between the answer text and answer index."
## Environment info Steps to rebuilt the Conda environment: ``` # create a virtual environment to stuff all these packages into conda create -n round8 python=3.8 -y # activate the virtual environment conda activate round8 # install pytorch (best done through conda to handle cuda dependencies) conda install pytorch torchvision torchtext cudatoolkit=11.1 -c pytorch-lts -c nvidia pip install jsonpickle transformers datasets matplotlib ``` OS: Ubuntu 20.04 Python 3.8 Result of `conda env export`: ``` name: round8 channels: - pytorch-lts - nvidia - defaults dependencies: - _libgcc_mutex=0.1=main - _openmp_mutex=4.5=1_gnu - blas=1.0=mkl - brotlipy=0.7.0=py38h27cfd23_1003 - bzip2=1.0.8=h7b6447c_0 - ca-certificates=2021.5.25=h06a4308_1 - certifi=2021.5.30=py38h06a4308_0 - cffi=1.14.5=py38h261ae71_0 - chardet=4.0.0=py38h06a4308_1003 - cryptography=3.4.7=py38hd23ed53_0 - cudatoolkit=11.1.74=h6bb024c_0 - ffmpeg=4.2.2=h20bf706_0 - freetype=2.10.4=h5ab3b9f_0 - gmp=6.2.1=h2531618_2 - gnutls=3.6.15=he1e5248_0 - idna=2.10=pyhd3eb1b0_0 - intel-openmp=2021.2.0=h06a4308_610 - jpeg=9b=h024ee3a_2 - lame=3.100=h7b6447c_0 - lcms2=2.12=h3be6417_0 - ld_impl_linux-64=2.35.1=h7274673_9 - libffi=3.3=he6710b0_2 - libgcc-ng=9.3.0=h5101ec6_17 - libgomp=9.3.0=h5101ec6_17 - libidn2=2.3.1=h27cfd23_0 - libopus=1.3.1=h7b6447c_0 - libpng=1.6.37=hbc83047_0 - libstdcxx-ng=9.3.0=hd4cf53a_17 - libtasn1=4.16.0=h27cfd23_0 - libtiff=4.2.0=h85742a9_0 - libunistring=0.9.10=h27cfd23_0 - libuv=1.40.0=h7b6447c_0 - libvpx=1.7.0=h439df22_0 - libwebp-base=1.2.0=h27cfd23_0 - lz4-c=1.9.3=h2531618_0 - mkl=2021.2.0=h06a4308_296 - mkl-service=2.3.0=py38h27cfd23_1 - mkl_fft=1.3.0=py38h42c9631_2 - mkl_random=1.2.1=py38ha9443f7_2 - ncurses=6.2=he6710b0_1 - nettle=3.7.3=hbbd107a_1 - ninja=1.10.2=hff7bd54_1 - numpy=1.20.2=py38h2d18471_0 - numpy-base=1.20.2=py38hfae3a4d_0 - olefile=0.46=py_0 - openh264=2.1.0=hd408876_0 - openssl=1.1.1k=h27cfd23_0 - pillow=8.2.0=py38he98fc37_0 - pip=21.1.2=py38h06a4308_0 - pycparser=2.20=py_2 - pyopenssl=20.0.1=pyhd3eb1b0_1 - pysocks=1.7.1=py38h06a4308_0 - python=3.8.10=h12debd9_8 - pytorch=1.8.1=py3.8_cuda11.1_cudnn8.0.5_0 - readline=8.1=h27cfd23_0 - requests=2.25.1=pyhd3eb1b0_0 - setuptools=52.0.0=py38h06a4308_0 - six=1.16.0=pyhd3eb1b0_0 - sqlite=3.35.4=hdfb4753_0 - tk=8.6.10=hbc83047_0 - torchtext=0.9.1=py38 - torchvision=0.9.1=py38_cu111 - typing_extensions=3.7.4.3=pyha847dfd_0 - urllib3=1.26.4=pyhd3eb1b0_0 - wheel=0.36.2=pyhd3eb1b0_0 - x264=1!157.20191217=h7b6447c_0 - xz=5.2.5=h7b6447c_0 - zlib=1.2.11=h7b6447c_3 - zstd=1.4.9=haebb681_0 - pip: - click==8.0.1 - cycler==0.10.0 - datasets==1.8.0 - dill==0.3.4 - filelock==3.0.12 - fsspec==2021.6.0 - huggingface-hub==0.0.8 - joblib==1.0.1 - jsonpickle==2.0.0 - kiwisolver==1.3.1 - matplotlib==3.4.2 - multiprocess==0.70.12.2 - packaging==20.9 - pandas==1.2.4 - pyarrow==3.0.0 - pyparsing==2.4.7 - python-dateutil==2.8.1 - pytz==2021.1 - regex==2021.4.4 - sacremoses==0.0.45 - tokenizers==0.10.3 - tqdm==4.49.0 - transformers==4.6.1 - xxhash==2.0.2 prefix: /home/mmajurski/anaconda3/envs/round8 ```
false
936,049,736
https://api.github.com/repos/huggingface/datasets/issues/2584
https://github.com/huggingface/datasets/pull/2584
2,584
wi_locness: reference latest leaderboard on codalab
closed
0
2021-07-02T20:26:22
2021-07-05T09:06:14
2021-07-05T09:06:14
aseifert
[]
The dataset's author asked me to put this codalab link into the dataset's README.
true
936,034,976
https://api.github.com/repos/huggingface/datasets/issues/2583
https://github.com/huggingface/datasets/issues/2583
2,583
Error iteration over IterableDataset using Torch DataLoader
closed
2
2021-07-02T19:55:58
2021-07-20T09:04:45
2021-07-05T23:48:23
LeenaShekhar
[ "bug" ]
## Describe the bug I have an IterableDataset (created using streaming=True) and I am trying to create batches using Torch DataLoader class by passing this IterableDataset to it. This throws error which is pasted below. I can do the same by using Torch IterableDataset. One thing I noticed is that in the former case when I look at the dataloader.sampler class I get torch.utils.data.sampler.SequentialSampler while the latter one gives torch.utils.data.dataloader._InfiniteConstantSampler. I am not sure if this is how it is meant to be used, but that's what seemed reasonable to me. ## Steps to reproduce the bug 1. Does not work. ```python >>> from datasets import load_dataset >>> dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True) >>> dataloader = torch.utils.data.DataLoader(dataset, batch_size=4) >>> dataloader.sampler <torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208> >>> for batch in dataloader: ... print(batch) ``` 2. Works. ```python import torch from torch.utils.data import Dataset, IterableDataset, DataLoader class CustomIterableDataset(IterableDataset): 'Characterizes a dataset for PyTorch' def __init__(self, data): 'Initialization' self.data = data def __iter__(self): return iter(self.data) data = list(range(12)) dataset = CustomIterableDataset(data) dataloader = DataLoader(dataset, batch_size=4) print("dataloader: ", dataloader.sampler) for batch in dataloader: print(batch) ``` ## Expected results To get batches of data with the batch size as 4. Output from the latter one (2) though Datasource is different here so actual data is different. dataloader: <torch.utils.data.dataloader._InfiniteConstantSampler object at 0x7f1cc29e2c50> tensor([0, 1, 2, 3]) tensor([4, 5, 6, 7]) tensor([ 8, 9, 10, 11]) ## Actual results <torch.utils.data.sampler.SequentialSampler object at 0x7f245a510208> ... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 435, in __next__ data = self._next_data() File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 474, in _next_data index = self._next_index() # may raise StopIteration File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 427, in _next_index return next(self._sampler_iter) # may raise StopIteration File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 227, in __iter__ for idx in self.sampler: File "/data/leshekha/lib/HFDatasets/lib/python3.6/site-packages/torch/utils/data/sampler.py", line 67, in __iter__ return iter(range(len(self.data_source))) TypeError: object of type 'IterableDataset' has no len() ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: '1.8.1.dev0' - Platform: Linux - Python version: Python 3.6.8 - PyArrow version: '3.0.0'
false
935,859,104
https://api.github.com/repos/huggingface/datasets/issues/2582
https://github.com/huggingface/datasets/pull/2582
2,582
Add skip and take
closed
3
2021-07-02T15:10:19
2021-07-05T16:06:40
2021-07-05T16:06:39
lhoestq
[]
As discussed in https://github.com/huggingface/datasets/pull/2375#discussion_r657084544, I added the `IterableDataset.skip` and `IterableDataset.take` methods that allow basic splitting of iterable datasets. You can create a new dataset with the first `n` examples using `IterableDataset.take()`, or you can get a dataset with the rest of the examples by skipping the first `n` examples with `IterableDataset.skip()`.

One implementation detail: using `take` (or `skip`) prevents future dataset shuffling from shuffling the dataset shards, otherwise the taken examples could come from other shards. In this case it only uses the shuffle buffer. I would have loved to allow the shards of the taken examples to be shuffled anyway, but since we don't know in advance the length of each shard, we don't know which shards to take or skip. I think this is OK though, since users can shuffle before doing take or skip. I mentioned this in the documentation. cc @vblagoje @lewtun
true
935,783,588
https://api.github.com/repos/huggingface/datasets/issues/2581
https://github.com/huggingface/datasets/pull/2581
2,581
Faster search_batch for ElasticsearchIndex due to threading
closed
0
2021-07-02T13:42:07
2021-07-12T14:13:46
2021-07-12T09:52:51
mwrzalik
[]
Hey, I think it makes sense to perform search_batch threaded, so ES can perform search in parallel. Cheers!
true
935,767,421
https://api.github.com/repos/huggingface/datasets/issues/2580
https://github.com/huggingface/datasets/pull/2580
2,580
Fix Counter import
closed
0
2021-07-02T13:21:48
2021-07-02T14:37:47
2021-07-02T14:37:46
albertvillanova
[]
Import from `collections` instead of `typing`.
true
935,486,894
https://api.github.com/repos/huggingface/datasets/issues/2579
https://github.com/huggingface/datasets/pull/2579
2,579
Fix BibTeX entry
closed
0
2021-07-02T07:10:40
2021-07-02T07:33:44
2021-07-02T07:33:44
albertvillanova
[]
Add missing contributor to BibTeX entry. cc: @abhishekkrthakur @thomwolf
true
935,187,497
https://api.github.com/repos/huggingface/datasets/issues/2578
https://github.com/huggingface/datasets/pull/2578
2,578
Support Zstandard compressed files
closed
8
2021-07-01T20:22:34
2021-08-11T14:46:24
2021-07-05T10:50:27
albertvillanova
[]
Close #2572. cc: @thomwolf
true
934,986,761
https://api.github.com/repos/huggingface/datasets/issues/2576
https://github.com/huggingface/datasets/pull/2576
2,576
Add mC4
closed
0
2021-07-01T15:51:25
2021-07-02T14:50:56
2021-07-02T14:50:55
lhoestq
[]
AllenAI is now hosting the processed C4 and mC4 datasets in this repo: https://huggingface.co/datasets/allenai/c4. Thanks a lot to them!

In this PR I added the mC4 dataset builder. It supports 108 languages. You can load it with:

```python
from datasets import load_dataset

en_mc4 = load_dataset("mc4", "en")
fr_mc4 = load_dataset("mc4", "fr")
en_and_fr_mc4 = load_dataset("mc4", languages=["en", "fr"])
```

It also supports streaming, if you don't want to download hundreds of GB of data:

```python
en_mc4 = load_dataset("mc4", "en", streaming=True)
```

Regarding the dataset_infos.json, I will add them once I have them. We can also work on the dataset card, which will be at https://huggingface.co/datasets/mc4. For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections.
true
934,876,496
https://api.github.com/repos/huggingface/datasets/issues/2575
https://github.com/huggingface/datasets/pull/2575
2,575
Add C4
closed
0
2021-07-01T13:58:08
2021-07-02T14:50:23
2021-07-02T14:50:23
lhoestq
[]
The old code for the C4 dataset was to generate C4 with Apache Beam, as in TensorFlow Datasets. However, AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4. Thanks a lot to them for their amazing work!

In this PR I changed the script to download and prepare the data directly from this repo. It has 4 variants: en, en.noblocklist, en.noclean, realnewslike. You can load it with:

```python
from datasets import load_dataset

c4 = load_dataset("c4", "en")
```

It also supports streaming, if you don't want to download hundreds of GB of data:

```python
c4 = load_dataset("c4", "en", streaming=True)
```

Regarding the dataset_infos.json, I haven't added the infos for en.noclean; I will add them once I have them. We can also work on the dataset card at https://huggingface.co/datasets/c4. For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections.
true
934,632,378
https://api.github.com/repos/huggingface/datasets/issues/2574
https://github.com/huggingface/datasets/pull/2574
2,574
Add streaming in load a dataset docs
closed
0
2021-07-01T09:32:53
2021-07-01T14:12:22
2021-07-01T14:12:21
lhoestq
[]
Mention dataset streaming on the "loading a dataset" page of the documentation
true
934,584,745
https://api.github.com/repos/huggingface/datasets/issues/2573
https://github.com/huggingface/datasets/issues/2573
2,573
Finding right block-size with JSON loading difficult for user
open
1
2021-07-01T08:48:35
2021-07-01T19:10:53
null
albertvillanova
[ "bug" ]
As reported by @thomwolf, while loading a JSON Lines file with the "json" loading script, he gets:

> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
false
934,573,767
https://api.github.com/repos/huggingface/datasets/issues/2572
https://github.com/huggingface/datasets/issues/2572
2,572
Support Zstandard compressed files
closed
5
2021-07-01T08:37:04
2023-01-03T15:34:01
2021-07-05T10:50:27
albertvillanova
[ "enhancement" ]
Add support for Zstandard compressed files: https://facebook.github.io/zstd/
false
933,791,018
https://api.github.com/repos/huggingface/datasets/issues/2571
https://github.com/huggingface/datasets/pull/2571
2,571
Filter expected warning log from transformers
closed
1
2021-06-30T14:48:19
2021-07-02T04:08:17
2021-07-02T04:08:17
albertvillanova
[]
Close #2569.
true
933,402,521
https://api.github.com/repos/huggingface/datasets/issues/2570
https://github.com/huggingface/datasets/pull/2570
2,570
Minor fix docs format for bertscore
closed
0
2021-06-30T07:42:12
2021-06-30T15:31:01
2021-06-30T15:31:01
albertvillanova
[]
Minor fix docs format for bertscore: - link to README - format of KWARGS_DESCRIPTION
true
933,015,797
https://api.github.com/repos/huggingface/datasets/issues/2569
https://github.com/huggingface/datasets/issues/2569
2,569
Weights of model checkpoint not initialized for RobertaModel for Bertscore
closed
2
2021-06-29T18:55:23
2021-07-01T07:08:59
2021-06-30T07:35:49
suzyahyah
[ "bug" ]
When applying bertscore out of the box, ```Some weights of the model checkpoint at roberta-large were not used when initializing RobertaModel: ['lm_head.decoder.weight', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight']``` Following the typical usage from https://huggingface.co/docs/datasets/loading_metrics.html ``` from datasets import load_metric metric = load_metric('bertscore') # Example of typical usage for batch in dataset: inputs, references = batch predictions = model(inputs) metric.add_batch(predictions=predictions, references=references) score = metric.compute(lang="en") #score = metric.compute(model_type="roberta-large") # gives the same error ``` I am concerned about this because my usage shouldn't require any further fine-tuning and most people would expect to use BertScore out of the box? I realised the huggingface code is a wrapper around https://github.com/Tiiiger/bert_score, but I think this repo is anyway relying on the model code and weights from huggingface repo.... ## Environment info - `datasets` version: 1.7.0 - Platform: Linux-5.4.0-1041-aws-x86_64-with-glibc2.27 - Python version: 3.9.5 - PyArrow version: 3.0.0
false
932,934,795
https://api.github.com/repos/huggingface/datasets/issues/2568
https://github.com/huggingface/datasets/pull/2568
2,568
Add interleave_datasets for map-style datasets
closed
0
2021-06-29T17:19:24
2021-07-01T09:33:34
2021-07-01T09:33:33
lhoestq
[]
### Add interleave_datasets for map-style datasets

Add support for map-style datasets (i.e. `Dataset` objects) in `interleave_datasets`. It was only supporting iterable datasets (i.e. `IterableDataset` objects).

### Implementation details

It works by concatenating the datasets and then re-ordering the indices to make the new dataset.

### TODO

- [x] tests
- [x] docs

Close #2563
true
932,933,536
https://api.github.com/repos/huggingface/datasets/issues/2567
https://github.com/huggingface/datasets/pull/2567
2,567
Add ASR task and new languages to resources
closed
0
2021-06-29T17:18:01
2021-07-01T09:42:23
2021-07-01T09:42:09
lewtun
[]
This PR adds a new `automatic-speech-recognition` task to the list of supported tasks in `tasks.json` and also includes a few new languages missing from `common_voice`. Note: I used the [Papers with Code list](https://www.paperswithcode.com/area/speech/speech-recognition) as inspiration for the ASR subtasks
true
932,804,725
https://api.github.com/repos/huggingface/datasets/issues/2566
https://github.com/huggingface/datasets/pull/2566
2,566
fix Dataset.map when num_procs > num rows
closed
0
2021-06-29T15:07:07
2021-07-01T09:11:13
2021-07-01T09:11:13
connor-mccarthy
[]
closes #2470

## Testing notes

To run updated tests:

```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```

With Python code (to view warning):

```python
from datasets import Dataset

dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
```
true
932,445,439
https://api.github.com/repos/huggingface/datasets/issues/2565
https://github.com/huggingface/datasets/pull/2565
2,565
Inject templates for ASR datasets
closed
2
2021-06-29T10:02:01
2021-07-05T14:26:26
2021-07-05T14:26:26
lewtun
[]
This PR adds ASR templates for 5 of the most common speech datasets on the Hub, where "common" is defined by the number of models trained on them. I also fixed a bunch of the tags in the READMEs 😎
true
932,389,639
https://api.github.com/repos/huggingface/datasets/issues/2564
https://github.com/huggingface/datasets/issues/2564
2,564
concatenate_datasets for iterable datasets
closed
2
2021-06-29T08:59:41
2022-06-28T21:15:04
2022-06-28T21:15:04
lhoestq
[]
Currently `concatenate_datasets` only works for map-style `Dataset`. It would be nice to have it work for `IterableDataset` objects as well. It would simply chain the iterables of the iterable datasets.
false
932,387,639
https://api.github.com/repos/huggingface/datasets/issues/2563
https://github.com/huggingface/datasets/issues/2563
2,563
interleave_datasets for map-style datasets
closed
0
2021-06-29T08:57:24
2021-07-01T09:33:33
2021-07-01T09:33:33
lhoestq
[]
Currently the `interleave_datasets` functions only works for `IterableDataset`. Let's make it work for map-style `Dataset` objects as well. It would work the same way: either alternate between the datasets in order or randomly given probabilities specified by the user.
false
932,333,436
https://api.github.com/repos/huggingface/datasets/issues/2562
https://github.com/huggingface/datasets/pull/2562
2,562
Minor fix in loading metrics docs
closed
0
2021-06-29T07:55:11
2021-06-29T17:21:22
2021-06-29T17:21:22
albertvillanova
[]
Make some minor fixes in "Loading metrics" docs.
true
932,321,725
https://api.github.com/repos/huggingface/datasets/issues/2561
https://github.com/huggingface/datasets/issues/2561
2,561
Existing cache for local dataset builder file updates is ignored with `ignore_verifications=True`
closed
4
2021-06-29T07:43:03
2022-08-04T11:58:36
2022-08-04T11:58:36
apsdehal
[ "bug" ]
## Describe the bug

If I have a local file defining a dataset builder class and I load it using the `load_dataset` functionality, the existing cache is ignored whenever the file is updated, even with `ignore_verifications=True`. This slows down debugging and cache generation for very large datasets.

## Steps to reproduce the bug

- Create a local dataset builder class
- Load the local builder class file using `load_dataset` and let the cache build
- Update the file's content
- The cache should be rebuilt.

## Expected results

With `ignore_verifications=True`, `load_dataset` should pick up the existing cache.

## Actual results

Creates a new cache.

## Environment info

- `datasets` version: 1.8.0
- Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.7
- PyArrow version: 3.0.0
false
932,143,634
https://api.github.com/repos/huggingface/datasets/issues/2560
https://github.com/huggingface/datasets/pull/2560
2,560
fix Dataset.map when num_procs > num rows
closed
3
2021-06-29T02:24:11
2021-06-29T15:00:18
2021-06-29T14:53:31
connor-mccarthy
[]
closes #2470

## Testing notes

To run updated tests:

```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```

With Python code (to view warning):

```python
from datasets import Dataset

dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map(lambda x: x, num_proc=10)
```
true
931,849,724
https://api.github.com/repos/huggingface/datasets/issues/2559
https://github.com/huggingface/datasets/issues/2559
2,559
Memory usage consistently increases when processing a dataset with `.map`
closed
2
2021-06-28T18:31:58
2023-07-20T13:34:10
2023-07-20T13:34:10
apsdehal
[ "bug" ]
## Describe the bug I have a HF dataset with image paths stored in it and I am trying to load those image paths using `.map` with `num_proc=80`. I am noticing that the memory usage consistently keeps on increasing with time. I tried using `DEFAULT_WRITER_BATCH_SIZE=10` in the builder to decrease arrow writer's batch size but that doesn't seem to help. ## Steps to reproduce the bug Providing code as it is would be hard. I can provide a MVP if that helps. ## Expected results Memory usage should become consistent after some time following the launch of processing. ## Actual results Memory usage keeps on increasing. ## Environment info - `datasets` version: 1.8.0 - Platform: Linux-5.4.0-52-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.7 - PyArrow version: 3.0.0
false
931,736,647
https://api.github.com/repos/huggingface/datasets/issues/2558
https://github.com/huggingface/datasets/pull/2558
2,558
Update: WebNLG - update checksums
closed
0
2021-06-28T16:16:37
2021-06-28T17:23:17
2021-06-28T17:23:16
lhoestq
[]
The master branch changed so I computed the new checksums. I also pinned a specific revision so that it doesn't happen again in the future. Fix https://github.com/huggingface/datasets/issues/2553
true
931,633,823
https://api.github.com/repos/huggingface/datasets/issues/2557
https://github.com/huggingface/datasets/pull/2557
2,557
Fix `fever` keys
closed
0
2021-06-28T14:27:02
2021-06-28T16:11:30
2021-06-28T16:11:29
lhoestq
[]
The keys had duplicates since they were reset to 0 after each file. I fixed it by taking the file index into account as well.
true
931,595,872
https://api.github.com/repos/huggingface/datasets/issues/2556
https://github.com/huggingface/datasets/issues/2556
2,556
Better DuplicateKeysError error to help the user debug the issue
closed
7
2021-06-28T13:50:57
2022-06-28T09:26:04
2022-06-28T09:26:04
lhoestq
[ "enhancement", "good first issue" ]
As mentioned in https://github.com/huggingface/datasets/issues/2552, it would be nice to improve the error message when a dataset fails to build because there are duplicate example keys. The current one is:

```python
datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !
Found duplicate Key: 48
Keys should be unique and deterministic in nature
```

and we could have something that guides the user to debugging the issue:

```python
DuplicateKeysError: both 42th and 1337th examples have the same keys `48`.
Please fix the dataset script at <path/to/the/dataset/script>
```
false
931,585,485
https://api.github.com/repos/huggingface/datasets/issues/2555
https://github.com/huggingface/datasets/pull/2555
2,555
Fix code_search_net keys
closed
1
2021-06-28T13:40:23
2021-09-02T08:24:43
2021-06-28T14:10:35
lhoestq
[]
There were duplicate keys in the `code_search_net` dataset, as reported in https://github.com/huggingface/datasets/issues/2552 I fixed the keys (it was an addition of the file and row indices, which was causing collisions) Fix #2552.
true
931,453,855
https://api.github.com/repos/huggingface/datasets/issues/2554
https://github.com/huggingface/datasets/issues/2554
2,554
Multilabel metrics not supported
closed
4
2021-06-28T11:09:46
2021-10-13T12:29:13
2021-07-08T08:40:15
GuillemGSubies
[ "bug" ]
When I try to use a metric like F1 macro I get the following error: ``` TypeError: int() argument must be a string, a bytes-like object or a number, not 'list' ``` There is an explicit casting here: https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/src/datasets/features.py#L274 And looks like this is because here https://github.com/huggingface/datasets/blob/fc79f61cbbcfa0e8c68b28c0a8257f17e768a075/metrics/f1/f1.py#L88 the features can only be integers, so we cannot use that F1 for multilabel. Instead, if I create the following F1 (ints replaced with sequence of ints), it will work: ```python class F1(datasets.Metric): def _info(self): return datasets.MetricInfo( description=_DESCRIPTION, citation=_CITATION, inputs_description=_KWARGS_DESCRIPTION, features=datasets.Features( { "predictions": datasets.Sequence(datasets.Value("int32")), "references": datasets.Sequence(datasets.Value("int32")), } ), reference_urls=["https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html"], ) def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None): return { "f1": f1_score( references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight, ), } ```
false
931,365,926
https://api.github.com/repos/huggingface/datasets/issues/2553
https://github.com/huggingface/datasets/issues/2553
2,553
load_dataset("web_nlg") NonMatchingChecksumError
closed
2
2021-06-28T09:26:46
2021-06-28T17:23:39
2021-06-28T17:23:16
alxthm
[ "bug" ]
Hi! It seems the WebNLG dataset gives a NonMatchingChecksumError.

## Steps to reproduce the bug

```python
from datasets import load_dataset

dataset = load_dataset('web_nlg', name="release_v3.0_en", split="dev")
```

Gives

```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://gitlab.com/shimorina/webnlg-dataset/-/archive/master/webnlg-dataset-master.zip']
```

## Environment info

- `datasets` version: 1.8.0
- Platform: macOS-11.3.1-x86_64-i386-64bit
- Python version: 3.9.4
- PyArrow version: 3.0.0

Also tested on Linux, with Python 3.6.8.
false
931,354,687
https://api.github.com/repos/huggingface/datasets/issues/2552
https://github.com/huggingface/datasets/issues/2552
2,552
Keys should be unique error on code_search_net
closed
8
2021-06-28T09:15:20
2021-09-06T14:08:30
2021-09-02T08:25:29
thomwolf
[ "bug" ]
## Describe the bug Loading `code_search_net` seems not possible at the moment. ## Steps to reproduce the bug ```python >>> load_dataset('code_search_net') Downloading: 8.50kB [00:00, 3.09MB/s] Downloading: 19.1kB [00:00, 10.1MB/s] No config specified, defaulting to: code_search_net/all Downloading and preparing dataset code_search_net/all (download: 4.77 GiB, generated: 5.99 GiB, post-processed: Unknown size, total: 10.76 GiB) to /Users/thomwolf/.cache/huggingface/datasets/code_search_net/all/1.0.0/b3e8278faf5d67da1d06981efbeac3b76a2900693bd2239bbca7a4a3b0d6e52a... Traceback (most recent call last): File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/builder.py", line 1067, in _prepare_split writer.write(example, key) File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 343, in write self.check_duplicate_keys() File "/Users/thomwolf/Documents/GitHub/datasets/src/datasets/arrow_writer.py", line 354, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 48 Keys should be unique and deterministic in nature ``` ## Environment info - `datasets` version: 1.8.1.dev0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 2.0.0
false
930,967,978
https://api.github.com/repos/huggingface/datasets/issues/2551
https://github.com/huggingface/datasets/pull/2551
2,551
Fix FileSystems documentation
closed
0
2021-06-27T16:18:42
2021-06-28T13:09:55
2021-06-28T13:09:54
connor-mccarthy
[]
### What this fixes:

This PR resolves several issues I discovered in the documentation on the `datasets.filesystems` module ([this page](https://huggingface.co/docs/datasets/filesystems.html)).

### What were the issues?

When I originally tried implementing the code examples I faced several bugs attributed to:

- out of date [botocore](https://github.com/boto/botocore) call signatures
- capitalization errors in the `S3FileSystem` class name (written as `S3Filesystem` in one place)
- call signature errors for the `S3FileSystem` class constructor (uses parameter `sessions` instead of `session` in some places) (see [`s3fs`](https://s3fs.readthedocs.io/en/latest/api.html#s3fs.core.S3FileSystem) for where this constructor signature is defined)

### Testing/reviewing notes

Instructions for generating the documentation locally: [here](https://github.com/huggingface/datasets/tree/master/docs#generating-the-documentation).
true
930,951,287
https://api.github.com/repos/huggingface/datasets/issues/2550
https://github.com/huggingface/datasets/issues/2550
2,550
Allow for incremental cumulative metric updates in a distributed setup
closed
0
2021-06-27T15:00:58
2021-09-26T13:42:39
2021-09-26T13:42:39
eladsegal
[ "enhancement" ]
Currently, using a metric allows for one of the following: - Per example/batch metrics - Cumulative metrics over the whole data What I'd like is to have an efficient way to get cumulative metrics over the examples/batches added so far, in order to display it as part of the progress bar during training/evaluation. Since most metrics are just an average of per-example metrics (which aren't?), an efficient calculation can be done as follows: `((score_cumulative * n_cumulative) + (score_new * n_new)) / (n_cumulative+ n_new)` where `n` and `score` refer to number of examples and metric score, `cumulative` refers to the cumulative metric and `new` refers to the addition of new examples. If you don't want to add this capability in the library, a simple solution exists so users can do it themselves: It is easy to implement for a single process setup, but in a distributed one there is no way to get the correct `n_new`. The solution for this is to return the number of examples that was used to compute the metrics in `.compute()` by adding the following line here: https://github.com/huggingface/datasets/blob/5a3221785311d0ce86c2785b765e86bd6997d516/src/datasets/metric.py#L402-L403 ``` output["number_of_examples"] = len(predictions) ``` and also remove the log message here so it won't spam: https://github.com/huggingface/datasets/blob/3db67f5ff6cbf807b129d2b4d1107af27623b608/src/datasets/metric.py#L411 If this change is ok with you, I'll open a pull request.
false
929,819,093
https://api.github.com/repos/huggingface/datasets/issues/2549
https://github.com/huggingface/datasets/issues/2549
2,549
Handling unlabeled datasets
closed
2
2021-06-25T04:32:23
2021-06-25T21:07:57
2021-06-25T21:07:56
nelson-liu
[ "enhancement" ]
Hi! Is there a way for datasets to produce unlabeled instances (e.g., the `ClassLabel` can be nullable). For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error: ``` File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split example = self.info.features.encode_example(record) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example return encode_nested_example(self, example) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp> k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example return schema.encode_example(obj) File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example if not -1 <= example_data < self.num_classes: TypeError: '<=' not supported between instances of 'int' and 'NoneType' ``` What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers?
false
929,232,831
https://api.github.com/repos/huggingface/datasets/issues/2548
https://github.com/huggingface/datasets/issues/2548
2,548
Field order issue in loading json
closed
1
2021-06-24T13:29:53
2021-06-24T14:36:43
2021-06-24T14:34:05
luyug
[ "bug" ]
## Describe the bug

The `load_dataset` function expects columns in alphabetical order when loading json files. A similar bug was previously reported for csv in #623 and fixed in #684.

## Steps to reproduce the bug

For a json file `j.json`,

```
{"c":321, "a": 1, "b": 2}
```

Running the following,

```
f = datasets.Features({'a': Value('int32'), 'b': Value('int32'), 'c': Value('int32')})
json_data = datasets.load_dataset('json', data_files='j.json', features=f)
```

## Expected results

A successful load.

## Actual results

```
File "pyarrow/table.pxi", line 1409, in pyarrow.lib.Table.cast
ValueError: Target schema's field names are not matching the table's field names: ['c', 'a', 'b'], ['a', 'b', 'c']
```

## Environment info

- `datasets` version: 1.8.0
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 3.0.0
false
929,192,329
https://api.github.com/repos/huggingface/datasets/issues/2547
https://github.com/huggingface/datasets/issues/2547
2,547
Dataset load_from_disk is too slow
open
3
2021-06-24T12:45:44
2021-06-25T14:56:38
null
avacaondata
[ "bug" ]
@lhoestq ## Describe the bug It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of a language model training, therefore I'm wasting 100$ each time I have to load the dataset from disk again (because the spot instance was stopped by aws and I need to relaunch it for example). ## Steps to reproduce the bug Just get the oscar in spanish (around 150GGB) and try to first save in disk and then load the processed dataset. It's not dependent on the task you're doing, it just depends on the size of the text dataset. ## Expected results I expect the dataset to be loaded in a normal time, by using the whole machine for loading it, I mean if you store the dataset in multiple files (.arrow) and then load it from multiple files, you can use multiprocessing for that and therefore don't waste so much time. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Ubuntu 18 - Python version: 3.8 I've seen you're planning to include a streaming mode for load_dataset, but that only saves the downloading and processing time, that's not being a problem for me, you cannot save the pure loading from disk time, therefore that's not a solution for my use case or for anyone who wants to use your library for training a language model.
false
929,091,689
https://api.github.com/repos/huggingface/datasets/issues/2546
https://github.com/huggingface/datasets/pull/2546
2,546
Add license to the Cambridge English Write & Improve + LOCNESS dataset card
closed
0
2021-06-24T10:39:29
2021-06-24T10:52:01
2021-06-24T10:52:01
lhoestq
[]
As noticed in https://github.com/huggingface/datasets/pull/2539, the licensing information was missing for this dataset. I added it and I also filled a few other empty sections.
true
929,016,580
https://api.github.com/repos/huggingface/datasets/issues/2545
https://github.com/huggingface/datasets/pull/2545
2,545
Fix DuplicatedKeysError in drop dataset
closed
0
2021-06-24T09:10:39
2021-06-24T14:57:08
2021-06-24T14:57:08
albertvillanova
[]
Close #2542. cc: @VictorSanh.
true
928,900,827
https://api.github.com/repos/huggingface/datasets/issues/2544
https://github.com/huggingface/datasets/pull/2544
2,544
Fix logging levels
closed
0
2021-06-24T06:41:36
2021-06-25T13:40:19
2021-06-25T13:40:19
albertvillanova
[]
Sometimes default `datasets` logging can be too verbose. One approach could be reducing some logging levels, from info to debug, or from warning to info. Close #2543. cc: @stas00
true
928,571,915
https://api.github.com/repos/huggingface/datasets/issues/2543
https://github.com/huggingface/datasets/issues/2543
2,543
switching some low-level log.info's to log.debug?
closed
1
2021-06-23T19:26:55
2021-06-25T13:40:19
2021-06-25T13:40:19
stas00
[ "enhancement" ]
In https://github.com/huggingface/transformers/pull/12276 we are now changing the examples to have `datasets` on the same log level as `transformers`, so that one setting can do a consistent logging across all involved components. The trouble is that now we get a ton of these: ``` 06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 acquired on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock 06/23/2021 12:15:31 - INFO - datasets.arrow_writer - Done writing 50 examples in 12280 bytes /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow. 06/23/2021 12:15:31 - INFO - datasets.arrow_dataset - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns. 06/23/2021 12:15:31 - INFO - datasets.utils.filelock - Lock 139627640431136 released on /home/stas/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock ``` May I suggest that these can be `log.debug` as it's no informative to the user. More examples: these are not informative - too much information: ``` 06/23/2021 12:14:26 - INFO - datasets.load - Checking /home/stas/.cache/huggingface/datasets/downloads/459933f1fe47711fad2f6ff8110014ff189120b45ad159ef5b8e90ea43a174fa.e23e7d1259a8c6274a82a42a8936dd1b87225302c6dc9b7261beb3bc2daac640.py for additional imports. 06/23/2021 12:14:27 - INFO - datasets.builder - Constructing Dataset for split train, validation, test, from /home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a ``` While these are: ``` 06/23/2021 12:14:27 - INFO - datasets.info - Loading Dataset Infos from /home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt16/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a 06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a) ``` I also realize that `transformers` examples don't have do use `info` for `datasets` to let the default `warning` keep logging to less noisy. But I think currently the log levels are slightly misused and skewed by 1 level. Many `warnings` will better be `info`s and most `info`s be `debug`. e.g.: ``` 06/23/2021 12:14:27 - WARNING - datasets.builder - Reusing dataset wmt16 (/home/stas/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/0d9fb3e814712c785176ad8cdb9f465fbe6479000ee6546725db30ad8a8b5f8a) ``` why is this a warning? it is informing me that the cache is used, there is nothing to be worried about. I'd have it as `info`. Warnings are typically something that's bordering error or the first thing to check when things don't work as expected. infrequent info is there to inform of the different stages or important events. Everything else is debug. At least the way I understand things.
false
928,540,382
https://api.github.com/repos/huggingface/datasets/issues/2542
https://github.com/huggingface/datasets/issues/2542
2,542
`datasets.keyhash.DuplicatedKeysError` for `drop` and `adversarial_qa/adversarialQA`
closed
4
2021-06-23T18:41:16
2021-06-25T21:50:05
2021-06-24T14:57:08
VictorSanh
[ "bug" ]
## Describe the bug Failure to generate the datasets (`drop` and subset `adversarialQA` from `adversarial_qa`) because of duplicate keys. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("drop") load_dataset("adversarial_qa", "adversarialQA") ``` ## Expected results The examples keys should be unique. ## Actual results ```bash >>> load_dataset("drop") Using custom data configuration default Downloading and preparing dataset drop/default (download: 7.92 MiB, generated: 111.88 MiB, post-processed: Unknown size, total: 119.80 MiB) to /home/hf/.cache/huggingface/datasets/drop/default/0.1.0/7a94f1e2bb26c4b5c75f89857c06982967d7416e5af935a9374b9bccf5068026... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset use_auth_token=use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 992, in _prepare_split num_examples, num_bytes = writer.finalize() File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 409, in finalize self.check_duplicate_keys() File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/arrow_writer.py", line 349, in check_duplicate_keys raise DuplicatedKeysError(key) datasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 28553293-d719-441b-8f00-ce3dc6df5398 Keys should be unique and deterministic in nature ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.7.0 - Platform: Linux-5.4.0-1044-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyArrow version: 3.0.0
false
928,529,078
https://api.github.com/repos/huggingface/datasets/issues/2541
https://github.com/huggingface/datasets/pull/2541
2,541
update discofuse link cc @ekQ
closed
1
2021-06-23T18:24:58
2021-06-28T14:34:51
2021-06-28T14:34:50
VictorSanh
[]
Updating the discofuse link: https://github.com/google-research-datasets/discofuse/commit/fd4b120cb3dd19a417e7f3b5432010b574b5eeee
true
928,433,892
https://api.github.com/repos/huggingface/datasets/issues/2540
https://github.com/huggingface/datasets/pull/2540
2,540
Remove task templates if required features are removed during `Dataset.map`
closed
0
2021-06-23T16:20:25
2021-06-24T14:41:15
2021-06-24T13:34:03
lewtun
[]
This PR fixes a bug reported by @craffel where removing a dataset's columns during `Dataset.map` triggered a `KeyError` because the `TextClassification` template tried to access the removed columns during `DatasetInfo.__post_init__`: ```python from datasets import load_dataset # `yelp_polarity` comes with a `TextClassification` template ds = load_dataset("yelp_polarity", split="test") ds # Dataset({ # features: ['text', 'label'], # num_rows: 38000 # }) # Triggers KeyError: 'label' - oh noes! ds.map(lambda x: {"inputs": 0}, remove_columns=ds.column_names) ``` I wrote a unit test to make sure I could reproduce the error and then patched a fix.
true
927,952,429
https://api.github.com/repos/huggingface/datasets/issues/2539
https://github.com/huggingface/datasets/pull/2539
2,539
remove wi_locness dataset due to licensing issues
closed
5
2021-06-23T07:35:32
2021-06-25T14:52:42
2021-06-25T14:52:42
aseifert
[]
It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset.
true
927,940,691
https://api.github.com/repos/huggingface/datasets/issues/2538
https://github.com/huggingface/datasets/issues/2538
2,538
Loading partial dataset when debugging
open
11
2021-06-23T07:19:52
2023-04-19T11:05:38
null
reachtarunhere
[]
I am using PyTorch Lightning along with datasets (thanks for so many datasets already prepared and the great splits). Every time I execute load_dataset for the imdb dataset it takes some time, even if I specify a split involving very few samples. I guess this is due to hashing, as per the other issues. Is there a way to only load part of the dataset on load_dataset? This would really speed up my workflow. Something like a debug mode would really help. Thanks!
false
927,472,659
https://api.github.com/repos/huggingface/datasets/issues/2537
https://github.com/huggingface/datasets/pull/2537
2,537
Add Parquet loader + from_parquet and to_parquet
closed
3
2021-06-22T17:28:23
2021-06-30T16:31:03
2021-06-30T16:30:58
lhoestq
[]
Continuation of #2247 I added a "parquet" dataset builder, as well as the methods `Dataset.from_parquet` and `Dataset.to_parquet`. As usual, the data are converted to arrow in a batched way to avoid loading everything in memory.
true
927,338,639
https://api.github.com/repos/huggingface/datasets/issues/2536
https://github.com/huggingface/datasets/issues/2536
2,536
Use `Audio` features for `AutomaticSpeechRecognition` task template
closed
2
2021-06-22T15:07:21
2022-06-01T17:18:16
2022-06-01T17:18:16
lewtun
[ "enhancement" ]
In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis this is brittle as it doesn't port easily across different OS'. The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are.
false
927,334,349
https://api.github.com/repos/huggingface/datasets/issues/2535
https://github.com/huggingface/datasets/pull/2535
2,535
Improve Features docs
closed
0
2021-06-22T15:03:27
2021-06-23T13:40:43
2021-06-23T13:40:43
albertvillanova
[]
- Fix rendering and cross-references in Features docs - Add docstrings to Features methods
true
927,201,435
https://api.github.com/repos/huggingface/datasets/issues/2534
https://github.com/huggingface/datasets/pull/2534
2,534
Sync with transformers disabling NOTSET
closed
2
2021-06-22T12:54:21
2021-06-24T14:42:47
2021-06-24T14:42:47
albertvillanova
[]
Close #2528.
true
927,193,264
https://api.github.com/repos/huggingface/datasets/issues/2533
https://github.com/huggingface/datasets/pull/2533
2,533
Add task template for automatic speech recognition
closed
2
2021-06-22T12:45:02
2021-06-23T16:14:46
2021-06-23T15:56:57
lewtun
[]
This PR adds a task template for automatic speech recognition. In this task, the input is a path to an audio file which the model consumes to produce a transcription. Usage: ```python from datasets import load_dataset from datasets.tasks import AutomaticSpeechRecognition ds = load_dataset("timit_asr", split="train[:10]") # Dataset({ # features: ['file', 'text', 'phonetic_detail', 'word_detail', 'dialect_region', 'sentence_type', 'speaker_id', 'id'], # num_rows: 10 # }) task = AutomaticSpeechRecognition(audio_file_column="file", transcription_column="text") ds.prepare_for_task(task) # Dataset({ # features: ['audio_file', 'transcription'], # num_rows: 10 # }) ```
true
927,063,196
https://api.github.com/repos/huggingface/datasets/issues/2532
https://github.com/huggingface/datasets/issues/2532
2,532
Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task
closed
2
2021-06-22T10:08:18
2021-06-23T05:17:25
2021-06-23T05:17:25
cosmeowpawlitan
[ "bug" ]
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner). The pipeline works fine with most instance in different languages, but unfortunately, [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) break the alignment of `return_offsets_mapping`: ![image](https://user-images.githubusercontent.com/50871412/122904371-db192700-d382-11eb-8917-1775db76db69.png) Without the try catch block, it riase `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`, example shown here [(another colab notebook)](https://colab.research.google.com/drive/1MmOqf3ppzzdKKyMWkn0bJy6DqzOO0SSm?usp=sharing) It is clear that the normalizer is the process that break the alignment, as it is observed that `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` return 'コト'. One workaround is to include `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) with the name `udposTestDatasetWorkaround`. I guess similar logics should be included inside the tokenizer and the offsets_mapping generation process such that user don't need to include them in their code. But I don't understand the code of tokenizer well that I think I am not able to do this. p.s. **I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)** `get_dataset `is just a simple wrapping for `load_dataset` and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")`
false
927,017,924
https://api.github.com/repos/huggingface/datasets/issues/2531
https://github.com/huggingface/datasets/pull/2531
2,531
Fix dev version
closed
0
2021-06-22T09:17:10
2021-06-22T09:47:10
2021-06-22T09:47:09
lhoestq
[]
The dev version that ends in `.dev0` should be greater than the current version. However, it happens that `1.8.0 > 1.8.0.dev0`, for example. Therefore we need to use `1.8.1.dev0` in this case. I updated the dev version to use `1.8.1.dev0`, and I also added a comment about this in the release steps in setup.py.
true
927,013,773
https://api.github.com/repos/huggingface/datasets/issues/2530
https://github.com/huggingface/datasets/pull/2530
2,530
Fixed label parsing in the ProductReviews dataset
closed
4
2021-06-22T09:12:45
2021-06-22T12:55:20
2021-06-22T12:52:40
yavuzKomecoglu
[]
Fixed issue with parsing dataset labels.
true
926,378,812
https://api.github.com/repos/huggingface/datasets/issues/2529
https://github.com/huggingface/datasets/pull/2529
2,529
Add summarization template
closed
2
2021-06-21T16:08:31
2021-06-23T14:22:11
2021-06-23T13:30:10
lewtun
[]
This PR adds a task template for text summarization. As far as I can tell, we do not need to distinguish between "extractive" or "abstractive" summarization - both can be handled with this template. Usage: ```python from datasets import load_dataset from datasets.tasks import Summarization ds = load_dataset("xsum", split="train") # Dataset({ # features: ['document', 'summary', 'id'], # num_rows: 204045 # }) summarization = Summarization(text_column="document", summary_column="summary") ds.prepare_for_task(summarization) # Dataset({ # features: ['text', 'summary'], # num_rows: 204045 # }) ```
true
926,314,656
https://api.github.com/repos/huggingface/datasets/issues/2528
https://github.com/huggingface/datasets/issues/2528
2,528
Logging cannot be set to NOTSET similar to transformers
closed
1
2021-06-21T15:04:54
2021-06-24T14:42:47
2021-06-24T14:42:47
joshzwiebel
[ "bug" ]
## Describe the bug In the transformers library you can set the verbosity level to logging.NOTSET to work around the usage of tqdm and IPywidgets, however in Datasets this is no longer possible. This is because transformers set the verbosity level of tqdm with [this](https://github.com/huggingface/transformers/blob/b53bc55ba9bb10d5ee279eab51a2f0acc5af2a6b/src/transformers/file_utils.py#L1449) `disable=bool(logging.get_verbosity() == logging.NOTSET)` and datasets accomplishes this like [so](https://github.com/huggingface/datasets/blob/83554e410e1ab8c6f705cfbb2df7953638ad3ac1/src/datasets/utils/file_utils.py#L493) `not_verbose = bool(logger.getEffectiveLevel() > WARNING)` ## Steps to reproduce the bug ```python import datasets import logging datasets.logging.get_verbosity = lambda : logging.NOTSET datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ``` ## Expected results The code should download and load the dataset as normal without displaying progress bars ## Actual results ```ImportError Traceback (most recent call last) <ipython-input-4-aec65c0509c6> in <module> ----> 1 datasets.load_dataset("patrickvonplaten/librispeech_asr_dummy") ~/venv/lib/python3.7/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, **config_kwargs) 713 dataset=True, 714 return_resolved_file_path=True, --> 715 use_auth_token=use_auth_token, 716 ) 717 # Set the base path for downloads as the parent of the script location ~/venv/lib/python3.7/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs) 350 file_path = hf_bucket_url(path, filename=name, dataset=False) 351 try: --> 352 local_path = cached_path(file_path, download_config=download_config) 353 except FileNotFoundError: 354 raise FileNotFoundError( ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 289 use_etag=download_config.use_etag, 290 max_retries=download_config.max_retries, --> 291 use_auth_token=download_config.use_auth_token, 292 ) 293 elif os.path.exists(url_or_filename): ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 668 headers=headers, 669 cookies=cookies, --> 670 max_retries=max_retries, 671 ) 672 ~/venv/lib/python3.7/site-packages/datasets/utils/file_utils.py in http_get(url, temp_file, proxies, resume_size, headers, cookies, timeout, max_retries) 493 initial=resume_size, 494 desc="Downloading", --> 495 disable=not_verbose, 496 ) 497 for chunk in response.iter_content(chunk_size=1024): ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in __init__(self, *args, **kwargs) 217 total = self.total * unit_scale if self.total else self.total 218 self.container = self.status_printer( --> 219 self.fp, total, self.desc, self.ncols) 220 self.sp = self.display 221 ~/venv/lib/python3.7/site-packages/tqdm/notebook.py in status_printer(_, total, desc, ncols) 95 if IProgress is None: # #187 #451 #558 #872 96 raise ImportError( ---> 97 "IProgress not found. Please update jupyter and ipywidgets." 
98 " See https://ipywidgets.readthedocs.io/en/stable" 99 "/user_install.html") ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: Linux-5.4.95-42.163.amzn2.x86_64-x86_64-with-debian-10.8 - Python version: 3.7.10 - PyArrow version: 3.0.0 I am running this code on Deepnote and which important to this issue **does not** support IPywidgets
false
926,031,525
https://api.github.com/repos/huggingface/datasets/issues/2527
https://github.com/huggingface/datasets/pull/2527
2,527
Replace bad `n>1M` size tag
closed
0
2021-06-21T09:42:35
2021-06-21T15:06:50
2021-06-21T15:06:49
lhoestq
[]
Some datasets were still using the old `n>1M` tag, which has been replaced with tags `1M<n<10M`, etc. This led to unexpected results when searching for datasets bigger than 1M on the hub, since it was only showing the ones with the tag `n>1M`.
true
925,929,228
https://api.github.com/repos/huggingface/datasets/issues/2526
https://github.com/huggingface/datasets/issues/2526
2,526
Add COCO datasets
open
17
2021-06-21T07:48:32
2023-06-22T14:12:18
null
NielsRogge
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** COCO - **Description:** COCO is a large-scale object detection, segmentation, and captioning dataset. - **Paper + website:** https://cocodataset.org/#home - **Data:** https://cocodataset.org/#download - **Motivation:** It would be great to have COCO available in HuggingFace datasets, as we are moving beyond just text. COCO includes multi-modalities (images + text), as well as a huge amount of images annotated with objects, segmentation masks, keypoints etc., on which models like DETR (which I recently added to HuggingFace Transformers) are trained. Currently, one needs to download everything from the website and place it in a local folder, but it would be much easier if we can directly access it through the datasets API. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
925,896,358
https://api.github.com/repos/huggingface/datasets/issues/2525
https://github.com/huggingface/datasets/pull/2525
2,525
Use scikit-learn package rather than sklearn in setup.py
closed
0
2021-06-21T07:04:25
2021-06-21T10:01:13
2021-06-21T08:57:33
lesteve
[]
The sklearn package is an historical thing and should probably not be used by anyone; see https://github.com/scikit-learn/scikit-learn/issues/8215#issuecomment-344679114 for some caveats. Note: this affects only TESTS_REQUIRE, so I guess it only concerns developers, not end users.
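For reference, the change amounts to something like the following `setup.py` excerpt (illustrative only; the exact list and any version pin are up to the maintainers):
```python
# setup.py (excerpt, illustrative)
TESTS_REQUIRE = [
    # "sklearn",       # deprecated alias package, shown here only to illustrate what is being replaced
    "scikit-learn",    # the actual distribution name on PyPI
]
```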
true
925,610,934
https://api.github.com/repos/huggingface/datasets/issues/2524
https://github.com/huggingface/datasets/pull/2524
2,524
Raise FileNotFoundError in WindowsFileLock
closed
2
2021-06-20T14:25:11
2021-06-28T09:56:22
2021-06-28T08:47:39
mariosasko
[]
Closes #2443
true
925,421,008
https://api.github.com/repos/huggingface/datasets/issues/2523
https://github.com/huggingface/datasets/issues/2523
2,523
Fr
closed
0
2021-06-19T15:56:32
2021-06-19T18:48:23
2021-06-19T18:48:23
aDrIaNo34500
[]
__Originally posted by @lewtun in https://github.com/huggingface/datasets/pull/2469__
false
925,334,379
https://api.github.com/repos/huggingface/datasets/issues/2522
https://github.com/huggingface/datasets/issues/2522
2,522
Documentation Mistakes in Dataset: emotion
closed
3
2021-06-19T07:08:57
2023-01-02T12:04:58
2023-01-02T12:04:58
GDGauravDutta
[ "bug" ]
As per documentation, Dataset: emotion Homepage: https://github.com/dair-ai/emotion_dataset Dataset: https://github.com/huggingface/datasets/blob/master/datasets/emotion/emotion.py Permalink: https://huggingface.co/datasets/viewer/?dataset=emotion Emotion is a dataset of English Twitter messages with eight basic emotions: anger, anticipation, disgust, fear, joy, sadness, surprise, and trust. For more detailed information please refer to the paper. But when we view the data, there are only 6 emotions: anger, fear, joy, sadness, surprise, and trust.
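A quick way to check which labels the loaded dataset actually carries is to read the names of its `ClassLabel` feature; a minimal sketch, assuming the default config and the train split:
```python
from datasets import load_dataset

# load the train split of the emotion dataset
dataset = load_dataset("emotion", split="train")

# the "label" column is a ClassLabel feature, so its names are the classes the data really contains
print(dataset.features["label"].num_classes)
print(dataset.features["label"].names)
```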
false
925,030,685
https://api.github.com/repos/huggingface/datasets/issues/2521
https://github.com/huggingface/datasets/pull/2521
2,521
Insert text classification template for Emotion dataset
closed
0
2021-06-18T15:56:19
2021-06-21T09:22:31
2021-06-21T09:22:31
lewtun
[]
This PR includes a template and updated `dataset_infos.json` for the `emotion` dataset.
true
925,015,004
https://api.github.com/repos/huggingface/datasets/issues/2520
https://github.com/huggingface/datasets/issues/2520
2,520
Datasets with tricky task templates
closed
1
2021-06-18T15:33:57
2023-07-20T13:20:32
2023-07-20T13:20:32
lewtun
[ "Dataset discussion" ]
I'm collecting a list of datasets here that don't follow the "standard" taxonomy and require further investigation to implement task templates for. ## Text classification * [hatexplain](https://huggingface.co/datasets/hatexplain): ostensibly a form of text classification, but not in the standard `(text, target)` format and each sample appears to be tokenized. * [muchocine](https://huggingface.co/datasets/muchocine): contains two candidate text columns (long-form and summary), which in principle requires two `TextClassification` templates; this is not currently supported
false
924,903,240
https://api.github.com/repos/huggingface/datasets/issues/2519
https://github.com/huggingface/datasets/pull/2519
2,519
Improve performance of pandas arrow extractor
closed
4
2021-06-18T13:24:41
2021-06-21T09:06:06
2021-06-21T09:06:06
albertvillanova
[]
While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster.
true
924,654,100
https://api.github.com/repos/huggingface/datasets/issues/2518
https://github.com/huggingface/datasets/pull/2518
2,518
Add task templates for tydiqa and xquad
closed
1
2021-06-18T08:06:34
2021-06-18T15:01:17
2021-06-18T14:50:33
lewtun
[]
This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub. Notes: * I could not test the tydiqa implementation since I don't have enough disk space 😢. But I am confident the template works :) * there exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested columns described in #2434
true
924,643,345
https://api.github.com/repos/huggingface/datasets/issues/2517
https://github.com/huggingface/datasets/pull/2517
2,517
Fix typo in MatthewsCorrelation class name
closed
0
2021-06-18T07:53:06
2021-06-18T08:43:55
2021-06-18T08:43:55
albertvillanova
[]
Close #2513.
true
924,597,470
https://api.github.com/repos/huggingface/datasets/issues/2516
https://github.com/huggingface/datasets/issues/2516
2,516
datasets.map pickle issue resulting in invalid mapping function
open
7
2021-06-18T06:47:26
2021-06-23T13:47:49
null
david-waterworth
[ "bug" ]
I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__getstate__` / `__setstate__` mechanism. I think it's correct but it fails when I use it inside a function which is mapped to a dataset, i.e. in the manner of run_mlm.py and other huggingface scripts. The following reproduces the issue - most likely I'm missing something. A simulated tokeniser which can be pickled ``` class CustomTokenizer: def __init__(self): self.state = "init" def __getstate__(self): print("__getstate__ called") out = self.__dict__.copy() self.state = "pickled" return out def __setstate__(self, d): print("__setstate__ called") self.__dict__ = d self.state = "restored" tokenizer = CustomTokenizer() ``` Test that it actually works - prints "__getstate__ called" and "__setstate__ called" ``` import pickle serialized = pickle.dumps(tokenizer) restored = pickle.loads(serialized) assert restored.state == "restored" ``` Simulate a function that tokenises examples, when dataset.map is called, this function ``` def tokenize_function(examples): assert tokenizer.state == "restored" # this shouldn't fail but it does output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer return output ``` Use map to simulate tokenization ``` import glob from datasets import load_dataset assert tokenizer.state == "restored" train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) tokenized_datasets = datasets.map( tokenize_function, batched=True, ) ``` What's happening is I can see that __getstate__ is called but not __setstate__, so the state of `tokenize_function` is invalid at the point that it's actually executed. This doesn't matter as far as I can see for the standard tokenizers as they don't use __getstate__ / __setstate__. I'm not sure if there's another hook I'm supposed to implement as well?
--------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-22-a2aef4f74aaa> in <module> 8 tokenized_datasets = datasets.map( 9 tokenize_function, ---> 10 batched=True, 11 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc) 487 desc=desc, 488 ) --> 489 for k, dataset in self.items() 490 } 491 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0) 487 desc=desc, 488 ) --> 489 for k, dataset in self.items() 490 } 491 ) ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1633 fn_kwargs=fn_kwargs, 1634 new_fingerprint=new_fingerprint, -> 1635 desc=desc, 1636 ) 1637 else: ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 184 } 185 # apply actual function --> 186 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 187 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 188 # re-apply format to the output ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc) 1961 indices, 1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0, -> 1963 offset=offset, 1964 ) 1965 except NumExamplesMismatch: ~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset 1854 processed_inputs = ( -> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1856 ) 1857 if update_data is None: <ipython-input-21-8ee4a8ba5b1b> in tokenize_function(examples) 1 def tokenize_function(examples): ----> 2 assert tokenizer.state == "restored" 3 tokenizer(examples) 4 return examples
false
924,435,447
https://api.github.com/repos/huggingface/datasets/issues/2515
https://github.com/huggingface/datasets/pull/2515
2,515
CRD3 dataset card
closed
0
2021-06-18T00:24:07
2021-06-21T10:18:44
2021-06-21T10:18:44
wilsonyhlee
[]
This PR adds additional information to the CRD3 dataset card.
true
924,417,172
https://api.github.com/repos/huggingface/datasets/issues/2514
https://github.com/huggingface/datasets/issues/2514
2,514
Can datasets remove duplicated rows?
open
12
2021-06-17T23:35:38
2024-07-19T13:23:01
null
liuxinglan
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** I find myself more and more relying on datasets just to do all the preprocessing. One thing, however: for removing duplicated rows, I couldn't find out how, and am always converting datasets to pandas to do that. **Describe the solution you'd like** Have a "remove duplicated rows" functionality. **Describe alternatives you've considered** Convert the dataset to pandas, remove duplicates, and convert back... **Additional context** no
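For reference, a minimal sketch of the pandas round-trip workaround described above (assuming a dataset small enough to fit comfortably in pandas; the toy columns are made up):
```python
from datasets import Dataset

# a toy dataset with one duplicated row
dataset = Dataset.from_dict({"text": ["a", "b", "a"], "label": [0, 1, 0]})

# convert to pandas, drop the duplicates, then convert back
df = dataset.to_pandas().drop_duplicates()
deduplicated = Dataset.from_pandas(df, preserve_index=False)

print(len(dataset), "->", len(deduplicated))  # 3 -> 2
```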
false
924,174,413
https://api.github.com/repos/huggingface/datasets/issues/2513
https://github.com/huggingface/datasets/issues/2513
2,513
Corelation should be Correlation
closed
1
2021-06-17T17:28:48
2021-06-18T08:43:55
2021-06-18T08:43:55
colbym-MM
[]
https://github.com/huggingface/datasets/blob/0e87e1d053220e8ecddfa679bcd89a4c7bc5af62/metrics/matthews_correlation/matthews_correlation.py#L66
false
924,069,353
https://api.github.com/repos/huggingface/datasets/issues/2512
https://github.com/huggingface/datasets/issues/2512
2,512
seqeval metric does not work with a recent version of sklearn: classification_report() got an unexpected keyword argument 'output_dict'
closed
1
2021-06-17T15:36:02
2021-06-17T15:46:07
2021-06-17T15:46:07
avidale
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric seqeval = load_metric("seqeval") seqeval.compute(predictions=[['A']], references=[['A']]) ``` ## Expected results The function computes a dict with metrics ## Actual results ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-39-69a57f5cf06f> in <module> 1 from datasets import load_dataset, load_metric 2 seqeval = load_metric("seqeval") ----> 3 seqeval.compute(predictions=[['A']], references=[['A']]) ~/p3/lib/python3.7/site-packages/datasets/metric.py in compute(self, *args, **kwargs) 396 references = self.data["references"] 397 with temp_seed(self.seed): --> 398 output = self._compute(predictions=predictions, references=references, **kwargs) 399 400 if self.buf_writer is not None: ~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/81eda1ff004361d4fa48754a446ec69bb7aa9cf4d14c7215f407d1475941c5ff/seqeval.py in _compute(self, predictions, references, suffix) 95 96 def _compute(self, predictions, references, suffix=False): ---> 97 report = classification_report(y_true=references, y_pred=predictions, suffix=suffix, output_dict=True) 98 report.pop("macro avg") 99 report.pop("weighted avg") TypeError: classification_report() got an unexpected keyword argument 'output_dict' ``` ## Environment info sklearn=0.24 datasets=1.1.3
false
923,762,133
https://api.github.com/repos/huggingface/datasets/issues/2511
https://github.com/huggingface/datasets/issues/2511
2,511
Add C4
closed
2
2021-06-17T10:31:04
2021-07-05T12:36:58
2021-07-05T12:36:57
lhoestq
[ "dataset request" ]
## Adding a Dataset - **Name:** *C4* - **Description:** *https://github.com/allenai/allennlp/discussions/5056* - **Paper:** *https://arxiv.org/abs/1910.10683* - **Data:** *https://huggingface.co/datasets/allenai/c4* - **Motivation:** *Used a lot for pretraining* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Should fix https://github.com/huggingface/datasets/issues/1710
false
923,735,485
https://api.github.com/repos/huggingface/datasets/issues/2510
https://github.com/huggingface/datasets/pull/2510
2,510
Add align_labels_with_mapping to DatasetDict
closed
0
2021-06-17T10:03:35
2021-06-17T10:45:25
2021-06-17T10:45:24
lhoestq
[]
https://github.com/huggingface/datasets/pull/2457 added the `Dataset.align_labels_with_mapping` method. In this PR I also added `DatasetDict.align_labels_with_mapping`
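For illustration, a hedged usage sketch of the `DatasetDict` version (the MNLI example and label mapping below are illustrative assumptions):
```python
from datasets import load_dataset

# mapping used by some model config that the dataset labels should follow
label2id = {"entailment": 0, "neutral": 1, "contradiction": 2}

dset_dict = load_dataset("glue", "mnli")

# with this PR, the alignment is applied to every split at once
aligned = dset_dict.align_labels_with_mapping(label2id, "label")

print(aligned["train"].features["label"].names)
```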
true
922,846,035
https://api.github.com/repos/huggingface/datasets/issues/2509
https://github.com/huggingface/datasets/pull/2509
2,509
Fix fingerprint when moving cache dir
closed
4
2021-06-16T16:45:09
2021-06-21T15:05:04
2021-06-21T15:05:03
lhoestq
[]
The fingerprint of a dataset changes if the cache directory is moved. I fixed that by setting the fingerprint to be the hash of: - the relative cache dir (dataset_name/version/config_id) - the requested split Close #2496 I had to fix an issue with the filelock filename that was too long (>255). It prevented the tests from running on my machine. I just added `hash_filename_if_too_long` in case this happens, to not get filenames longer than 255. We usually have long filenames for filelocks because they are named after the path that is being locked. In case the path is a cache directory that has long directory names, then the filelock filename could end up being very long.
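For illustration, a rough sketch of what such a helper can look like (the actual implementation in this PR may differ; the 255-character limit and the hashing scheme below are assumptions):
```python
import os
from hashlib import sha256

MAX_FILENAME_LENGTH = 255  # typical filesystem limit for a single path component


def hash_filename_if_too_long(path: str) -> str:
    """If the filename part of `path` exceeds the limit, replace it with a fixed-length hash."""
    directory, filename = os.path.split(path)
    if len(filename) <= MAX_FILENAME_LENGTH:
        return path
    hashed_filename = sha256(filename.encode("utf-8")).hexdigest() + ".lock"
    return os.path.join(directory, hashed_filename)
```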
true
921,863,173
https://api.github.com/repos/huggingface/datasets/issues/2508
https://github.com/huggingface/datasets/issues/2508
2,508
Load Image Classification Dataset from Local
closed
5
2021-06-15T22:43:33
2022-03-01T16:29:44
2022-03-01T16:29:44
Jacobsolawetz
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Yes - we would like to load an image classification dataset with datasets without having to write a custom data loader. **Describe the solution you'd like** Given a folder structure with images of each class in each folder, the ability to load these folders into a HuggingFace dataset like "cifar10". **Describe alternatives you've considered** Implement ViT training outside of the HuggingFace Trainer and without datasets (we did this but prefer to stay on the main path), or write custom data loader logic. **Additional context** We're training ViT on a custom dataset.
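In the meantime, a minimal sketch of building such a dataset by hand from a class-per-folder layout (the directory layout, column names, and the choice to store file paths rather than decoded pixels are all assumptions):
```python
import os
from datasets import ClassLabel, Dataset, Features, Value

data_dir = "path/to/images"  # one sub-folder per class, e.g. path/to/images/cat/xxx.png

class_names = sorted(os.listdir(data_dir))
paths, labels = [], []
for label, class_name in enumerate(class_names):
    class_dir = os.path.join(data_dir, class_name)
    for filename in sorted(os.listdir(class_dir)):
        paths.append(os.path.join(class_dir, filename))
        labels.append(label)

features = Features({"image_path": Value("string"), "label": ClassLabel(names=class_names)})
dataset = Dataset.from_dict({"image_path": paths, "label": labels}, features=features)
print(dataset)
```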
false
921,441,962
https://api.github.com/repos/huggingface/datasets/issues/2507
https://github.com/huggingface/datasets/pull/2507
2,507
Rearrange JSON field names to match passed features schema field names
closed
0
2021-06-15T14:10:02
2021-06-16T10:47:49
2021-06-16T10:47:49
albertvillanova
[]
This PR depends on PR #2453 (which must be merged first). Close #2366.
true
921,435,598
https://api.github.com/repos/huggingface/datasets/issues/2506
https://github.com/huggingface/datasets/pull/2506
2,506
Add course banner
closed
0
2021-06-15T14:03:54
2021-06-15T16:25:36
2021-06-15T16:25:35
sgugger
[]
This PR adds a course banner similar to the one you can now see in the [Transformers repo](https://github.com/huggingface/transformers) that links to the course. Let me know if placement seems right to you or not, I can move it just below the badges too.
true
921,234,797
https://api.github.com/repos/huggingface/datasets/issues/2505
https://github.com/huggingface/datasets/pull/2505
2,505
Make numpy arrow extractor faster
closed
5
2021-06-15T10:11:32
2021-06-28T09:53:39
2021-06-28T09:53:38
lhoestq
[]
I changed the NumpyArrowExtractor to call to_numpy directly and see if it can lead to speed-ups, as discussed in https://github.com/huggingface/datasets/issues/2498. This could make the numpy/torch/tf/jax formatting faster.
true
920,636,186
https://api.github.com/repos/huggingface/datasets/issues/2503
https://github.com/huggingface/datasets/issues/2503
2,503
SubjQA wrong boolean values in entries
open
4
2021-06-14T17:42:46
2021-08-25T03:52:06
null
arnaudstiegler
[ "bug" ]
## Describe the bug SubjQA seems to have a boolean that's consistently wrong. It defines: - question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective). - is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are considered as subjective) However, `is_ques_subjective` seems to have wrong values in the entire dataset. For instance, in the example in the dataset card, we have: - "question_subj_level": 2 - "is_ques_subjective": false However, according to the description, the question should be subjective since the `question_subj_level` is below 4.
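A small sketch to quantify the mismatch across a whole config (assuming the `books` config; the threshold of 4 follows the rule quoted above):
```python
from datasets import load_dataset

dataset = load_dataset("subjqa", "books", split="train")

# count rows where the boolean disagrees with the documented rule: subjective iff level < 4
mismatches = dataset.filter(
    lambda example: example["is_ques_subjective"] != (example["question_subj_level"] < 4)
)
print(f"{len(mismatches)} / {len(dataset)} rows disagree with the documented rule")
```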
false
920,623,572
https://api.github.com/repos/huggingface/datasets/issues/2502
https://github.com/huggingface/datasets/pull/2502
2,502
JAX integration
closed
0
2021-06-14T17:24:23
2021-06-21T16:15:50
2021-06-21T16:15:49
lhoestq
[]
Hi ! I just added the "jax" formatting, as we already have for pytorch, tensorflow, numpy (and also pandas and arrow). It does pretty much the same thing as the pytorch formatter except it creates jax.numpy.ndarray objects. ```python from datasets import Dataset d = Dataset.from_dict({"foo": [[0., 1., 2.]]}) d = d.with_format("jax") d[0] # {'foo': DeviceArray([0., 1., 2.], dtype=float32)} ``` A few details: - The default integer precision for jax depends on the jax configuration `jax_enable_x64` (see [here](https://jax.readthedocs.io/en/latest/notebooks/Common_Gotchas_in_JAX.html#double-64bit-precision)), I took that into account. Unless `jax_enable_x64` is specified, it is int32 by default - AFAIK it's not possible to do a full conversion from arrow data to jax data. We are doing arrow -> numpy -> jax but the numpy -> jax part doesn't do zero copy unfortunately (see [here](https://github.com/google/jax/issues/4486)) - the env var for disabling JAX is `USE_JAX`. However I noticed that in `transformers` it is `USE_FLAX`. This is not an issue though IMO I also updated `convert_to_python_objects` to allow users to pass jax.numpy.ndarray objects to build a dataset. Since the `convert_to_python_objects` method became slow because it's the time when pytorch, tf (and now jax) are imported, I fixed it by checking the `sys.modules` to avoid unnecessary import of pytorch, tf or jax. Close #2495
true
920,579,634
https://api.github.com/repos/huggingface/datasets/issues/2501
https://github.com/huggingface/datasets/pull/2501
2,501
Add Zenodo metadata file with license
closed
0
2021-06-14T16:28:12
2021-06-14T16:49:42
2021-06-14T16:49:42
albertvillanova
[]
This Zenodo metadata file sets the `Datasets` license appearing in the DOI record to `"Apache-2.0"`, which otherwise defaults to `"other-open"`. Close #2472.
true
920,471,411
https://api.github.com/repos/huggingface/datasets/issues/2500
https://github.com/huggingface/datasets/pull/2500
2,500
Add load_dataset_builder
closed
6
2021-06-14T14:27:45
2025-06-20T18:07:24
2021-07-05T10:45:58
mariosasko
[]
Adds the `load_dataset_builder` function. The good thing is that we can reuse this function to load the dataset info without downloading the dataset itself. TODOs: - [x] Add docstring and entry in the docs - [x] Add tests Closes #2484
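For reference, a usage sketch of the new function for inspecting dataset metadata without triggering a download (the dataset name is just an example):
```python
from datasets import load_dataset_builder

# resolves the dataset script and its info, but downloads no data
builder = load_dataset_builder("squad")

print(builder.info.description)
print(builder.info.features)
print(builder.info.splits)
```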
true
920,413,021
https://api.github.com/repos/huggingface/datasets/issues/2499
https://github.com/huggingface/datasets/issues/2499
2,499
Python Programming Puzzles
open
2
2021-06-14T13:27:18
2021-06-15T18:14:14
null
VictorSanh
[ "dataset request" ]
## Adding a Dataset - **Name:** Python Programming Puzzles - **Description:** Programming challenge called programming puzzles, as an objective and comprehensive evaluation of program synthesis - **Paper:** https://arxiv.org/pdf/2106.05784.pdf - **Data:** https://github.com/microsoft/PythonProgrammingPuzzles ([Scrolling through the data](https://github.com/microsoft/PythonProgrammingPuzzles/blob/main/problems/README.md)) - **Motivation:** Spans a large range of difficulty, problems, and domains. A useful resource for evaluation as we don't have a clear understanding of the abilities and skills of extremely large LMs. Note: it's a growing dataset (contributions are welcome), so we'll need careful versioning for this dataset. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
920,411,285
https://api.github.com/repos/huggingface/datasets/issues/2498
https://github.com/huggingface/datasets/issues/2498
2,498
Improve torch formatting performance
open
17
2021-06-14T13:25:24
2022-07-15T17:12:04
null
vblagoje
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** It would be great, if possible, to further improve read performance of raw encoded datasets and their subsequent conversion to torch tensors. A bit more background. I am working on LM pre-training using HF ecosystem. We use encoded HF Wikipedia and BookCorpus datasets. The training machines are similar to DGX-1 workstations. We use HF trainer torch.distributed training approach on a single machine with 8 GPUs. The current performance is about 30% slower than NVidia optimized BERT [examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling) baseline. Quite a bit of customized code and training loop tricks were used to achieve the baseline performance. It would be great to achieve the same performance while using nothing more than off the shelf HF ecosystem. Perhaps, in the future, with @stas00 work on deepspeed integration, it could even be exceeded. **Describe the solution you'd like** Using profiling tools we've observed that appx. 25% of cumulative run time is spent on data loader next call. ![dataloader_next](https://user-images.githubusercontent.com/458335/121895543-59742a00-ccee-11eb-85fb-f07715e3f1f6.png) As you can observe most of the data loader next call is spent in HF datasets torch_formatter.py format_batch call. Digging a bit deeper into format_batch we can see the following profiler data: ![torch_formatter](https://user-images.githubusercontent.com/458335/121895944-c7b8ec80-ccee-11eb-95d5-5875c5716c30.png) Once again, a lot of time is spent in pyarrow table conversion to pandas which seems like an intermediary step. Offline @lhoestq told me that this approach was, for some unknown reason, faster than direct to numpy conversion. **Describe alternatives you've considered** I am not familiar with pyarrow and have not yet considered the alternatives to the current approach. Most of the online advice around data loader performance improvements revolve around increasing number of workers, using pin memory for copying tensors from host device to gpus but we've already tried these avenues without much performance improvement. Weights & Biases dashboard for the pre-training task reports CPU utilization of ~ 10%, GPUs are completely saturated (GPU utilization is above 95% on all GPUs), while disk utilization is above 90%.
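To make the intermediate step concrete, a rough micro-benchmark sketch of the two extraction routes discussed above (column name and sizes are made up; this is not the library's internal code):
```python
import time

import numpy as np
import pyarrow as pa

# a toy table shaped roughly like tokenized LM data: 10k rows of 128 token ids
table = pa.table({"input_ids": [np.random.randint(0, 30000, 128).tolist() for _ in range(10_000)]})

start = time.perf_counter()
_ = table.to_pandas()["input_ids"].to_numpy()  # current route: arrow -> pandas -> numpy
print("via pandas:", time.perf_counter() - start)

start = time.perf_counter()
_ = table["input_ids"].to_numpy()  # candidate route: arrow -> numpy directly
print("direct to_numpy:", time.perf_counter() - start)
```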
false
920,250,382
https://api.github.com/repos/huggingface/datasets/issues/2497
https://github.com/huggingface/datasets/pull/2497
2,497
Use default cast for sliced list arrays if pyarrow >= 4
closed
2
2021-06-14T10:02:47
2021-06-15T18:06:18
2021-06-14T14:24:37
albertvillanova
[]
From pyarrow version 4 on, casting sliced lists is supported. This PR uses the default pyarrow cast in Datasets to cast sliced list arrays if the pyarrow version is >= 4. In relation with PRs #2461 and #2490. cc: @lhoestq, @abhi1thakur, @SBrandeis
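A small illustration of the behavior this PR relies on (the fallback branch is only a placeholder for the custom cast, not this PR's actual code):
```python
import pyarrow as pa
from packaging import version

arr = pa.array([[1, 2], [3], [4, 5, 6]])
sliced = arr.slice(1)  # a sliced ListArray, the case that used to need a workaround

if version.parse(pa.__version__) >= version.parse("4.0.0"):
    # from pyarrow 4 on, the built-in cast handles sliced list arrays
    print(sliced.cast(pa.list_(pa.float32())))
else:
    # on older pyarrow, a custom slicing-aware cast is still needed
    print("pyarrow < 4: fall back to the custom cast")
```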
true
920,216,314
https://api.github.com/repos/huggingface/datasets/issues/2496
https://github.com/huggingface/datasets/issues/2496
2,496
Dataset fingerprint changes after moving the cache directory, which prevent cache reload when using `map`
closed
0
2021-06-14T09:20:26
2021-06-21T15:05:03
2021-06-21T15:05:03
lhoestq
[]
`Dataset.map` uses the dataset fingerprint (a hash) for caching. However, the fingerprint seems to change when someone moves the cache directory of the dataset. This is because it uses the default fingerprint generation: 1. the dataset path is used to get the fingerprint 2. the modification times of the arrow file are also used to get the fingerprint To fix that we could set the fingerprint of the dataset to be a hash of (<dataset_name>, <config_name>, <version>, <script_hash>), i.e. a hash of the cache path relative to the cache directory.
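A minimal sketch of the proposed scheme (the exact fields hashed by the eventual fix may differ):
```python
import os
from hashlib import md5


def fingerprint_from_relative_cache_path(cache_file: str, cache_dir: str, split: str) -> str:
    """Hash the cache path relative to the cache directory, so moving the cache dir keeps the fingerprint stable."""
    relative_path = os.path.relpath(cache_file, cache_dir)  # e.g. dataset_name/config_name/version/data.arrow
    return md5(f"{relative_path}-{split}".encode("utf-8")).hexdigest()


# both calls print the same fingerprint even though the absolute cache location changed
print(fingerprint_from_relative_cache_path("/cache/squad/plain_text/1.0.0/data.arrow", "/cache", "train"))
print(fingerprint_from_relative_cache_path("/moved/squad/plain_text/1.0.0/data.arrow", "/moved", "train"))
```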
false
920,170,030
https://api.github.com/repos/huggingface/datasets/issues/2495
https://github.com/huggingface/datasets/issues/2495
2,495
JAX formatting
closed
0
2021-06-14T08:32:07
2021-06-21T16:15:49
2021-06-21T16:15:49
lhoestq
[]
We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well
false
920,149,183
https://api.github.com/repos/huggingface/datasets/issues/2494
https://github.com/huggingface/datasets/issues/2494
2,494
Improve docs on Enhancing performance
open
2
2021-06-14T08:11:48
2025-06-28T18:55:38
null
albertvillanova
[ "documentation" ]
In the ["Enhancing performance"](https://huggingface.co/docs/datasets/loading_datasets.html#enhancing-performance) section of docs, add specific use cases: - How to make datasets the fastest - How to make datasets take the less RAM - How to make datasets take the less hard drive mem cc: @thomwolf
false
919,833,281
https://api.github.com/repos/huggingface/datasets/issues/2493
https://github.com/huggingface/datasets/pull/2493
2,493
add tensorflow-macos support
closed
1
2021-06-13T16:20:08
2021-06-15T08:53:06
2021-06-15T08:53:06
slayerjain
[]
ref - https://github.com/huggingface/datasets/issues/2068
true
919,718,102
https://api.github.com/repos/huggingface/datasets/issues/2492
https://github.com/huggingface/datasets/pull/2492
2,492
Eduge
closed
0
2021-06-13T05:10:59
2021-06-22T09:49:04
2021-06-16T10:41:46
enod
[]
Hi, awesome folks behind the huggingface! Here is my PR for the text classification dataset in Mongolian. Please do let me know in case you have anything to clarify. Thanks & Regards, Enod
true
919,714,506
https://api.github.com/repos/huggingface/datasets/issues/2491
https://github.com/huggingface/datasets/pull/2491
2,491
add eduge classification dataset
closed
1
2021-06-13T04:37:01
2021-06-13T05:06:48
2021-06-13T05:06:38
enod
[]
true
919,571,385
https://api.github.com/repos/huggingface/datasets/issues/2490
https://github.com/huggingface/datasets/pull/2490
2,490
Allow latest pyarrow version
closed
1
2021-06-12T14:17:34
2021-07-06T16:54:52
2021-06-14T07:53:23
albertvillanova
[]
Allow the latest pyarrow version, now that version 4.0.1 fixes the segfault bug introduced in version 4.0.0. Close #2489.
true
919,569,749
https://api.github.com/repos/huggingface/datasets/issues/2489
https://github.com/huggingface/datasets/issues/2489
2,489
Allow latest pyarrow version once segfault bug is fixed
closed
0
2021-06-12T14:09:52
2021-06-14T07:53:23
2021-06-14T07:53:23
albertvillanova
[ "enhancement" ]
As pointed out by @symeneses (see https://github.com/huggingface/datasets/pull/2268#issuecomment-860048613), pyarrow has fixed the segfault bug present in version 4.0.0 (see https://issues.apache.org/jira/browse/ARROW-12568): - it was fixed on 3 May 2021 - version 4.0.1 was released on 19 May 2021 with the bug fix
false
919,500,756
https://api.github.com/repos/huggingface/datasets/issues/2488
https://github.com/huggingface/datasets/pull/2488
2,488
Set configurable downloaded datasets path
closed
0
2021-06-12T09:09:03
2021-06-14T09:13:27
2021-06-14T08:29:07
albertvillanova
[]
Part of #2480.
true
919,452,407
https://api.github.com/repos/huggingface/datasets/issues/2487
https://github.com/huggingface/datasets/pull/2487
2,487
Set configurable extracted datasets path
closed
2
2021-06-12T05:47:29
2021-06-14T09:30:17
2021-06-14T09:02:56
albertvillanova
[]
Part of #2480.
true
919,174,898
https://api.github.com/repos/huggingface/datasets/issues/2486
https://github.com/huggingface/datasets/pull/2486
2,486
Add Rico Dataset
closed
2
2021-06-11T20:17:41
2022-10-03T09:38:18
2022-10-03T09:38:18
ncoop57
[ "dataset contribution" ]
Hi there! I'm wanting to add the Rico datasets for software engineering type data to y'alls awesome library. However, as I have started coding, I've run into a few hiccups so I thought it best to open the PR early to get a bit of discussion on how the Rico datasets should be added to the `datasets` lib. 1) There are 7 different datasets under Rico and so I was wondering, should I make a folder for each or should I put them as different configurations of a single dataset? You can see the datasets available for Rico here: http://interactionmining.org/rico 2) As of right now, I have a semi-working version of the first dataset which has pairs of screenshots and hierarchies from android applications. However, these screenshots are very large (1440, 2560, 3) and there are 66,000 of them so I am not able to perform the processing that the `datasets` lib does after downloading and extracting the dataset since I run out of memory very fast. Is there a way to have `datasets` lib not put everything into memory while it is processing the dataset? 2.1) If there is not a way, would it be better to just return the path to the screenshots instead of the actual image? 3) The hierarchies are JSON objects and looking through the documentation of `datasets`, I didn't see any feature that I could use for this type of data. So, for now I just have it being read in as a string, is this okay or should I be doing it differently? 4) One of the Rico datasets is a bunch of animations (GIFs), is there a `datasets` feature that I can put this type of data into or should I just return the path as a string? I appreciate any and all help I can get for this PR, I think the Rico datasets will be an awesome addition to the library :nerd_face: !
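Regarding points 2 and 2.1, a hedged sketch of yielding file paths lazily from `_generate_examples`, so the full set of screenshots is never decoded or held in memory at once (class name, URL, file layout, and feature names are all illustrative, not the final loader):
```python
import json
import os

import datasets


class RicoScreenshotsAndHierarchies(datasets.GeneratorBasedBuilder):
    """Illustrative builder that stores screenshot paths instead of decoded images."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "screenshot_path": datasets.Value("string"),  # path on disk, not pixel data
                    "hierarchy": datasets.Value("string"),        # JSON serialized as a string for now
                }
            )
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract("https://example.com/rico.tar.gz")  # placeholder URL
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"data_dir": data_dir})]

    def _generate_examples(self, data_dir):
        # examples are yielded one by one, so the 66k screenshots are never all in memory
        for idx, filename in enumerate(sorted(f for f in os.listdir(data_dir) if f.endswith(".json"))):
            with open(os.path.join(data_dir, filename), encoding="utf-8") as f:
                hierarchy = json.dumps(json.load(f))
            screenshot_path = os.path.join(data_dir, filename.replace(".json", ".jpg"))
            yield idx, {"screenshot_path": screenshot_path, "hierarchy": hierarchy}
```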
true