| Column | Type | Values |
|---|---|---|
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string (classes) | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45, nullable |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k, nullable |
| is_pull_request | bool | 2 classes |
1,073,603,508
https://api.github.com/repos/huggingface/datasets/issues/3401
https://github.com/huggingface/datasets/issues/3401
3,401
Add Wikimedia pre-processed datasets
closed
1
2021-12-07T17:33:19
2024-10-09T16:10:47
2024-10-09T16:10:47
albertvillanova
[ "dataset request" ]
## Adding a Dataset - **Name:** Add pre-processed data to: - *wikimedia/wikipedia*: https://huggingface.co/datasets/wikimedia/wikipedia - *wikimedia/wikisource*: https://huggingface.co/datasets/wikimedia/wikisource - **Description:** Add pre-processed data to the Hub for all languages - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** This will be very useful for the NLP community, as the pre-processing has a high cost for a lot of researchers (both in computation and in knowledge). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). CC: @geohci, @yjernite
false
1,073,600,382
https://api.github.com/repos/huggingface/datasets/issues/3400
https://github.com/huggingface/datasets/issues/3400
3,400
Improve Wikipedia loading script
closed
2
2021-12-07T17:29:25
2022-03-22T16:52:28
2022-03-22T16:52:28
albertvillanova
[ "dataset request" ]
As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions: - _extract_content(filepath): - Replace .startswith("#redirect") with a more structured approach: if elem.find(f"./{namespace}redirect") is None: continue - _parse_and_clean_wikicode(raw_content, parser): - Remove rm_template from cleaning -- this is redundant with .strip_code() from mwparserfromhell - Build a language-specific list of namespace prefixes to filter out per below get_namespace_prefixes - Optional: strip prefixes like categories -- e.g., Category:Towns in Tianjin becomes Towns in Tianjin - Optional: strip magic words
false
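The structured redirect check suggested in the issue above can be illustrated with a short sketch using `xml.etree.ElementTree`. This is a hypothetical illustration, not the actual `wikipedia` loading script; the namespace URI and the `iter_articles` helper are assumptions.

```python
import xml.etree.ElementTree as ET

# Assumed MediaWiki export namespace; real dumps declare it on the root element.
NS = "{http://www.mediawiki.org/xml/export-0.10/}"

def iter_articles(filepath):
    """Yield (title, wikitext) for non-redirect pages of an XML dump."""
    for _, elem in ET.iterparse(filepath, events=("end",)):
        if elem.tag != f"{NS}page":
            continue
        # Structured check: a <redirect> child element marks a redirect page,
        # instead of matching the raw text against "#redirect".
        if elem.find(f"./{NS}redirect") is not None:
            elem.clear()
            continue
        title = elem.findtext(f"./{NS}title")
        text = elem.findtext(f"./{NS}revision/{NS}text") or ""
        yield title, text
        elem.clear()
```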
1,073,593,861
https://api.github.com/repos/huggingface/datasets/issues/3399
https://github.com/huggingface/datasets/issues/3399
3,399
Add Wikisource dataset
closed
2
2021-12-07T17:21:31
2024-10-09T16:11:27
2024-10-09T16:11:26
albertvillanova
[ "dataset request" ]
## Adding a Dataset - **Name:** *wikisource* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** Additional high quality textual data, besides Wikipedia. Add a loading script as a "canonical" dataset (as is the case for "wikipedia"). Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). CC: @geohci, @yjernite
false
1,073,590,384
https://api.github.com/repos/huggingface/datasets/issues/3398
https://github.com/huggingface/datasets/issues/3398
3,398
Add URL field to Wikimedia dataset instances: wikipedia,...
closed
5
2021-12-07T17:17:27
2022-03-22T16:53:27
2022-03-22T16:53:27
albertvillanova
[ "dataset request" ]
As reported by @geohci, in order to host pre-processed data on the Hub, we should add the full URL to data instances (new field "url"), so that we conform to the attribution requirement of the license. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2 This should be done for all pre-processed datasets under the "wikimedia" org on the Hub: https://huggingface.co/wikimedia
false
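As an illustration only (the field layout of the final pre-processed datasets may differ), the full article URL could be derived from the language code and page title during preprocessing; the `add_url` helper and the `title` column below are assumptions.

```python
from urllib.parse import quote

def add_url(example, lang="en"):
    # Hypothetical helper: build the canonical article URL from the page title.
    title = example["title"].replace(" ", "_")
    example["url"] = f"https://{lang}.wikipedia.org/wiki/{quote(title)}"
    return example

# e.g. ds = ds.map(add_url)  # assuming the dataset exposes a "title" column
```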
1,073,502,444
https://api.github.com/repos/huggingface/datasets/issues/3397
https://github.com/huggingface/datasets/pull/3397
3,397
add BNL newspapers
closed
9
2021-12-07T15:43:21
2022-01-17T18:35:34
2022-01-17T18:35:34
davanstrien
[]
This pull request adds the BNL's [processed newspaper collections](https://data.bnl.lu/data/historical-newspapers/) as a dataset. This is partly done to support BigScience; see: https://github.com/bigscience-workshop/data_tooling/issues/192. The data card is sparser than I would like, but I plan to make a separate pull request to make it more complete at a later date. I had to manually add the `dummy_data`, but I believe I've done this correctly (the tests pass locally).
true
1,073,467,183
https://api.github.com/repos/huggingface/datasets/issues/3396
https://github.com/huggingface/datasets/issues/3396
3,396
Install Audio dependencies to support audio decoding
closed
5
2021-12-07T15:11:36
2022-04-25T16:12:22
2022-04-25T16:12:01
albertvillanova
[ "dataset-viewer", "audio_column" ]
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*' **Link:** *https://huggingface.co/datasets/openslr* **Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla* Error: ``` Status code: 400 Exception: ImportError Message: To support decoding audio files, please install 'librosa'. ``` Am I the one who added this dataset ? Yes-No - openslr: No - projecte-aina/parlament_parla: Yes
false
1,073,432,650
https://api.github.com/repos/huggingface/datasets/issues/3395
https://github.com/huggingface/datasets/pull/3395
3,395
Fix formatting in IterableDataset.map docs
closed
0
2021-12-07T14:41:01
2021-12-08T10:11:33
2021-12-08T10:11:33
mariosasko
[]
Fix formatting in the recently added `Map` section of the streaming docs.
true
1,073,396,308
https://api.github.com/repos/huggingface/datasets/issues/3394
https://github.com/huggingface/datasets/issues/3394
3,394
Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
closed
2
2021-12-07T14:08:30
2021-12-21T17:00:09
2021-12-21T17:00:09
mariosasko
[ "bug" ]
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parquet files in `push_to_hub`, but also the dataset `info` (stored in a JSON file).
false
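A minimal sketch of the round trip reported above (the repo name is hypothetical); at the time of the issue, the `label` feature came back as `Value` instead of `ClassLabel`:

```python
from datasets import ClassLabel, Dataset, Features, Value, load_dataset

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
ds = Dataset.from_dict({"text": ["bad", "good"], "label": [0, 1]}, features=features)

ds.push_to_hub("username/classlabel-demo")  # hypothetical repo id
reloaded = load_dataset("username/classlabel-demo", split="train")
print(reloaded.features["label"])  # reported to come back as Value instead of ClassLabel
```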
1,073,189,777
https://api.github.com/repos/huggingface/datasets/issues/3393
https://github.com/huggingface/datasets/issues/3393
3,393
Common Voice Belarusian Dataset
open
0
2021-12-07T10:37:02
2021-12-09T15:56:03
null
wiedymi
[ "dataset request", "speech" ]
## Adding a Dataset - **Name:** *Common Voice Belarusian Dataset* - **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)* - **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)* - **Motivation:** *It has more than 7GB of data, so it would be great to have it in this package so anyone can try to train something for the Belarusian language.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,073,073,408
https://api.github.com/repos/huggingface/datasets/issues/3392
https://github.com/huggingface/datasets/issues/3392
3,392
Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
closed
1
2021-12-07T08:41:01
2021-12-07T14:04:28
2021-12-07T14:04:28
severo
[ "dataset-viewer" ]
## Dataset viewer issue for `dansbecker/hackernews_hiring_posts` **Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts *short description of the issue* Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603 Am I the one who added this dataset ? No -> @dansbecker
false
1,072,849,055
https://api.github.com/repos/huggingface/datasets/issues/3391
https://github.com/huggingface/datasets/issues/3391
3,391
method to select columns
closed
1
2021-12-07T02:44:19
2021-12-07T02:45:27
2021-12-07T02:45:27
changjonathanc
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** * There is currently no way to select some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets` it results in an error. **Describe the solution you'd like** * A new method that can be used to create a new dataset with only a list of specified columns. **Describe alternatives you've considered** `.remove_columns(self, columns: Union[str, List[str]], inverse: bool = False)` Or `.select(self, indices: Iterable = None, columns: List[str] = None)`
false
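Until a dedicated method exists, a common workaround is to emulate column selection with `remove_columns`; a minimal sketch (the dataset name is only an example):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")  # any dataset with several columns
keep = ["text", "label"]
selected = ds.remove_columns([c for c in ds.column_names if c not in keep])
print(selected.column_names)  # ['text', 'label']
```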
1,072,462,456
https://api.github.com/repos/huggingface/datasets/issues/3390
https://github.com/huggingface/datasets/issues/3390
3,390
Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'"
closed
1
2021-12-06T18:22:49
2021-12-06T20:22:05
2021-12-06T20:22:05
R4ZZ3
[ "bug" ]
## Describe the bug I have prepared dataset to datasets and now I am trying to load it back Finnish-NLP/voxpopuli_fi I get "KeyError: 'Field "builder_name" does not exist in table schema'" My dataset folder and files should be like @patrickvonplaten has here https://huggingface.co/datasets/flax-community/german-common-voice-processed How my voxpopuli dataset looks like: ![image](https://user-images.githubusercontent.com/25264037/144895598-b7d9ae91-b04a-4046-9f06-b71ff0824d13.png) Part of the processing (path column is the absolute path to audio files) ``` def add_audio_column(example): example['audio'] = example['path'] return example voxpopuli = voxpopuli.map(add_audio_column) voxpopuli.cast_column("audio", Audio()) voxpopuli["audio"] <-- to my knowledge this does load the local files and prepares those arrays voxpopuli = voxpopuli.cast_column("audio", Audio(sampling_rate=16_000)) resampling 16kHz ``` I have then saved it to disk_ `voxpopuli.save_to_disk('/asr_disk/datasets_processed_new/voxpopuli')` and made folder structure same as @patrickvonplaten I also get same error while trying to load_dataset from his repo: ![image](https://user-images.githubusercontent.com/25264037/144895872-e9b8f326-cf2b-46cf-9417-606a0ce14077.png) ## Steps to reproduce the bug ```python dataset = load_dataset("Finnish-NLP/voxpopuli_fi") ``` ## Expected results Dataset is loaded correctly and looks like in the first picture ## Actual results Loading throws keyError: KeyError: 'Field "builder_name" does not exist in table schema' Resources I have been trying to follow: https://huggingface.co/docs/datasets/audio_process.html https://huggingface.co/docs/datasets/share_dataset.html ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.2.dev0 - Platform: Ubuntu 20.04.2 LTS - Python version: 3.8.12 - PyArrow version: 6.0.1
false
1,072,191,865
https://api.github.com/repos/huggingface/datasets/issues/3389
https://github.com/huggingface/datasets/issues/3389
3,389
Add EDGAR
open
2
2021-12-06T14:06:11
2022-10-05T10:40:22
null
philschmid
[ "dataset request" ]
## Adding a Dataset - **Name:** EDGAR Database - **Description:** https://www.sec.gov/edgar/about EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, is the primary system for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940. Containing millions of company and individual filings, EDGAR benefits investors, corporations, and the U.S. economy overall by increasing the efficiency, transparency, and fairness of the securities markets. The system processes about 3,000 filings per day, serves up 3,000 terabytes of data to the public annually, and accommodates 40,000 new filers per year on average. EDGAR® and EDGARLink® are registered trademarks of the SEC. - **Data:** https://www.sec.gov/os/accessing-edgar-data - **Motivation:** Enabling and improving FSI (Financial Services Industry) datasets to increase ease of use Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,072,022,021
https://api.github.com/repos/huggingface/datasets/issues/3388
https://github.com/huggingface/datasets/pull/3388
3,388
Fix flaky test of the temporary directory used by load_from_disk
closed
1
2021-12-06T11:09:31
2021-12-06T11:25:03
2021-12-06T11:24:49
lhoestq
[]
The test is flaky, here is an example of random CI failure: https://github.com/huggingface/datasets/commit/73ed6615b4b3eb74d5311684f7b9e05cdb76c989 I fixed that by not checking the content of the random part of the temporary directory name
true
1,071,836,456
https://api.github.com/repos/huggingface/datasets/issues/3387
https://github.com/huggingface/datasets/pull/3387
3,387
Create Language Modeling task
closed
0
2021-12-06T07:56:07
2021-12-17T17:18:28
2021-12-17T17:18:27
albertvillanova
[]
Create Language Modeling task to be able to specify the input "text" column in a dataset. This can be useful for datasets which are not exclusively used for language modeling and have more than one column: - for text classification datasets (with columns "review" and "rating", for example), the Language Modeling task can be used to specify the "text" column ("review" in this case). TODO: - [ ] Add the LanguageModeling task to all dataset scripts which can be used for language modeling
true
1,071,813,141
https://api.github.com/repos/huggingface/datasets/issues/3386
https://github.com/huggingface/datasets/pull/3386
3,386
Fix typos in dataset cards
closed
0
2021-12-06T07:20:40
2021-12-06T09:30:55
2021-12-06T09:30:54
albertvillanova
[]
This PR: - Fix typos in dataset cards - Fix Papers With Code ID for: - Bilingual Corpus of Arabic-English Parallel Tweets - Tweets Hate Speech Detection - Add pretty name tags
true
1,071,742,310
https://api.github.com/repos/huggingface/datasets/issues/3385
https://github.com/huggingface/datasets/issues/3385
3,385
None batched `with_transform`, `set_transform`
open
3
2021-12-06T05:20:54
2022-01-17T15:25:01
null
changjonathanc
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** A `torch.utils.data.Dataset.__getitem__` operates on a single example. But 🤗 `Datasets.with_transform` doesn't seem to allow a non-batched transform. **Describe the solution you'd like** Have a `batched=True` argument in `Datasets.with_transform` **Describe alternatives you've considered** * Convert a non-batched transform function to a batched one myself. * Wrap a 🤗 Dataset with a torch Dataset, and add a `__getitem__`. 🙄 * Have `lazy=False` in `Dataset.map`, and return a `LazyDataset` if `lazy=True`. This way the same `map` interface can be used, and existing code can be updated with one argument change.
false
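A hedged sketch of the first alternative listed in the issue above: wrapping a per-example function so it fits the batched interface that `with_transform` expects. The wrapper, the example function and the dataset choice are illustrative, not part of the library.

```python
from datasets import load_dataset

def per_example(example):
    example["text"] = example["text"].lower()
    return example

def as_batched(fn):
    # Convert a per-example transform into a batched one (dict of lists in and out).
    def batched_fn(batch):
        columns = list(batch.keys())
        rows = [dict(zip(columns, values)) for values in zip(*batch.values())]
        rows = [fn(row) for row in rows]
        return {col: [row[col] for row in rows] for col in columns}
    return batched_fn

ds = load_dataset("imdb", split="train")
ds = ds.with_transform(as_batched(per_example))
print(ds[:2]["text"])  # the transform is applied lazily on access
```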
1,071,594,165
https://api.github.com/repos/huggingface/datasets/issues/3384
https://github.com/huggingface/datasets/pull/3384
3,384
Adding mMARCO dataset
closed
0
2021-12-05T23:59:11
2021-12-12T15:27:36
2021-12-12T15:27:36
lhbonifacio
[]
We are adding the mMARCO dataset to the HuggingFace datasets repo. This way, all the languages covered in the translation are available in an easy way.
true
1,071,551,884
https://api.github.com/repos/huggingface/datasets/issues/3383
https://github.com/huggingface/datasets/pull/3383
3,383
add Georgian data in cc100.
closed
0
2021-12-05T20:38:09
2021-12-14T14:37:23
2021-12-14T14:37:22
AnzorGozalishvili
[]
Update the cc100 dataset to support loading Georgian (ka) data, which is originally available in the CC100 dataset source. All tests pass. Dummy data generated. Metadata generated.
true
1,071,293,299
https://api.github.com/repos/huggingface/datasets/issues/3382
https://github.com/huggingface/datasets/pull/3382
3,382
#3337 Add typing overloads to Dataset.__getitem__ for mypy
closed
2
2021-12-04T20:54:49
2021-12-14T10:28:55
2021-12-14T10:28:55
Dref360
[]
Add typing overloads to Dataset.__getitem__ for mypy Fixes #3337 **Iterable** Iterable from `collections` cannot have a type, so you can't do `Iterable[int]` for example. `typing` has a Generic version that builds upon the one from `collections`. **Flake8** I had to add `# noqa: F811`; this is a bug in Flake8. datasets uses flake8==3.7.9, which was released in October 2019. If I update flake8 (4.0.1), I no longer get these errors, but I did not want to make the update without your approval. (It also triggers other errors like no args in f-strings.)
true
1,071,283,879
https://api.github.com/repos/huggingface/datasets/issues/3381
https://github.com/huggingface/datasets/issues/3381
3,381
Unable to load audio_features from common_voice dataset
closed
3
2021-12-04T19:59:11
2021-12-06T17:52:42
2021-12-06T17:52:42
ashu5644
[ "bug" ]
## Describe the bug I am not able to load audio features from common_voice dataset ## Steps to reproduce the bug ``` from datasets import load_dataset import torchaudio test_dataset = load_dataset("common_voice", "hi", split="test[:2%]") resampler = torchaudio.transforms.Resample(48_000, 16_000) def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) ``` ## Expected results This piece of code should return test_dataset after loading audio features. ## Actual results Reusing dataset common_voice (/home/jovyan/.cache/huggingface/datasets/common_voice/hi/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1) /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`. "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 " Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained. 0%| | 0/3 [00:00<?, ?ex/s]formats: can't open input file `common_voice_hi_23795358.mp3': No such file or directory 0%| | 0/3 [00:00<?, ?ex/s] Traceback (most recent call last): File "demo_file.py", line 23, in <module> test_dataset = test_dataset.map(speech_file_to_array_fn) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2036, in map desc=desc, File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 485, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 411, in wrapper out = func(self, *args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2368, in _map_single example = apply_function_on_filtered_inputs(example, i, offset=offset) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs processed_inputs = function(*fn_args, *additional_args, **fn_kwargs) File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1978, in decorated result = f(decorated_item, *args, **kwargs) File "demo_file.py", line 19, in speech_file_to_array_fn speech_array, sampling_rate = torchaudio.load(batch["path"]) File "/opt/conda/lib/python3.7/site-packages/torchaudio/backend/sox_io_backend.py", line 154, in load filepath, frame_offset, num_frames, normalize, channels_first, format) RuntimeError: Error loading audio file: failed to open file common_voice_hi_23795358.mp3 ## Environment info - `datasets` version: 1.16.1 - Platform: Linux-4.14.243 with-debian-bullseye-sid - Python version: 3.7.9 - PyArrow version: 6.0.1
false
1,071,166,270
https://api.github.com/repos/huggingface/datasets/issues/3380
https://github.com/huggingface/datasets/issues/3380
3,380
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
closed
0
2021-12-04T09:18:33
2022-01-11T12:29:53
2022-01-11T12:29:53
LysandreJik
[]
Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week! If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://hf.co/oss-survey) (please reply in the above feedback form rather than to this thread) Thank you all on behalf of the HuggingFace team! 🤗
false
1,071,079,146
https://api.github.com/repos/huggingface/datasets/issues/3379
https://github.com/huggingface/datasets/pull/3379
3,379
iter_archive on zipfiles with better compression type check
closed
10
2021-12-04T01:04:48
2023-01-24T13:00:19
2023-01-24T12:53:08
Mehdi2402
[]
Hello @lhoestq , thank you for your detailed answer on previous PR ! I made this new PR because I misused git on the previous one #3347. Related issue #3272. # Comments : * For extension check I used the `_get_extraction_protocol` function in **download_manager.py** with a slight change and called it `_get_extraction_protocol_local`: **I removed this part :** ```python elif path.endswith(".tar.gz") or path.endswith(".tgz"): raise NotImplementedError( f"Extraction protocol for TAR archives like '{urlpath}' is not implemented in streaming mode. Please use `dl_manager.iter_archive` instead." ) ``` **And also changed :** ```diff - extension = path.split(".")[-1] + extension = "tar" if path.endswith(".tar.gz") else path.split(".")[-1] ``` The reason for this is a compression like **.tar.gz** will be considered a **.gz** which is handled with **zipfile**, though **tar.gz** can only be opened using **tarfile**. Please tell me if there's anything to change. # Tasks : - [x] download_manager.py - [x] streaming_download_manager.py
true
1,070,580,126
https://api.github.com/repos/huggingface/datasets/issues/3378
https://github.com/huggingface/datasets/pull/3378
3,378
Add The Pile subsets
closed
0
2021-12-03T13:14:54
2021-12-09T18:11:25
2021-12-09T18:11:23
albertvillanova
[]
Add The Pile subsets: - pubmed - ubuntu_irc - europarl - hacker_news - nih_exporter Close bigscience-workshop/data_tooling#301. CC: @StellaAthena
true
1,070,562,907
https://api.github.com/repos/huggingface/datasets/issues/3377
https://github.com/huggingface/datasets/pull/3377
3,377
COCO 🥥 on the 🤗 Hub?
closed
4
2021-12-03T12:55:27
2021-12-20T14:14:01
2021-12-20T14:14:00
merveenoyan
[]
This is a draft PR since I ran into a few small problems. I referred to this TFDS code: https://github.com/tensorflow/datasets/blob/2538a08c184d53b37bfcf52cc21dd382572a88f4/tensorflow_datasets/object_detection/coco.py cc: @mariosasko
true
1,070,522,979
https://api.github.com/repos/huggingface/datasets/issues/3376
https://github.com/huggingface/datasets/pull/3376
3,376
Update clue benchmark
closed
1
2021-12-03T12:06:01
2021-12-08T14:14:42
2021-12-08T14:14:41
mariosasko
[]
Fix #3374
true
1,070,454,913
https://api.github.com/repos/huggingface/datasets/issues/3375
https://github.com/huggingface/datasets/pull/3375
3,375
Support streaming zipped dataset repo by passing only repo name
closed
6
2021-12-03T10:43:05
2021-12-16T18:03:32
2021-12-16T18:03:31
albertvillanova
[]
Proposed solution: - I have added the method `iter_files` to DownloadManager and StreamingDownloadManager - I use this in modules: "csv", "json", "text" - I test for CSV/JSONL/TXT zipped (and non-zipped) files, both in streaming and non-streaming modes Fix #3373.
true
1,070,426,462
https://api.github.com/repos/huggingface/datasets/issues/3374
https://github.com/huggingface/datasets/issues/3374
3,374
NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews
closed
2
2021-12-03T10:10:54
2021-12-08T14:14:41
2021-12-08T14:14:41
Namco0816
[]
Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since I could not load them due to a checksum error.
false
1,070,406,391
https://api.github.com/repos/huggingface/datasets/issues/3373
https://github.com/huggingface/datasets/issues/3373
3,373
Support streaming zipped CSV dataset repo by passing only repo name
closed
0
2021-12-03T09:48:24
2021-12-16T18:03:31
2021-12-16T18:03:31
albertvillanova
[ "enhancement" ]
Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`: ``` ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab" ds = load_dataset(ds_name, split="train", streaming=True, use_auth_token=True) item = next(iter(ds)) ``` Currently, it gives a `FileNotFoundError` because there is no glob (no "\*" after "zip://": "zip://*") in the passed URL: ``` 'zip://::https://huggingface.co/datasets/bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab/resolve/e5d45f1bd9a8a798cc14f0a45ebc1ce91907c792/poems_dataset.zip' ```
false
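Before this was supported natively, one possible workaround, sketched here under the assumption that the archive contains CSV files at its root and using `main` instead of a pinned revision, was to pass `data_files` with the chained `zip://*::` URL form that the error message above hints at:

```python
from datasets import load_dataset

base = "https://huggingface.co/datasets/bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab/resolve/main"
ds = load_dataset(
    "csv",
    data_files=f"zip://*::{base}/poems_dataset.zip",  # glob inside the archive
    split="train",
    streaming=True,
    use_auth_token=True,
)
item = next(iter(ds))
```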
1,069,948,178
https://api.github.com/repos/huggingface/datasets/issues/3372
https://github.com/huggingface/datasets/issues/3372
3,372
[SEO improvement] Add Dataset Metadata to make datasets indexable
closed
0
2021-12-02T20:21:07
2022-03-18T09:36:48
2022-03-18T09:36:48
cakiki
[ "enhancement" ]
Some people who host datasets on github seem to include a table of metadata at the end of their README.md to make the dataset indexable by [Google Dataset Search](https://datasetsearch.research.google.com/) (See [here](https://github.com/google-research/google-research/tree/master/goemotions#dataset-metadata) and [here](https://github.com/cvdfoundation/google-landmark#dataset-metadata)). This could be a useful addition to canonical datasets; perhaps even community datasets. I'll include a screenshot (as opposed to markdown) as an example so as not to have a github issue indexed as a dataset: > ![image](https://user-images.githubusercontent.com/3664563/144496173-953428cf-633a-4571-b75b-f099c6b2ed65.png) **_PS: It might very well be the case that this is already covered by some other markdown magic I'm not aware of._**
false
1,069,821,335
https://api.github.com/repos/huggingface/datasets/issues/3371
https://github.com/huggingface/datasets/pull/3371
3,371
New: Americas NLI dataset
closed
0
2021-12-02T17:44:59
2021-12-08T13:58:12
2021-12-08T13:58:11
fdschmidt93
[]
This PR adds the [Americas NLI](https://arxiv.org/abs/2104.08726) dataset, extension of XNLI to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. One odd thing (not sure) is that I had to set `datasets-cli dummy_data ./datasets/americas_nli/ --auto_generate --n_lines 7500` `n_lines` very large to successfully generate the dummy files for all the subsets. Happy to get some guidance here. Otherwise, I hope everything is in order :) e: missed a step, onto fixing the tests e2: there you go -- hope it's ok to have added more languages with their ISO codes to `languages.json`, need those tests to pass :laughing:
true
1,069,735,423
https://api.github.com/repos/huggingface/datasets/issues/3370
https://github.com/huggingface/datasets/pull/3370
3,370
Document a training loop for streaming dataset
closed
0
2021-12-02T16:17:00
2021-12-03T13:34:35
2021-12-03T13:34:34
lhoestq
[]
I added some docs about streaming dataset. In particular I added two subsections: - one on how to use `map` for preprocessing - one on how to use a streaming dataset in a pytorch training loop cc @patrickvonplaten @stevhliu if you have some comments cc @Rocketknight1 later we can add the one for TF and I might need your help ^^'
true
1,069,587,674
https://api.github.com/repos/huggingface/datasets/issues/3369
https://github.com/huggingface/datasets/issues/3369
3,369
[Audio] Allow resampling for audio datasets in streaming mode
closed
2
2021-12-02T14:04:57
2021-12-16T15:55:19
2021-12-16T15:55:19
patrickvonplaten
[ "enhancement" ]
Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows: ```python from datasets import load_dataset, Audio ds = load_dataset("common_voice", "ab", split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) ``` However in streaming mode it currently fails: ```python from datasets import load_dataset, Audio ds = load_dataset("common_voice", "ab", split="test", streaming=True) ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) ``` with the following error: ``` AttributeError: 'IterableDataset' object has no attribute 'cast_column' ``` It would be great if we could add such a feature (I'm not 100% sure though how complex this would be)
false
1,069,403,624
https://api.github.com/repos/huggingface/datasets/issues/3368
https://github.com/huggingface/datasets/pull/3368
3,368
Fix dict source_datasets tagset validator
closed
0
2021-12-02T10:52:20
2021-12-02T15:48:38
2021-12-02T15:48:37
albertvillanova
[]
Currently, the `source_datasets` tag validation does not support passing a dict with configuration keys. This PR: - Extends `tagset_validator` to support regex tags - Uses `tagset_validator` to validate dict `source_datasets`
true
1,069,241,274
https://api.github.com/repos/huggingface/datasets/issues/3367
https://github.com/huggingface/datasets/pull/3367
3,367
Fix typo in other-structured-to-text task tag
closed
0
2021-12-02T08:02:27
2021-12-02T16:07:14
2021-12-02T16:07:13
albertvillanova
[]
Fix typo in task tag: - `other-stuctured-to-text` (before) - `other-structured-to-text` (now)
true
1,069,214,022
https://api.github.com/repos/huggingface/datasets/issues/3366
https://github.com/huggingface/datasets/issues/3366
3,366
Add multimodal datasets
open
0
2021-12-02T07:24:04
2023-02-28T16:29:22
null
albertvillanova
[ "dataset request" ]
Epic issue to track the addition of multimodal datasets: - [ ] #2526 - [x] #1842 - [ ] #1810 Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). @VictorSanh feel free to add and sort by priority any interesting dataset. I have added the multimodal dataset requests which were already present as issues.
false
1,069,195,887
https://api.github.com/repos/huggingface/datasets/issues/3365
https://github.com/huggingface/datasets/issues/3365
3,365
Add task tags for multimodal datasets
closed
1
2021-12-02T06:58:20
2023-07-25T18:21:33
2023-07-25T18:21:32
albertvillanova
[ "enhancement" ]
## **Is your feature request related to a problem? Please describe.** Currently, task tags are either exclusively related to text or speech processing: - https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/tasks.json ## **Describe the solution you'd like** We should also add tasks related to: - multimodality - image - video CC: @VictorSanh @lewtun @lhoestq @merveenoyan @SBrandeis
false
1,068,851,196
https://api.github.com/repos/huggingface/datasets/issues/3364
https://github.com/huggingface/datasets/pull/3364
3,364
Use the Audio feature in the AutomaticSpeechRecognition template
closed
4
2021-12-01T20:42:26
2022-03-24T14:34:09
2022-03-24T14:34:08
anton-l
[]
This updates the ASR template and all supported datasets to use the `Audio` feature
true
1,068,824,340
https://api.github.com/repos/huggingface/datasets/issues/3363
https://github.com/huggingface/datasets/pull/3363
3,363
Update URL of Jeopardy! dataset
closed
2
2021-12-01T20:08:10
2022-10-06T13:45:49
2021-12-03T12:35:01
mariosasko
[]
Updates the URL of the Jeopardy! dataset. Fix #3361
true
1,068,809,768
https://api.github.com/repos/huggingface/datasets/issues/3362
https://github.com/huggingface/datasets/pull/3362
3,362
Adapt image datasets
closed
3
2021-12-01T19:52:01
2021-12-09T18:37:42
2021-12-09T18:37:41
mariosasko
[]
This PR: * adapts the ImageClassification template to use the new Image feature * adapts the following datasets to use the new Image feature: * beans (+ fixes streaming) * cats_vs_dogs (+ fixes streaming) * cifar10 * cifar100 * fashion_mnist * mnist * head_qa cc @nateraw
true
1,068,736,268
https://api.github.com/repos/huggingface/datasets/issues/3361
https://github.com/huggingface/datasets/issues/3361
3,361
Jeopardy _URL access denied
closed
1
2021-12-01T18:21:33
2021-12-11T12:50:23
2021-12-06T11:16:31
tianjianjiang
[ "bug" ]
## Describe the bug http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz returns Access Denied now. However, https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?usp=sharing from the original Reddit post https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_file/ may work. ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. ``` ```python >>> from datasets import load_dataset >>> load_dataset("jeopardy") ``` ## Expected results The download completes. ## Actual results ```shell Downloading: 4.18kB [00:00, 1.60MB/s] Downloading: 2.03kB [00:00, 1.04MB/s] Using custom data configuration default Downloading and preparing dataset jeopardy/default (download: 12.13 MiB, generated: 34.46 MiB, post-processed: Unknown size, total: 46.59 MiB) to /Users/mike/.cache/huggingface/datasets/jeopardy/default/0.1.0/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/jeopardy/25ee3e4a73755e637b8810f6493fd36e4523dea3ca8a540529d0a6e24c7f9810/jeopardy.py", line 72, in _split_generators filepath = dl_manager.download_and_extract(_DATA_URL) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 197, in map_nested return function(data_struct) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path use_auth_token=download_config.use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` --- 
```shell > curl http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz ``` ```xml <?xml version="1.0" encoding="UTF-8"?> <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>70Y9R36XNPEQXMGV</RequestId><HostId>G6F5AK4qo7JdaEdKGMtS0P6gdLPeFOdEfSEfvTOZEfk9km0/jAfp08QLfKSTFFj1oWIKoAoBehM=</HostId></Error> ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14.0 - Platform: macOS Catalina 10.15.7 - Python version: 3.7.12 - PyArrow version: 6.0.1
false
1,068,724,697
https://api.github.com/repos/huggingface/datasets/issues/3360
https://github.com/huggingface/datasets/pull/3360
3,360
Add The Pile USPTO subset
closed
0
2021-12-01T18:08:05
2021-12-03T11:45:29
2021-12-03T11:45:28
albertvillanova
[]
Add: - USPTO subset of The Pile: "uspto" config Close bigscience-workshop/data_tooling#297. CC: @StellaAthena
true
1,068,638,213
https://api.github.com/repos/huggingface/datasets/issues/3359
https://github.com/huggingface/datasets/pull/3359
3,359
Add The Pile Free Law subset
closed
3
2021-12-01T16:46:04
2021-12-06T10:12:17
2021-12-01T17:30:44
albertvillanova
[]
Add: - Free Law subset of The Pile: "free_law" config Close bigscience-workshop/data_tooling#75. CC: @StellaAthena
true
1,068,623,216
https://api.github.com/repos/huggingface/datasets/issues/3358
https://github.com/huggingface/datasets/issues/3358
3,358
add new field, and get errors
closed
2
2021-12-01T16:35:38
2021-12-02T02:26:22
2021-12-02T02:26:22
PatricYan
[]
After adding the new field **tokenized_examples["example_id"]**, I get the errors below. I think it is due to converting the data to tensors, while **tokenized_examples["example_id"]** is a list of strings. **all fields** ``` ***************** train_dataset 1: Dataset({ features: ['attention_mask', 'end_positions', 'example_id', 'input_ids', 'start_positions', 'token_type_ids'], num_rows: 87714 }) ``` **Errors** ``` Traceback (most recent call last): File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors tensor = as_tensor(value) ValueError: too many dimensions 'str' ```
false
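For context, the `ValueError: too many dimensions 'str'` above comes from trying to turn a string column into a tensor. A minimal sketch, with dummy values standing in for the tokenized dataset, of one way to keep the string field out of tensor formatting on the `datasets` side:

```python
from datasets import Dataset

# Dummy stand-in for the tokenized dataset described above.
train_dataset = Dataset.from_dict({
    "input_ids": [[101, 2054, 102], [101, 2339, 102]],
    "attention_mask": [[1, 1, 1], [1, 1, 1]],
    "token_type_ids": [[0, 0, 0], [0, 0, 0]],
    "start_positions": [1, 1],
    "end_positions": [2, 2],
    "example_id": ["id-0", "id-1"],
})

# Only format the numeric columns as tensors; the string column "example_id"
# would otherwise have to be converted and trigger the error above.
train_dataset.set_format(
    type="torch",
    columns=["input_ids", "attention_mask", "token_type_ids",
             "start_positions", "end_positions"],
)
print(train_dataset[0].keys())  # example_id is left out of the tensor output
```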
1,068,607,382
https://api.github.com/repos/huggingface/datasets/issues/3357
https://github.com/huggingface/datasets/pull/3357
3,357
Update languages in aeslc dataset card
closed
0
2021-12-01T16:20:46
2022-09-23T13:16:49
2022-09-23T13:16:49
apergo-ai
[ "dataset contribution" ]
After having worked a bit with the dataset: as far as I can tell, it is solely in English (en-US). There are only a few emails in Spanish, French or German (fewer than a dozen, I would estimate).
true
1,068,503,932
https://api.github.com/repos/huggingface/datasets/issues/3356
https://github.com/huggingface/datasets/pull/3356
3,356
to_tf_dataset() refactor
closed
5
2021-12-01T14:54:30
2021-12-09T10:26:53
2021-12-09T10:26:53
Rocketknight1
[]
This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are: - A collator is always required (there was way too much hackiness making things like labels work without it) - Lots of cleanup and a lot of code moved to `_get_output_signature` - Should now handle it gracefully when the data collator adds unexpected columns
true
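A hedged usage sketch of the refactored method described above, with a data collator always supplied; the model checkpoint, dataset and column names are illustrative.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("glue", "sst2", split="train")
ds = ds.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

tf_ds = ds.to_tf_dataset(
    columns=["input_ids", "attention_mask"],
    label_cols=["label"],
    batch_size=16,
    shuffle=True,
    collate_fn=DataCollatorWithPadding(tokenizer, return_tensors="tf"),
)
```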
1,068,468,573
https://api.github.com/repos/huggingface/datasets/issues/3355
https://github.com/huggingface/datasets/pull/3355
3,355
Extend support for streaming datasets that use pd.read_excel
closed
1
2021-12-01T14:22:43
2021-12-17T07:24:19
2021-12-17T07:24:18
albertvillanova
[]
This PR fixes error: ``` ValueError: Cannot seek streaming HTTP file ``` CC: @severo
true
1,068,307,271
https://api.github.com/repos/huggingface/datasets/issues/3354
https://github.com/huggingface/datasets/pull/3354
3,354
Remove duplicate name from dataset cards
closed
0
2021-12-01T11:45:40
2021-12-01T13:14:30
2021-12-01T13:14:29
albertvillanova
[]
Remove duplicate name from dataset card for: - ajgt_twitter_ar - emotone_ar
true
1,068,173,783
https://api.github.com/repos/huggingface/datasets/issues/3353
https://github.com/huggingface/datasets/issues/3353
3,353
add one field "example_id", but I can't see it in the "compute_loss" function
closed
7
2021-12-01T09:35:09
2021-12-01T16:02:39
2021-12-01T16:02:39
PatricYan
[]
Hi, I add one field **example_id**, but I can't see it in the **comput_loss** function, how can I do this? below is the information of inputs ``` *********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0]], device='cuda:0'), 'end_positions': tensor([ 25, 97, 93, 44, 25, 112, 109, 134], device='cuda:0'), 'input_ids': tensor([[ 101, 2054, 2390, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2106, ..., 0, 0, 0], ..., [ 101, 2339, 2001, ..., 0, 0, 0], [ 101, 2054, 2515, ..., 0, 0, 0], [ 101, 2054, 2003, ..., 0, 0, 0]], device='cuda:0'), 'start_positions': tensor([ 20, 90, 89, 41, 25, 96, 106, 132], device='cuda:0'), 'token_type_ids': tensor([[0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], ..., [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0], [0, 0, 0, ..., 0, 0, 0]], device='cuda:0')} ``` ``` # This function preprocesses a question answering dataset, tokenizing the question and context text # and finding the right offsets for the answer spans in the tokenized context (to use as labels). # Adapted from https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py def prepare_train_dataset_qa(examples, tokenizer, max_seq_length=None): questions = [q.lstrip() for q in examples["question"]] max_seq_length = tokenizer.model_max_length # tokenize both questions and the corresponding context # if the context length is longer than max_length, we split it to several # chunks of max_length tokenized_examples = tokenizer( questions, examples["context"], truncation="only_second", max_length=max_seq_length, stride=min(max_seq_length // 2, 128), return_overflowing_tokens=True, return_offsets_mapping=True, padding="max_length" ) # Since one example might give us several features if it has a long context, # we need a map from a feature to its corresponding example. sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") # The offset mappings will give us a map from token to character position # in the original context. This will help us compute the start_positions # and end_positions to get the final answer string. offset_mapping = tokenized_examples.pop("offset_mapping") tokenized_examples["start_positions"] = [] tokenized_examples["end_positions"] = [] tokenized_examples["example_id"] = [] for i, offsets in enumerate(offset_mapping): input_ids = tokenized_examples["input_ids"][i] # We will label features not containing the answer the index of the CLS token. cls_index = input_ids.index(tokenizer.cls_token_id) sequence_ids = tokenized_examples.sequence_ids(i) # from the feature idx to sample idx sample_index = sample_mapping[i] # get the answer for a feature answers = examples["answers"][sample_index] tokenized_examples["example_id"].append(examples["id"][sample_index]) if len(answers["answer_start"]) == 0: tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Start/end character index of the answer in the text. start_char = answers["answer_start"][0] end_char = start_char + len(answers["text"][0]) # Start token index of the current span in the text. token_start_index = 0 while sequence_ids[token_start_index] != 1: token_start_index += 1 # End token index of the current span in the text. 
token_end_index = len(input_ids) - 1 while sequence_ids[token_end_index] != 1: token_end_index -= 1 # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): tokenized_examples["start_positions"].append(cls_index) tokenized_examples["end_positions"].append(cls_index) else: # Otherwise move the token_start_index and token_end_index to the two ends of the answer. # Note: we could go after the last offset if the answer is the last word (edge case). while token_start_index < len(offsets) and \ offsets[token_start_index][0] <= start_char: token_start_index += 1 tokenized_examples["start_positions"].append( token_start_index - 1) while offsets[token_end_index][1] >= end_char: token_end_index -= 1 tokenized_examples["end_positions"].append(token_end_index + 1) return tokenized_examples ``` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/3333#issuecomment-983457161_
false
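Not an answer from the thread, but for context: the Hugging Face `Trainer` drops dataset columns that `model.forward()` does not accept, which is one reason a custom field never reaches `compute_loss`. A hedged sketch of keeping the field around; it assumes the data collator passes the string field through to the batch.

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(output_dir="out", remove_unused_columns=False)

class QATrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        # Pop the extra field before the forward pass; strings are not model inputs.
        example_ids = inputs.pop("example_id", None)
        outputs = model(**inputs)
        loss = outputs.loss
        return (loss, outputs) if return_outputs else loss
```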
1,068,102,994
https://api.github.com/repos/huggingface/datasets/issues/3352
https://github.com/huggingface/datasets/pull/3352
3,352
Make LABR dataset streamable
closed
0
2021-12-01T08:22:27
2021-12-01T10:49:02
2021-12-01T10:49:01
albertvillanova
[]
Fix LABR dataset to make it streamable. Related to: #3350.
true
1,068,094,873
https://api.github.com/repos/huggingface/datasets/issues/3351
https://github.com/huggingface/datasets/pull/3351
3,351
Add VCTK dataset
closed
9
2021-12-01T08:13:17
2022-02-28T09:22:03
2021-12-28T15:05:08
jaketae
[]
Fixes #1837.
true
1,068,078,160
https://api.github.com/repos/huggingface/datasets/issues/3350
https://github.com/huggingface/datasets/pull/3350
3,350
Avoid content-encoding issue while streaming datasets
closed
0
2021-12-01T07:56:48
2021-12-01T08:15:01
2021-12-01T08:15:00
albertvillanova
[]
This PR will fix streaming of datasets served with gzip content-encoding: ``` ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` Fix #2918. CC: @severo
true
1,067,853,601
https://api.github.com/repos/huggingface/datasets/issues/3349
https://github.com/huggingface/datasets/pull/3349
3,349
raise exception instead of using assertions.
closed
6
2021-12-01T01:37:51
2021-12-20T16:07:27
2021-12-20T16:07:27
manisnesan
[]
Fix for the remaining files: https://github.com/huggingface/datasets/issues/3171
true
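The pattern applied by this PR, in a minimal illustrative form (not the actual diff):

```python
# Before: a bare assertion with little context, and it disappears under `python -O`.
def check_split_assert(split):
    assert split in ("train", "validation", "test")

# After: an explicit exception with an informative message.
def check_split_raise(split):
    if split not in ("train", "validation", "test"):
        raise ValueError(f"Unknown split {split!r}; expected 'train', 'validation' or 'test'.")
```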
1,067,831,113
https://api.github.com/repos/huggingface/datasets/issues/3348
https://github.com/huggingface/datasets/pull/3348
3,348
BLEURT: Match key names to correspond with filename
closed
3
2021-12-01T01:01:18
2021-12-07T16:06:57
2021-12-07T16:06:57
jaehlee
[]
In order to properly locate downloaded ckpt files, the key name needs to match the filename. This corrects the change introduced in #3235.
true
1,067,738,902
https://api.github.com/repos/huggingface/datasets/issues/3347
https://github.com/huggingface/datasets/pull/3347
3,347
iter_archive for zip files
closed
1
2021-11-30T22:34:17
2021-12-04T00:22:22
2021-12-04T00:22:11
Mehdi2402
[]
* In this PR, I added the option to iterate through zipfiles for `download_manager.py` only. * The next PR will apply the same to `streaming_download_manager.py`. * Related issue #3272. ## Comments : * There is no `.isreg()` equivalent in the zipfile library to check whether a file is regular, so I used `.is_dir()` instead to skip directories. * For now I got `streaming_download_manager.py` working for local zip files, but not for URLs. I get the following error when I test it on an archive in Google Drive, so I am still working on it. `BlockSizeError: Got more bytes so far (>2112) than requested (22)` ## Tasks : - [x] download_manager.py - [ ] streaming_download_manager.py
true
1,067,632,365
https://api.github.com/repos/huggingface/datasets/issues/3346
https://github.com/huggingface/datasets/issues/3346
3,346
Failed to convert `string` with pyarrow for QED since 1.15.0
closed
2
2021-11-30T20:11:42
2021-12-14T14:39:05
2021-12-14T14:39:05
tianjianjiang
[ "bug" ]
## Describe the bug Loading QED was fine until 1.15.0. related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670 Not sure where the root cause is, but here are some candidates: - #3158 - #3120 - #3196 - #2891 ## Steps to reproduce the bug ```python load_dataset("qed") ``` ## Expected results Loading completed. ## Actual results ```shell ArrowInvalid: Could not convert in with type str: tried to convert to boolean Traceback: File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/script_runner.py", line 354, in _run_script exec(code, module.__dict__) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/app.py", line 260, in <module> dataset = get_dataset(dataset_key, str(conf_option.name) if conf_option else None) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 543, in wrapped_func return get_or_create_cached_value() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/streamlit/caching.py", line 527, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/Users/s0s0cr3/Documents/GitHub/promptsource/promptsource/utils.py", line 49, in get_dataset builder_instance.download_and_prepare() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 607, in download_and_prepare self._download_and_prepare( File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 697, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/builder.py", line 1106, in _prepare_split num_examples, num_bytes = writer.finalize() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 456, in finalize self.write_examples_on_file() File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 325, in write_examples_on_file pa_array = pa.array(typed_sequence) File "pyarrow/array.pxi", line 222, in pyarrow.lib.array File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol File "/Users/s0s0cr3/Library/Python/3.9/lib/python/site-packages/datasets/arrow_writer.py", line 121, in __arrow_array__ out = pa.array(cast_to_python_objects(self.data, only_1d_for_numpy=True), type=type) File "pyarrow/array.pxi", line 305, in pyarrow.lib.array File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.0, 1.16.1 - Platform: macOS 1.15.7 or above - Python version: 3.7.12 and 3.9 - PyArrow version: 3.0.0, 5.0.0, 6.0.1
false
1,067,622,951
https://api.github.com/repos/huggingface/datasets/issues/3345
https://github.com/huggingface/datasets/issues/3345
3,345
Failed to download species_800 from Google Drive zip file
closed
3
2021-11-30T20:00:28
2021-12-01T17:53:15
2021-12-01T17:53:15
tianjianjiang
[ "bug" ]
## Describe the bug One can manually download the zip file on Google Drive, but `load_dataset()` cannot. related: #3248 ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" for more information. ``` ```python >>> from datasets import load_dataset >>> s800 = load_dataset("species_800") ``` ## Expected results species_800 downloaded. ## Actual results ```shell Downloading: 5.68kB [00:00, 1.22MB/s] Downloading: 2.70kB [00:00, 691kB/s] Downloading and preparing dataset species800/species_800 (download: 17.36 MiB, generated: 3.53 MiB, post-processed: Unknown size, total: 20.89 MiB) to /Users/mike/.cache/huggingface/datasets/species800/species_800/1.0.0/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976... 0%| | 0/1 [00:00<?, ?it/s]Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/load.py", line 1632, in load_dataset use_auth_token=use_auth_token, File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 608, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/builder.py", line 675, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/Users/mike/.cache/huggingface/modules/datasets_modules/datasets/species_800/532167f0bb8fbc0d77d6d03c4fd642c8c55527b9c5f2b1da77f3d00b0e559976/species_800.py", line 104, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract return self.extract(self.download(url_or_urls)) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 197, in download download_func, url_or_urls, map_tuple=True, num_proc=download_config.num_proc, disable_tqdm=False File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in map_nested for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 209, in <listcomp> for obj in utils.tqdm(iterable, disable=disable_tqdm) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 143, in _single_map_nested return function(data_struct) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 217, in _download return cached_path(url_or_filename, download_config=download_config) File "/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 305, in cached_path use_auth_token=download_config.use_auth_token, File 
"/Users/mike/Library/Caches/pypoetry/virtualenvs/promptsource-hsdAcWsQ-py3.7/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 594, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.14,0 1.15.0, 1.16.1 - Platform: macOS Catalina 10.15.7 - Python version: 3.7.12 - PyArrow version: 6.0.1
false
1,067,567,603
https://api.github.com/repos/huggingface/datasets/issues/3344
https://github.com/huggingface/datasets/pull/3344
3,344
Add ArrayXD docs
closed
0
2021-11-30T18:53:31
2021-12-01T20:16:03
2021-12-01T19:35:32
stevhliu
[]
Documents support for a dynamic first dimension in `ArrayXD` from #2891, and explains the `ArrayXD` feature in general. Let me know if I'm missing anything @lhoestq :)
true
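A small sketch of the feature being documented: an `Array2D` column with a dynamic first dimension (`shape=(None, 3)`), so examples can have a different number of rows.

```python
from datasets import Array2D, Dataset, Features

features = Features({"matrix": Array2D(shape=(None, 3), dtype="float32")})
ds = Dataset.from_dict(
    {"matrix": [[[0.0, 1.0, 2.0]],
                [[3.0, 4.0, 5.0], [6.0, 7.0, 8.0]]]},
    features=features,
)
print(ds[0]["matrix"], ds[1]["matrix"])  # 1 row vs. 2 rows per example
```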
1,067,505,507
https://api.github.com/repos/huggingface/datasets/issues/3343
https://github.com/huggingface/datasets/pull/3343
3,343
Better error message when download fails
closed
0
2021-11-30T17:38:50
2021-12-01T11:27:59
2021-12-01T11:27:58
lhoestq
[]
From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails. In particular the error now shows: - the error from the HEAD request if there's one - otherwise the response code of the HEAD request I also added an error to tell users to pass `use_auth_token` when the Hugging Face Hub returns 401 (Unauthorized). While playing around with this I also fixed a minor issue with the `force_download` parameter that was not always taken into account.
true
1,067,481,390
https://api.github.com/repos/huggingface/datasets/issues/3342
https://github.com/huggingface/datasets/pull/3342
3,342
Fix ASSET dataset data URLs
closed
1
2021-11-30T17:13:30
2021-12-14T14:50:00
2021-12-14T14:50:00
tianjianjiang
[]
Change the branch name "master" to "main" in the data URLs, since facebookresearch has changed that.
true
1,067,449,569
https://api.github.com/repos/huggingface/datasets/issues/3341
https://github.com/huggingface/datasets/issues/3341
3,341
Mirror the canonical datasets to the Hugging Face Hub
closed
2
2021-11-30T16:42:05
2022-01-26T14:47:37
2022-01-26T14:47:37
severo
[ "enhancement" ]
- [ ] create a repo on https://hf.co/datasets for every canonical dataset - [ ] on every commit related to a dataset, update the hf.co repo See https://github.com/huggingface/moon-landing/pull/1562 @SBrandeis: I'll let you edit this description if needed to clarify the intent.
false
1,067,292,636
https://api.github.com/repos/huggingface/datasets/issues/3340
https://github.com/huggingface/datasets/pull/3340
3,340
Fix JSON ClassLabel casting for integers
closed
0
2021-11-30T14:19:54
2021-12-01T11:27:30
2021-12-01T11:27:30
lhoestq
[]
Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already contains integers. Currently it tries to convert the strings to integers without first checking whether the data are already integers. For example this currently fails: ```python from datasets import load_dataset, Features, ClassLabel path = "data.json" f = Features({"a": ClassLabel(names=["neg", "pos"])}) d = load_dataset("json", data_files=path, features=f) ``` data.json ```json {"a": 0} {"a": 1} ``` I fixed that by adding a line that checks the type of the JSON data before trying to convert them. cc @albertvillanova let me know if it sounds good to you
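A minimal sketch of the type check described above, outside of `datasets` itself; the helper name `encode_label` is hypothetical and not part of the library API.

```python
from datasets import ClassLabel


def encode_label(value, class_label: ClassLabel):
    # Hypothetical helper (not the library's code): leave integers untouched and
    # only map strings to their class ids.
    if isinstance(value, int):
        return value
    return class_label.str2int(value)


labels = ClassLabel(names=["neg", "pos"])
print(encode_label(1, labels))      # already an integer -> 1
print(encode_label("pos", labels))  # string label       -> 1
```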
true
1,066,662,477
https://api.github.com/repos/huggingface/datasets/issues/3339
https://github.com/huggingface/datasets/issues/3339
3,339
to_tf_dataset fails on TPU
open
5
2021-11-30T00:50:52
2021-12-02T14:21:27
null
nbroad1881
[ "bug" ]
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs. ## Steps to reproduce the bug I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=sharing ## Expected results dataset from `to_tf_dataset` works in `model.fit` Right below the first error in the colab I use `tf.data.Dataset.from_tensor_slices` and `model.fit` works just fine. This is the desired outcome. ## Actual results ``` InternalError: 5 root error(s) found. (0) INTERNAL: {{function_node __inference_train_function_30558}} failed to connect to all addresses Additional GRPC error information from remote target /job:localhost/replica:0/task:0/device:CPU:0: :{"created":"@1638231897.932218653","description":"Failed to pick subchannel","file":"third_party/grpc/src/core/ext/filters/client_channel/client_channel.cc","file_line":3151,"referenced_errors":[{"created":"@1638231897.932216754","description":"failed to connect to all addresses","file":"third_party/grpc/src/core/lib/transport/error_utils.cc","file_line":161,"grpc_status":14}]} [[{{node StatefulPartitionedCall}}]] [[MultiDeviceIteratorGetNextFromShard]] Executing non-communication op <MultiDeviceIteratorGetNextFromShard> originally returned UnavailableError, and was replaced by InternalError to avoid invoking TF network error handling logic. [[RemoteCall]] [[IteratorGetNextAsOptional]] [[tpu_compile_succeeded_assert/_14023832043698465348/_7/_439]] ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.16.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyArrow version: 3.0.0 - Tensorflow 2.7.0 - `transformers` 4.12.5
false
1,066,371,235
https://api.github.com/repos/huggingface/datasets/issues/3338
https://github.com/huggingface/datasets/pull/3338
3,338
[WIP] Add doctests for tutorials
closed
1
2021-11-29T18:40:46
2023-05-05T17:18:20
2023-05-05T17:18:15
stevhliu
[]
Opening a PR as discussed with @LysandreJik for some help with doctest issues. The goal is to add doctests for each of the tutorials in the documentation to make sure the code samples work as shown. ### Issues A doctest has been added in the docstring of the `load_dataset_builder` function in `load.py` to handle variable outputs with the `ELLIPSIS` directive. When I run doctest on the `load_hub.rst` file, doctest should recognize the expected output from the docstring, and the corresponding code sample in `load_hub.rst` should pass. I am having the same issue with handling tracebacks in the `load_dataset` function. From the docstring: ``` >>> dataset_builder.cache_dir #doctest: +ELLIPSIS /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/... ``` Test result: ``` Failed example: dataset_builder.cache_dir Expected: /Users/.../.cache/huggingface/datasets/imdb/plain_text/1.0.0/... Got: /Users/steven/.cache/huggingface/datasets/imdb/plain_text/1.0.0/2fdd8b9bcadd6e7055e742a706876ba43f19faee861df134affd7a3f60fc38a1 ``` I am able to get the doctest to pass by adding the doctest directives (`ELLIPSIS` and `NORMALIZE_WHITESPACE`) to the code samples in the `rst` file directly. But my understanding is that these directives should also work in the docstrings of the functions. I am running the test from the root of the directory: ``` python -m doctest -v docs/source/load_hub.rst ```
true
1,066,232,936
https://api.github.com/repos/huggingface/datasets/issues/3337
https://github.com/huggingface/datasets/issues/3337
3,337
Typing of Dataset.__getitem__ could be improved.
closed
2
2021-11-29T16:20:11
2021-12-14T10:28:54
2021-12-14T10:28:54
Dref360
[ "bug" ]
## Describe the bug The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload) ## Steps to reproduce the bug Let's have a file `test.py` ```python from typing import List, Dict, Any from datasets import Dataset ds = Dataset.from_dict({ 'a': [1,2,3], 'b': ["1", "2", "3"] }) one_colum: List[str] = ds['a'] some_index: Dict[Any, Any] = ds[1] ``` ## Expected results Running `mypy test.py` should not give any error. ## Actual results ``` test.py:10: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "List[str]") test.py:11: error: Incompatible types in assignment (expression has type "Union[Dict[Any, Any], List[Any]]", variable has type "Dict[Any, Any]") Found 2 errors in 1 file (checked 1 source file) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.13.3 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 6.0.1
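A minimal sketch of the `typing.overload` suggestion on a toy class (not the real `datasets.Dataset`), showing how mypy could resolve the two return types without manual checks:

```python
from typing import Any, Dict, List, Union, overload


class ToyDataset:
    """Toy stand-in for `datasets.Dataset`, used only to illustrate the suggestion."""

    def __init__(self, columns: Dict[str, List[Any]]):
        self._columns = columns

    @overload
    def __getitem__(self, key: str) -> List[Any]: ...

    @overload
    def __getitem__(self, key: int) -> Dict[str, Any]: ...

    def __getitem__(self, key: Union[str, int]) -> Union[List[Any], Dict[str, Any]]:
        if isinstance(key, str):
            return self._columns[key]
        return {name: values[key] for name, values in self._columns.items()}


ds = ToyDataset({"a": [1, 2, 3], "b": ["1", "2", "3"]})
one_column: List[int] = ds["a"]     # mypy picks the `str` overload -> List[Any]
some_index: Dict[str, Any] = ds[1]  # mypy picks the `int` overload -> Dict[str, Any]
```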
false
1,066,208,436
https://api.github.com/repos/huggingface/datasets/issues/3336
https://github.com/huggingface/datasets/pull/3336
3,336
Add support for multiple dynamic dimensions and to_pandas conversion for dynamic arrays
closed
0
2021-11-29T15:58:59
2023-09-24T09:53:52
2023-05-16T18:24:46
mariosasko
[]
Add support for multiple dynamic dimensions (e.g. `(None, None, 3)` for arbitrary sized images) and `to_pandas()` conversion for dynamic arrays. TODOs: * [ ] Cleaner code * [ ] Formatting issues (if NumPy doesn't allow broadcasting even though dtype is np.object) * [ ] Fix some issues with zero-dim tensors * [ ] Tests
true
1,066,064,126
https://api.github.com/repos/huggingface/datasets/issues/3335
https://github.com/huggingface/datasets/pull/3335
3,335
add Speech commands dataset
closed
11
2021-11-29T13:52:47
2021-12-10T10:37:21
2021-12-10T10:30:15
polinaeterna
[]
closes #3283
true
1,065,983,923
https://api.github.com/repos/huggingface/datasets/issues/3334
https://github.com/huggingface/datasets/issues/3334
3,334
Integrate Polars library
closed
8
2021-11-29T12:31:54
2024-08-31T05:31:28
2024-08-31T05:31:27
albertvillanova
[ "enhancement" ]
Check potential integration of the Polars library: https://github.com/pola-rs/polars - Benchmark: https://h2oai.github.io/db-benchmark/ CC: @thomwolf @lewtun
false
1,065,346,919
https://api.github.com/repos/huggingface/datasets/issues/3333
https://github.com/huggingface/datasets/issues/3333
3,333
load JSON files, get the errors
closed
12
2021-11-28T14:29:58
2021-12-01T09:34:31
2021-12-01T03:57:48
PatricYan
[]
Hi, has this bug been fixed? When I load JSON files, I get the same errors with the command `!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/` I changed the dataset loading to JSON by referring to https://huggingface.co/docs/datasets/loading.html: `dataset = datasets.load_dataset('json', data_files=args.dataset)` Errors: `Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/json/default-c1e124ad488911b8/0.0.0/45636811569ec4a6630521c18235dfbbab83b7ab572e3393c5ba68ccabe98264... ` _Originally posted by @yanllearnn in https://github.com/huggingface/datasets/issues/730#issuecomment-981095050_
false
1,065,345,853
https://api.github.com/repos/huggingface/datasets/issues/3332
https://github.com/huggingface/datasets/pull/3332
3,332
Fix error message and add extension fallback
closed
0
2021-11-28T14:25:29
2021-11-29T13:34:15
2021-11-29T13:34:14
mariosasko
[]
Fix the error message raised if `infered_module_name` is `None` in `CommunityDatasetModuleFactoryWithoutScript.get_module` and make `infer_module_for_data_files` more robust. In the linked issue, `infer_module_for_data_files` returns `None` because `json` is the second most common extension due to the suffix ordering. Now, we go from the most common to the least common extension and try to map it or return `None`. Fix #3331
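A minimal sketch of the fallback logic described above, with a hypothetical `EXTENSION_TO_MODULE` mapping and `infer_module` helper standing in for the more involved resolution inside `datasets`:

```python
from collections import Counter
from typing import List, Optional

# Hypothetical mapping; the real module resolution inside `datasets` is more involved.
EXTENSION_TO_MODULE = {"csv": "csv", "json": "json", "jsonl": "json", "txt": "text", "parquet": "parquet"}


def infer_module(data_files: List[str]) -> Optional[str]:
    # Count extensions, then try them from the most to the least common instead of
    # giving up when the single most common extension is unknown.
    extensions = Counter(f.rsplit(".", 1)[-1].lower() for f in data_files if "." in f)
    for extension, _ in extensions.most_common():
        if extension in EXTENSION_TO_MODULE:
            return EXTENSION_TO_MODULE[extension]
    return None


# The unknown ".xyz" files are the most common, but we still fall back to "csv".
print(infer_module(["shard_0.xyz", "shard_1.xyz", "labels.csv"]))  # -> csv
```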
true
1,065,275,896
https://api.github.com/repos/huggingface/datasets/issues/3331
https://github.com/huggingface/datasets/issues/3331
3,331
AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
closed
1
2021-11-28T08:54:05
2021-11-29T13:49:44
2021-11-29T13:34:14
luozhouyang
[ "bug" ]
## Describe the bug I add a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets) But when I load the dataset, an error raised: ```bash AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("luozhouyang/question-answering-datasets", data_files=["dureader_robust.train.json"]) ``` ## Expected results Load dataset successfully without any error. ## Actual results ```bash Traceback (most recent call last): File "/mnt/home/zhouyang.lzy/github/naivenlp/naivenlp/tests/question_answering_tests/dataset_test.py", line 89, in test_load_dataset_with_hf data_files=["dureader_robust.train.json"], File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1616, in load_dataset **config_kwargs, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1443, in load_dataset_builder path, revision=revision, download_config=download_config, download_mode=download_mode, data_files=data_files File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1157, in dataset_module_factory raise e1 from None File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 1144, in dataset_module_factory download_mode=download_mode, File "/mnt/home/zhouyang.lzy/.conda/envs/naivenlp/lib/python3.6/site-packages/datasets/load.py", line 798, in get_module raise FileNotFoundError(f"No data files or dataset script found in {self.path}") AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.15.1 - Platform: linux - Python version: 3.6.13 - PyArrow version: 6.0.1
false
1,065,176,619
https://api.github.com/repos/huggingface/datasets/issues/3330
https://github.com/huggingface/datasets/pull/3330
3,330
Change TriviaQA license (#3313)
closed
0
2021-11-28T03:26:45
2021-11-29T11:24:21
2021-11-29T11:24:21
avinashsai
[]
Fixes (#3313)
true
1,065,096,971
https://api.github.com/repos/huggingface/datasets/issues/3329
https://github.com/huggingface/datasets/issues/3329
3,329
Map function: Type error on iter #999
closed
4
2021-11-27T17:53:05
2021-11-29T20:40:15
2021-11-29T20:40:15
josephkready666
[ "bug" ]
## Describe the bug Using the map function, it throws a type error on iter #999 Here is the code I am calling: ``` dataset = datasets.load_dataset('squad') dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'}) ``` text_numbers_to_int returns the input text with numbers replaced, in the format {'context': text} It happens at ` File "C:\Users\lonek\anaconda3\envs\ai\Lib\site-packages\datasets\arrow_writer.py", line 289, in <listcomp> [row[0][col] for row in self.current_examples], type=col_type, try_type=col_try_type, col=col ` The issue is that the list comprehension expects self.current_examples to be of type tuple(dict, str), but for some reason 26 out of 1000 of the self.current_examples are of type tuple(str, str) Here is an example of what self.current_examples should be ({'context': 'Super Bowl 50 was an...merals 50.'}, '') Here is an example of what self.current_examples are when it throws the error: ('The Panthers used th... Marriott.', '')
false
1,065,015,262
https://api.github.com/repos/huggingface/datasets/issues/3328
https://github.com/huggingface/datasets/pull/3328
3,328
Quick fix error formatting
closed
0
2021-11-27T11:47:48
2021-11-29T13:32:42
2021-11-29T13:32:42
NouamaneTazi
[]
While working on a dataset, I got the error ``` TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`. ``` This PR should fix the formatting of this error
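A minimal reproduction of the formatting mistake behind the quoted message, assuming the root cause is a plain string used where an f-string was intended (which matches the braces being printed verbatim); the example values below are made up for illustration.

```python
processed_inputs = {"input_ids": [[0, 1]], "labels": "not-a-list"}
allowed_batch_return_types = (list, "numpy array")

# Buggy: a plain string, so the braces are printed verbatim instead of being evaluated.
buggy = "Provided `function` returns a `dict` of types {[type(x) for x in processed_inputs.values()]}."

# Fixed: an f-string actually evaluates the expressions between the braces.
fixed = (
    f"Provided `function` returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. "
    f"When using `batched=True`, make sure provided `function` returns a `dict` of types like "
    f"`{allowed_batch_return_types}`."
)

print(buggy)
print(fixed)
```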
true
1,064,675,888
https://api.github.com/repos/huggingface/datasets/issues/3327
https://github.com/huggingface/datasets/issues/3327
3,327
"Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
closed
1
2021-11-26T16:26:36
2021-11-26T16:44:11
2021-11-26T16:44:11
eliasws
[ "bug" ]
## Describe the bug Passing a correctly shaped Numpy-Array to get_nearest_examples leads to the Exception "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" Probably the reason for this is a wrongly converted assertion. 1.15.1: `assert len(query.shape) == 1 or (len(query.shape) == 2 and query.shape[0] == 1)` 1.16.1: ``` if len(query.shape) != 1 or (len(query.shape) == 2 and query.shape[0] != 1): raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)") ``` ## Steps to reproduce the bug follow the steps described here: https://huggingface.co/course/chapter5/6?fw=tf ```python question_embedding.shape # (1, 768) scores, samples = embeddings_dataset.get_nearest_examples( "embeddings", question_embedding, k=5 # Error ) # "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" ``` ## Expected results Should work without exception ## Actual results Throws exception ## Environment info - `datasets` version: 1.15.1 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.12 - PyArrow version: 6.0.
false
1,064,664,479
https://api.github.com/repos/huggingface/datasets/issues/3326
https://github.com/huggingface/datasets/pull/3326
3,326
Fix import `datasets` on python 3.10
closed
0
2021-11-26T16:10:00
2021-11-26T16:31:23
2021-11-26T16:31:23
lhoestq
[]
In Python 3.10 it's no longer possible to use `functools.wraps` on a method decorated with `classmethod`. To fix this I inverted the order of the `inject_arrow_table_documentation` and `classmethod` decorators. Fix #3324
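A minimal, self-contained sketch of the decorator-ordering fix, using a toy `copy_docstring` decorator in place of the real `inject_arrow_table_documentation`:

```python
import sys
from functools import wraps


def copy_docstring(source):
    # Toy stand-in for a documentation-injecting decorator that relies on functools.wraps.
    def decorator(method):
        return wraps(source)(method)
    return decorator


def reference():
    """Reference docstring copied onto the wrapped method."""


class Table:
    # Working order: `classmethod` is applied last, so `wraps` only ever sees a plain
    # function. With the decorators swapped, `wraps` would receive a classmethod object,
    # which fails on Python 3.10 with "AttributeError: readonly attribute".
    @classmethod
    @copy_docstring(reference)
    def from_pandas(cls):
        return cls.__name__


print(sys.version_info[:3], Table.from_pandas(), Table.from_pandas.__doc__)
```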
true
1,064,663,075
https://api.github.com/repos/huggingface/datasets/issues/3325
https://github.com/huggingface/datasets/pull/3325
3,325
Update conda dependencies
closed
0
2021-11-26T16:08:07
2021-11-26T16:20:37
2021-11-26T16:20:36
lhoestq
[]
Some dependencies' minimum versions were outdated, for example `pyarrow` and `huggingface_hub`.
true
1,064,661,212
https://api.github.com/repos/huggingface/datasets/issues/3324
https://github.com/huggingface/datasets/issues/3324
3,324
Can't import `datasets` in python 3.10
closed
0
2021-11-26T16:06:14
2021-11-26T16:31:23
2021-11-26T16:31:23
lhoestq
[]
When importing `datasets` I'm getting this error in python 3.10: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 47, in <module> from .arrow_reader import ArrowReader File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_reader.py", line 33, in <module> from .table import InMemoryTable, MemoryMappedTable, Table, concat_tables File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 334, in <module> class InMemoryTable(TableBlock): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 361, in InMemoryTable def from_pandas(cls, *args, **kwargs): File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 24, in wrapper out = wraps(arrow_table_method)(method) File "/Users/quentinlhoest/.pyenv/versions/3.10.0/lib/python3.10/functools.py", line 61, in update_wrapper wrapper.__wrapped__ = wrapped AttributeError: readonly attribute ``` This makes the conda build fail. I'm opening a PR to fix this and do a patch release 1.16.1
false
1,064,660,452
https://api.github.com/repos/huggingface/datasets/issues/3323
https://github.com/huggingface/datasets/pull/3323
3,323
Fix wrongly converted assert
closed
1
2021-11-26T16:05:39
2021-11-26T16:44:12
2021-11-26T16:44:11
eliasws
[]
Seems like this assertion was replaced by an exception but the condition got wrongly converted.
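A minimal sketch of one correct way to negate the original assertion when turning it into an exception; the helper name `check_query_shape` is illustrative, not the library's actual function.

```python
import numpy as np


def check_query_shape(query: np.ndarray) -> None:
    # Original guard: assert len(query.shape) == 1 or (len(query.shape) == 2 and query.shape[0] == 1)
    # A correct negation of that condition, i.e. what the `if` should test before raising:
    if len(query.shape) != 1 and (len(query.shape) != 2 or query.shape[0] != 1):
        raise ValueError("Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)")


check_query_shape(np.zeros(768))       # 1D: accepted
check_query_shape(np.zeros((1, 768)))  # 2D (1, N): accepted
try:
    check_query_shape(np.zeros((2, 768)))
except ValueError as err:
    print(err)                         # rejected as expected
```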
true
1,064,429,705
https://api.github.com/repos/huggingface/datasets/issues/3322
https://github.com/huggingface/datasets/pull/3322
3,322
Add missing tags to XTREME
closed
0
2021-11-26T12:37:05
2021-11-29T13:40:07
2021-11-29T13:40:06
mariosasko
[]
Add missing tags to the XTREME benchmark for better discoverability.
true
1,063,858,386
https://api.github.com/repos/huggingface/datasets/issues/3321
https://github.com/huggingface/datasets/pull/3321
3,321
Update URL of tatoeba subset of xtreme
closed
2
2021-11-25T18:42:31
2021-11-26T10:30:30
2021-11-26T10:30:30
mariosasko
[]
Updates the URL of the tatoeba subset of xtreme. Additionally, replaces `os.path.join` with `xjoin` to correctly join the URL segments on Windows. Fix #3320
true
1,063,531,992
https://api.github.com/repos/huggingface/datasets/issues/3320
https://github.com/huggingface/datasets/issues/3320
3,320
Can't get tatoeba.rus dataset
closed
0
2021-11-25T12:31:11
2021-11-26T10:30:29
2021-11-26T10:30:29
mmg10
[ "bug" ]
## Describe the bug Loading the `tatoeba.rus` subset gives an error. > FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus ## Steps to reproduce the bug ```python data=load_dataset("xtreme","tatoeba.rus", split="validation") ``` ## Solution The library tries to access the **master** branch, but in the facebookresearch GitHub repo the data now lives on the **main** branch.
false
1,062,749,654
https://api.github.com/repos/huggingface/datasets/issues/3319
https://github.com/huggingface/datasets/pull/3319
3,319
Add push_to_hub docs
closed
2
2021-11-24T18:21:11
2021-11-25T14:47:46
2021-11-25T14:47:46
lhoestq
[]
Since #3098 it's now possible to upload a dataset on the Hub directly from python using the `push_to_hub` method. I just added a section in the "Upload a dataset to the Hub" tutorial. I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :)
true
1,062,369,717
https://api.github.com/repos/huggingface/datasets/issues/3318
https://github.com/huggingface/datasets/pull/3318
3,318
Finish transition to PyArrow 3.0.0
closed
0
2021-11-24T12:30:14
2021-11-24T15:35:05
2021-11-24T15:35:04
mariosasko
[]
Finish transition to PyArrow 3.0.0 that was started in #3098.
true
1,062,284,447
https://api.github.com/repos/huggingface/datasets/issues/3317
https://github.com/huggingface/datasets/issues/3317
3,317
Add desc parameter to Dataset filter method
closed
4
2021-11-24T11:01:36
2022-01-05T18:31:24
2022-01-05T18:31:24
vblagoje
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method, both for consistency and because it's nice to give users some feedback during long operations on Datasets? **Describe the solution you'd like** Add a desc parameter to the Dataset filter method. **Describe alternatives you've considered** N/A **Additional context** N/A
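A minimal usage sketch of the requested parameter, assuming a `datasets` release in which `desc` has been added to `filter` (it did not exist at the time this issue was opened):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["short", "a somewhat longer example", "tiny"]})

# `desc` labels the progress bar, exactly as it already does for `map`.
kept = ds.filter(lambda example: len(example["text"]) > 5, desc="Dropping short texts")
print(kept["text"])  # ['a somewhat longer example']
```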
false
1,062,185,822
https://api.github.com/repos/huggingface/datasets/issues/3316
https://github.com/huggingface/datasets/issues/3316
3,316
Add RedCaps dataset
closed
0
2021-11-24T09:23:02
2022-01-12T14:13:15
2022-01-12T14:13:15
albertvillanova
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** RedCaps - **Description:** Web-curated image-text data created by the people, for the people - **Paper:** https://arxiv.org/abs/2111.11431 - **Data:** https://redcaps.xyz/ - **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Proposed by @patil-suraj
false
1,061,678,452
https://api.github.com/repos/huggingface/datasets/issues/3315
https://github.com/huggingface/datasets/pull/3315
3,315
Removing query params for dynamic URL caching
closed
5
2021-11-23T20:24:12
2021-11-25T14:44:32
2021-11-25T14:44:31
anton-l
[]
The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic. Usage example: ```python import datasets class CommonVoice(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo() def _split_generators(self, dl_manager): dl_manager.download_config.ignore_url_params = True HUGE_URL = "https://mozilla-common-voice-datasets.s3.dualstack.us-west-2.amazonaws.com/cv-corpus-7.0-2021-07-21/cv-corpus-7.0-2021-07-21-ab.tar.gz?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIAQ3GQRTO3IU5JYB5K%2F20211125%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-Date=20211125T131423Z&X-Amz-Expires=43200&X-Amz-Security-Token=FwoGZXIvYXdzEL7%2F%2F%2F%2F%2F%2F%2F%2F%2F%2FwEaDLsZw7Nj0d9h4rgheyKSBJJ6bxo1JdWLXAUhLMrUB8AXfhP8Ge4F8dtjwXmvGJgkIvdMT7P4YOEE1pS3mW8AyKsz7Z7IRVCIGQrOH1AbxGVVcDoCMMswXEOqL3nJFihKLf99%2F6l8iJVZdzftRUNgMhX5Hz0xSIL%2BzRDpH5nYa7C6YpEdOdW81CFVXybx7WUrX13wc8X4ZlUj7zrWcWf5p2VEIU5Utb7YHVi0Y5TQQiZSDoedQl0j4VmMuFkDzoobIO%2BvilgGeE2kIX0E62X423mEGNu4uQV5JsOuLAtv3GVlemsqEH3ZYrXDuxLmnvGj5HfMtySwI4vKv%2BlnnirD29o7hxvtidXiA8JMWhp93aP%2Fw7sod%2BPPbb5EqP%2B4Qb2GJ1myClOKcLEY0cqoy7XWm8NeVljLJojnFJVS5mNFBAzCCTJ%2FidxNsj8fflzkRoAzYaaPBuOTL1dgtZCdslK3FAuEvw0cik7P9A7IYiULV33otSHKMPcVfNHFsWQljs03gDztsIUWxaXvu6ck5vCcGULsHbfe6xoMPm2bR9jtKLONsslPcnzWIf7%2Fch2w%2F%2BjtTCd9IxaH4kytyJ6mIjpV%2FA%2F2h9qeDnDFsCphnMjAzPQn6tqCgTtPcyJ2b8c94ncgUnE4mepx%2FDa%2FanAEsrg9RPdmbdoPswzHn1IClh91IfSN74u95DZUxlPeZrHG5HxVCN3dKO6j%2Ft1xd20L0hEtazDdKOr8%2FYwGMirp8rp%2BII0pYOwQOrYHqH%2FREX2dRJctJtwE86Qj1eU8BAdXuFIkLC4NWXw%3D&X-Amz-Signature=1b8108d29b0e9c2bf6c7246e58ca8d5749a83de0704757ad8e8a44d78194691f&X-Amz-SignedHeaders=host" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) HUGE_URL += "&some_new_or_changed_param=12345" dl_path = dl_manager.download_and_extract(HUGE_URL) print(dl_path) dl_manager = datasets.DownloadManager(dataset_name="common_voice") CommonVoice()._split_generators(dl_manager) ``` Output: ``` /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 /home/user/.cache/huggingface/datasets/downloads/6ef2a377398ff3309554be040caa78414e6562d623dbd0ce8fc262459a7f8ec6 ```
true
1,061,448,227
https://api.github.com/repos/huggingface/datasets/issues/3314
https://github.com/huggingface/datasets/pull/3314
3,314
Adding arg to pass process rank to `map`
closed
1
2021-11-23T15:55:21
2021-11-24T11:54:13
2021-11-24T11:54:13
TevenLeScao
[]
This PR adds a `with_rank` argument to `map` that lets the user pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code; I'll add a multi-GPU example to the doc (+ write a bit in the doc for the new arg).
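A minimal usage sketch of the new argument, assuming the merged behaviour where the rank is passed as an extra argument to the mapped function; the device-selection comment is only an illustration.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"]})


def tag_with_rank(example, rank):
    # With `with_rank=True`, `map` passes the process rank as an extra argument.
    # In a real multi-GPU setup the rank would pick the device, e.g.
    # torch.device(f"cuda:{rank}"). Depending on the version, rank may be None
    # when num_proc is not set, hence the fallback to 0.
    example["rank"] = rank if rank is not None else 0
    return example


tagged = ds.map(tag_with_rank, with_rank=True, num_proc=2)
print(tagged["rank"])  # each example tagged with the rank of the process that handled it
```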
true
1,060,933,392
https://api.github.com/repos/huggingface/datasets/issues/3313
https://github.com/huggingface/datasets/issues/3313
3,313
TriviaQA License Mismatch
closed
1
2021-11-23T08:00:15
2021-11-29T11:24:21
2021-11-29T11:24:21
akhilkedia
[ "bug" ]
## Describe the bug The TriviaQA webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, the Hugging Face dataset page at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under the Apache License. Is the license information on Hugging Face correct?
false
1,060,440,346
https://api.github.com/repos/huggingface/datasets/issues/3312
https://github.com/huggingface/datasets/pull/3312
3,312
add bl books genre dataset
closed
6
2021-11-22T17:54:50
2021-12-02T16:10:29
2021-12-02T16:07:47
davanstrien
[]
First of all, thanks for the fantastic library/collection of datasets 🤗 This pull request adds a dataset of metadata from digitised (mostly 19th Century) books from the British Library. The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In addition, a subset of the data includes 'genre' information which can be used for supervised text classification tasks. I hope that this offers easier access to a dataset for doing text classification on GLAM (galleries, libraries, archives and museums) data. I have tried to create three configurations that provide both an 'easy' version of the dataset, for training a genre classification model, and a more 'raw' version of the data for other potential use cases. I am open to suggestions if this doesn't make sense. Similarly, for some of the arrow datatypes, I have had to fall back to strings since there are missing values for some fields/rows, but I may have missed a more elegant way of dealing with it.
true
1,060,387,957
https://api.github.com/repos/huggingface/datasets/issues/3311
https://github.com/huggingface/datasets/issues/3311
3,311
Add WebSRC
open
0
2021-11-22T16:58:33
2021-11-22T16:58:33
null
NielsRogge
[ "dataset request" ]
## Adding a Dataset - **Name:** WebSRC - **Description:** WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots and metadata. - **Paper:** https://arxiv.org/abs/2101.09465 - **Data:** https://x-lance.github.io/WebSRC/dashboard.html# - **Motivation:** Currently adding MarkupLM to HuggingFace Transformers, which achieves SOTA on this dataset. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
1,060,098,104
https://api.github.com/repos/huggingface/datasets/issues/3310
https://github.com/huggingface/datasets/issues/3310
3,310
Fatal error condition occurred in aws-c-io
closed
28
2021-11-22T12:27:54
2023-02-08T10:31:05
2021-11-29T22:22:37
Crabzmatic
[ "bug" ]
## Describe the bug Fatal error when using the library ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wikiann', 'en') ``` ## Expected results No fatal errors ## Actual results ``` Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\source\event_loop.c:74: aws_thread_launch(&cleanup_thread, s_event_loop_destroy_async_thread_fn, el_group, &thread_options) == AWS_OP_SUCCESS Exiting Application ``` ## Environment info - `datasets` version: 1.15.2.dev0 - Platform: Windows-10-10.0.22504-SP0 - Python version: 3.8.12 - PyArrow version: 6.0.0
false
1,059,496,154
https://api.github.com/repos/huggingface/datasets/issues/3309
https://github.com/huggingface/datasets/pull/3309
3,309
fix: files counted twice in inferred structure
closed
8
2021-11-21T21:50:38
2021-11-23T17:00:58
2021-11-23T17:00:58
borisdayma
[]
Files were counted twice in a structure like: ``` my_dataset_local_path/ ├── README.md └── data/ ├── train/ │ ├── shard_0.csv │ ├── shard_1.csv │ ├── shard_2.csv │ └── shard_3.csv └── valid/ ├── shard_0.csv └── shard_1.csv ``` The reason is that they were matching both `*train*/*` and `*train*/**/*`. This PR fixes it. @lhoestq
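A small standalone illustration of the double counting, using plain `pathlib` globbing as an approximation of the pattern resolution described above (the real code resolves data-file patterns differently, so this is only a sketch):

```python
import tempfile
from pathlib import Path

# For a flat layout, "*train*/*" and "*train*/**/*" both match the same shard files,
# so naively summing the matches of the two patterns counts every file twice.
with tempfile.TemporaryDirectory() as root:
    train_dir = Path(root) / "data" / "train"
    train_dir.mkdir(parents=True)
    for i in range(4):
        (train_dir / f"shard_{i}.csv").touch()

    shallow = set(Path(root).glob("**/*train*/*"))
    recursive = set(Path(root).glob("**/*train*/**/*"))
    print(len(shallow), len(recursive), len(shallow & recursive))  # 4 4 4 -> full overlap
```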
true
1,059,255,705
https://api.github.com/repos/huggingface/datasets/issues/3308
https://github.com/huggingface/datasets/issues/3308
3,308
"dataset_infos.json" missing for chr_en and mc4
open
3
2021-11-21T00:07:22
2022-01-19T13:55:32
null
amitness
[ "bug", "dataset bug" ]
## Describe the bug In the repository, every dataset has its metadata in a file called `dataset_infos.json`. But this file is missing for two datasets: `chr_en` and `mc4`. ## Steps to reproduce the bug Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/huggingface/datasets/tree/master/datasets/mc4)
false
1,059,226,297
https://api.github.com/repos/huggingface/datasets/issues/3307
https://github.com/huggingface/datasets/pull/3307
3,307
Add IndoNLI dataset
closed
1
2021-11-20T20:46:03
2021-11-25T14:51:48
2021-11-25T14:51:48
afaji
[]
This PR adds IndoNLI dataset, from https://aclanthology.org/2021.emnlp-main.821/
true
1,059,185,860
https://api.github.com/repos/huggingface/datasets/issues/3306
https://github.com/huggingface/datasets/issues/3306
3,306
nested sequence feature won't encode example if the first item of the outside sequence is an empty list
closed
3
2021-11-20T16:57:54
2021-12-08T13:02:15
2021-12-08T13:02:15
function2-llx
[ "bug" ]
## Describe the bug As the title, nested sequence feature won't encode example if the first item of the outside sequence is an empty list. ## Steps to reproduce the bug ```python from datasets import Features, Sequence, ClassLabel features = Features({ 'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))), }) print(features.encode_batch({ 'x': [ [['a'], ['b']], [[], ['b']], ] })) ``` ## Expected results print `{'x': [[[0], [1]], [[], ['1']]]}` ## Actual results print `{'x': [[[0], [1]], [[], ['b']]]}` ## Environment info - `datasets` version: 1.15.1 - Platform: Linux-5.13.0-21-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.0 ## Additional information I think the issue stems from [here](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/features/features.py#L847-L848).
false
1,059,161,000
https://api.github.com/repos/huggingface/datasets/issues/3305
https://github.com/huggingface/datasets/pull/3305
3,305
asserts replaced with exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``
closed
0
2021-11-20T14:51:23
2021-11-22T18:24:32
2021-11-22T17:08:13
Ishan-Kumar2
[]
Addresses #3171. Replaces asserts with exceptions in ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``, and modifies the corresponding tests.
true
1,059,130,494
https://api.github.com/repos/huggingface/datasets/issues/3304
https://github.com/huggingface/datasets/issues/3304
3,304
Dataset object has no attribute `to_tf_dataset`
closed
1
2021-11-20T12:03:59
2021-11-21T07:07:25
2021-11-21T07:07:25
RajkumarGalaxy
[ "bug" ]
I am following HuggingFace Course. I am at Fine-tuning a model. Link: https://huggingface.co/course/chapter3/2?fw=tf I use tokenize_function and `map` as mentioned in the course to process data. `# define a tokenize function` `def Tokenize_function(example):` ` return tokenizer(example['sentence'], truncation=True)` `# tokenize entire data` `tokenized_data = raw_data.map(Tokenize_function, batched=True)` I get Dataset object at this point. When I try converting this to a TF dataset object as mentioned in the course, it throws the following error. `# convert to TF dataset` `train_data = tokenized_data["train"].to_tf_dataset( ` ` columns = ['attention_mask', 'input_ids', 'token_type_ids'], ` ` label_cols = ['label'], ` ` shuffle = True, ` ` collate_fn = data_collator, ` ` batch_size = 8 ` `)` Output: `---------------------------------------------------------------------------` `AttributeError Traceback (most recent call last)` `/tmp/ipykernel_42/103099799.py in <module>` ` 1 # convert to TF dataset` `----> 2 train_data = tokenized_data["train"].to_tf_dataset( \` ` 3 columns = ['attention_mask', 'input_ids', 'token_type_ids'], \` ` 4 label_cols = ['label'], \` ` 5 shuffle = True, \` `AttributeError: 'Dataset' object has no attribute 'to_tf_dataset'` When I look for `dir(tokenized_data["train"])`, there is no method or attribute in the name of `to_tf_dataset`. Why do I get this error? And how to clear this? Please help me.
false
1,059,129,732
https://api.github.com/repos/huggingface/datasets/issues/3303
https://github.com/huggingface/datasets/issues/3303
3,303
DataCollatorWithPadding: TypeError
closed
1
2021-11-20T11:59:55
2021-11-21T07:05:37
2021-11-21T07:05:37
RajkumarGalaxy
[ "bug" ]
Hi, I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as following I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a CPU-only-device or a GPU-device. Input: ```checkpoint = 'bert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(checkpoint) data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf") ``` Output: ```--------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_42/1563280798.py in <module> 1 checkpoint = 'bert-base-uncased' 2 tokenizer = AutoTokenizer.from_pretrained(checkpoint) ----> 3 data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="pt") TypeError: __init__() got an unexpected keyword argument 'return_tensors' ``` When I call `help` method, it too confirms that there is no argument `return_tensors`. Input: ``` help(DataCollatorWithPadding.__init__) ``` Output: ``` Help on function __init__ in module transformers.data.data_collator: __init__(self, tokenizer: transformers.tokenization_utils_base.PreTrainedTokenizerBase, padding: Union[bool, str, transformers.file_utils.PaddingStrategy] = True, max_length: Union[int, NoneType] = None, pad_to_multiple_of: Union[int, NoneType] = None) -> None ``` But, the source file *[Data Collator - docs](https://huggingface.co/transformers/main_classes/data_collator.html#datacollatorwithpadding)* says that there is such an argument. By default, it returns Pytorch tensors while I need TF tensors. Where do I miss? Please help me.
false
1,058,907,168
https://api.github.com/repos/huggingface/datasets/issues/3302
https://github.com/huggingface/datasets/pull/3302
3,302
fix old_val typo in f-string
closed
0
2021-11-19T20:51:08
2021-11-25T22:14:43
2021-11-22T17:04:19
Mehdi2402
[]
This PR corrects a typo in #3277 that @Carlosbogo revealed in a comment. Related closed issue: #3257 Sorry about that 😅.
true