Dataset schema (each record below lists these fields in this order):
id: int64 (599M to 3.29B)
url: string (58 to 61 chars)
html_url: string (46 to 51 chars)
number: int64 (1 to 7.72k)
title: string (1 to 290 chars)
state: string (2 classes)
comments: int64 (0 to 70)
created_at: timestamp[s] (2020-04-14 10:18:02 to 2025-08-05 09:28:51)
updated_at: timestamp[s] (2020-04-27 16:04:17 to 2025-08-05 11:39:56)
closed_at: timestamp[s] (2020-04-14 12:01:40 to 2025-08-01 05:15:45)
user_login: string (3 to 26 chars)
labels: list (0 to 4 items)
body: string (0 to 228k chars)
is_pull_request: bool (2 classes)
993,613,113
https://api.github.com/repos/huggingface/datasets/issues/2896
https://github.com/huggingface/datasets/pull/2896
2,896
add multi-proc in `to_csv`
closed
2
2021-09-10T21:35:09
2021-10-28T05:47:33
2021-10-26T16:00:42
bhavitvyamalik
[]
This PR extends the multi-proc method used in #2747 for `to_json` to `to_csv` as well. Benchmark results on my machine with the `ascent_kb` dataset (~45% improvement compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_size 425.6553490161896 Time taken on 1 num_proc, 50000 batch_size 623.5897650718689 Time taken on 4 num_proc, 50000 batch_size 380.0402421951294 Time taken on 4 num_proc, 100000 batch_size 361.7168130874634 ``` This is still a WIP, as tests for this PR have yet to be written. I'm also exploring [this](https://arrow.apache.org/docs/python/csv.html#incremental-writing) incremental-writing approach, for which I'm using `pyarrow-5.0.0`.
true
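A minimal sketch of the idea behind PR #2896 above (hypothetical code, not the PR's implementation): workers serialize batches of rows to CSV strings and the parent process writes the chunks in order.

```python
# Hypothetical sketch of batched, multi-process CSV export; not the code from PR #2896.
import multiprocessing as mp

import pandas as pd


def _batch_to_csv(args):
    batch, include_header = args
    # Each worker serializes one batch (a dict of column -> list of values) to a CSV string.
    return pd.DataFrame(batch).to_csv(header=include_header, index=False)


def dataset_to_csv(dataset, path, batch_size=10_000, num_proc=4):
    batches = [
        (dataset[i : i + batch_size], i == 0)  # only the first batch writes the header
        for i in range(0, len(dataset), batch_size)
    ]
    with mp.Pool(num_proc) as pool, open(path, "w", newline="") as f:
        for chunk in pool.imap(_batch_to_csv, batches):  # imap preserves batch order
            f.write(chunk)
```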
993,462,274
https://api.github.com/repos/huggingface/datasets/issues/2895
https://github.com/huggingface/datasets/pull/2895
2,895
Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast
closed
0
2021-09-10T17:56:57
2021-09-21T22:50:01
2021-09-21T08:18:35
arsarabi
[]
This PR partially addresses #2252. ``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.replace_schema_metadata`` which is much faster. This PR adds a ``replace_schema_metadata`` method to all table classes, and modifies ``update_metadata_with_features`` to use it instead of ``cast``.
true
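A plain pyarrow illustration of the difference described in PR #2895 above (the metadata payload below is made up):

```python
# Plain pyarrow illustration; the metadata content is invented for the example.
import pyarrow as pa

table = pa.table({"a": [1, 2, 3]})
metadata = {b"huggingface": b'{"info": {"features": {"a": {"dtype": "int64"}}}}'}

# Table.cast rebuilds the table against a new schema, which is slow for very large tables...
slow = table.cast(table.schema.with_metadata(metadata))

# ...while replace_schema_metadata only swaps the schema metadata and reuses the existing buffers.
fast = table.replace_schema_metadata(metadata)

assert slow.schema.metadata == fast.schema.metadata
```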
993,375,654
https://api.github.com/repos/huggingface/datasets/issues/2894
https://github.com/huggingface/datasets/pull/2894
2,894
Fix COUNTER dataset
closed
0
2021-09-10T16:07:29
2021-09-10T16:27:45
2021-09-10T16:27:44
albertvillanova
[]
Fix filename generating `FileNotFoundError`. Related to #2866. CC: @severo.
true
993,342,781
https://api.github.com/repos/huggingface/datasets/issues/2893
https://github.com/huggingface/datasets/pull/2893
2,893
add mbpp dataset
closed
1
2021-09-10T15:27:30
2021-09-16T09:35:42
2021-09-16T09:35:42
lvwerra
[]
This PR adds the mbpp dataset introduced by Google [here](https://github.com/google-research/google-research/tree/master/mbpp), as mentioned in #2816. The dataset contains two versions: a full one and a sanitized one. They have slightly different schemas, and in its current state the loading preserves the original schema of each. An open question is whether to harmonize the two schemas when loading the dataset or to preserve the original ones; since not all fields overlap, the schemas cannot be made exactly the same.
true
993,274,572
https://api.github.com/repos/huggingface/datasets/issues/2892
https://github.com/huggingface/datasets/issues/2892
2,892
Error when encoding a dataset with None objects with a Sequence feature
closed
1
2021-09-10T14:11:43
2021-09-13T14:18:13
2021-09-13T14:17:42
lhoestq
[ "bug" ]
There is an error when encoding a dataset that contains None objects with a Sequence feature. To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ```python --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-24-40add67f8751> in <module> 2 data = {"a": [[0], None]} 3 features = Features({"a": Sequence(Value("int32"))}) ----> 4 dataset = Dataset.from_dict(data, features=features) [...] ~/datasets/features.py in encode_nested_example(schema, obj) 888 if isinstance(obj, str): # don't interpret a string as a list 889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj)) --> 890 return [encode_nested_example(schema.feature, o) for o in obj] 891 # Object with special encoding: 892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks TypeError: 'NoneType' object is not iterable ``` Instead, it should run without error, as if the `features` were not passed.
false
993,161,984
https://api.github.com/repos/huggingface/datasets/issues/2891
https://github.com/huggingface/datasets/pull/2891
2,891
Allow dynamic first dimension for ArrayXD
closed
9
2021-09-10T11:52:52
2021-11-23T15:33:13
2021-10-29T09:37:17
rpowalski
[]
Add support for a dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887). The following changes allow the `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays whose first dimension can vary. @lhoestq Could you suggest how you want to extend the test suite? For now I added only very limited testing.
true
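A hedged sketch of the usage PR #2891 above aims for: passing `None` as the first dimension of an ArrayXD shape. This is an assumption about the eventual API, not code taken from the PR.

```python
# Assumed usage of a dynamic first dimension for ArrayXD features (sketch, API details may differ).
from datasets import Array2D, Dataset, Features

features = Features({"x": Array2D(shape=(None, 3), dtype="int32")})
ds = Dataset.from_dict(
    {"x": [[[1, 2, 3]], [[1, 2, 3], [4, 5, 6]]]},  # rows with first dimensions 1 and 2
    features=features,
)
for row in ds.with_format("numpy")["x"]:
    print(row.shape)  # e.g. (1, 3) then (2, 3)
```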
993,074,102
https://api.github.com/repos/huggingface/datasets/issues/2890
https://github.com/huggingface/datasets/issues/2890
2,890
0x290B112ED1280537B24Ee6C268a004994a16e6CE
closed
0
2021-09-10T09:51:17
2021-09-10T11:45:29
2021-09-10T11:45:29
rcacho172
[ "dataset request" ]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
992,968,382
https://api.github.com/repos/huggingface/datasets/issues/2889
https://github.com/huggingface/datasets/issues/2889
2,889
Coc
closed
0
2021-09-10T07:32:07
2021-09-10T11:45:54
2021-09-10T11:45:54
Bwiggity
[ "dataset request" ]
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
992,676,535
https://api.github.com/repos/huggingface/datasets/issues/2888
https://github.com/huggingface/datasets/issues/2888
2,888
v1.11.1 release date
closed
2
2021-09-09T21:53:15
2021-09-12T20:18:35
2021-09-12T16:15:39
fcakyon
[ "question" ]
Hello, I need to use the latest features in one of my packages, but there has been no new `datasets` release for two months. When do you plan to publish the v1.11.1 release?
false
992,576,305
https://api.github.com/repos/huggingface/datasets/issues/2887
https://github.com/huggingface/datasets/pull/2887
2,887
#2837 Use cache folder for lockfile
closed
1
2021-09-09T19:55:56
2021-10-05T17:58:22
2021-10-05T17:58:22
Dref360
[]
Fixes #2837. Use a cache directory to store the FileLock. The issue was that the lock file was in a read-only folder.
true
992,534,632
https://api.github.com/repos/huggingface/datasets/issues/2886
https://github.com/huggingface/datasets/issues/2886
2,886
Hj
closed
0
2021-09-09T18:58:52
2021-09-10T11:46:29
2021-09-10T11:46:29
Noorasri
[]
null
false
992,160,544
https://api.github.com/repos/huggingface/datasets/issues/2885
https://github.com/huggingface/datasets/issues/2885
2,885
Adding an Elastic Search index to a Dataset
open
3
2021-09-09T12:21:39
2021-10-20T18:57:11
null
MotzWanted
[ "bug" ]
## Describe the bug When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break: Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 90%|████████████████████████████████████████████▉ | 9501/10570 [00:01<00:00, 6335.61docs/s] No error is thrown, but the indexing breaks ~90%. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset from elasticsearch import Elasticsearch es = Elasticsearch() squad = load_dataset('squad', split='validation') index_name = "corpus" es_config = { "settings": { "number_of_shards": 1, "analysis": {"analyzer": {"stop_standard": {"type": "standard", " stopwords": "_english_"}}}, }, "mappings": { "properties": { "idx" : {"type" : "keyword"}, "title" : {"type" : "keyword"}, "text": { "type": "text", "analyzer": "standard", "similarity": "BM25" }, } }, } class IndexBuilder: """ Elastic search indexing of a corpus """ def __init__( self, *args, #corpus : None, dataset : squad, index_name = str, query = str, config = dict, **kwargs, ): #instantiate HuggingFace dataset self.dataset = dataset #instantiate ElasticSearch config self.config = config self.es = Elasticsearch() self.index_name = index_name self.query = query def elastic_index(self): print(self.es.info) self.es.indices.delete(index=self.index_name, ignore=[400, 404]) search_index = self.dataset.add_elasticsearch_index(column='context', host='localhost', port='9200', es_index_name=self.index_name, es_index_config=self.config) return search_index def exact_match_method(self, index): scores, retrieved_examples = index.get_nearest_examples('context', query=self.query, k=1) return scores, retrieved_examples if __name__ == "__main__": print(type(squad)) Index = IndexBuilder(dataset=squad, index_name='corpus_index', query='Where was Chopin born?', config=es_config) search_index = Index.elastic_index() scores, examples = Index.exact_match_method(search_index) print(scores, examples) for name in squad.column_names: print(type(squad[name])) ``` ## Environment info We run the code in Poetry. This might be the issue, since the script runs successfully in our local environment. Poetry: - Python version: 3.8 - PyArrow: 4.0.1 - Elasticsearch: 7.13.4 - datasets: 1.10.2 Local: - Python version: 3.8 - PyArrow: 3.0.0 - Elasticsearch: 7.7.1 - datasets: 1.7.0
false
992,135,698
https://api.github.com/repos/huggingface/datasets/issues/2884
https://github.com/huggingface/datasets/pull/2884
2,884
Add IC, SI, ER tasks to SUPERB
closed
4
2021-09-09T11:56:03
2021-09-20T09:17:58
2021-09-20T09:00:49
anton-l
[]
This PR adds 3 additional classification tasks to SUPERB. #### Intent Classification Dataset URL seems to be down at the moment :( See the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#ic-intent-classification---fluent-speech-commands #### Speaker Identification Manual download script: ``` mkdir VoxCeleb1 cd VoxCeleb1 wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partaa wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partab wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partac wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_dev_wav_partad cat vox1_dev* > vox1_dev_wav.zip unzip vox1_dev_wav.zip wget https://thor.robots.ox.ac.uk/~vgg/data/voxceleb/vox1a/vox1_test_wav.zip unzip vox1_test_wav.zip # download the official SUPERB train-dev-test split wget https://raw.githubusercontent.com/s3prl/s3prl/master/s3prl/downstream/voxceleb1/veri_test_class.txt ``` S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/voxceleb1/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sid-speaker-identification #### Emotion Recognition Manual download requires going through a slow application process, see the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/emotion/IEMOCAP_preprocess.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#er-emotion-recognition #### :warning: Note These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingface.co/datasets/anton-l/superb_source_data_dumps/tree/main
true
991,969,875
https://api.github.com/repos/huggingface/datasets/issues/2883
https://github.com/huggingface/datasets/pull/2883
2,883
Fix data URLs and metadata in DocRED dataset
closed
0
2021-09-09T08:55:34
2021-09-13T11:24:31
2021-09-13T11:24:31
albertvillanova
[]
The host of `docred` dataset has updated the `dev` data file. This PR: - Updates the dev URL - Updates dataset metadata This PR also fixes the URL of the `train_distant` split, which was wrong. Fix #2882.
true
991,800,141
https://api.github.com/repos/huggingface/datasets/issues/2882
https://github.com/huggingface/datasets/issues/2882
2,882
`load_dataset('docred')` results in a `NonMatchingChecksumError`
closed
1
2021-09-09T05:55:02
2021-09-13T11:24:30
2021-09-13T11:24:30
tmpr
[ "bug" ]
## Describe the bug I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`. ## Steps to reproduce the bug It is quasi only this code: ```python import datasets data = datasets.load_dataset('docred') ``` ## Expected results The DocRED dataset should be loaded without any problems. ## Actual results ``` NonMatchingChecksumError Traceback (most recent call last) <ipython-input-4-b1b83f25a16c> in <module> ----> 1 d = datasets.load_dataset('docred') ~/anaconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 845 846 # Download and prepare data --> 847 builder_instance.download_and_prepare( 848 download_config=download_config, 849 download_mode=download_mode, ~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 613 logger.warning("HF google storage unreachable. Downloading and preparing it from source") 614 if not downloaded_from_gcs: --> 615 self._download_and_prepare( 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 617 ) ~/anaconda3/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 673 # Checksums verification 674 if verify_infos: --> 675 verify_checksums( 676 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 677 ) ~/anaconda3/lib/python3.8/site-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 38 if len(bad_urls) > 0: 39 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 41 logger.info("All the checksums matched successfully" + for_verification_name) 42 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=1fDmfUUo5G7gfaoqWWvK81u08m71TK2g7'] ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Linux-5.11.0-7633-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyArrow version: 5.0.0 This error also happened on my Windows-partition, after freshly installing python 3.9 and `datasets`. ## Remarks - I have already called `rm -rf /home/<user>/.cache/huggingface`, i.e., I have tried clearing the cache. - The problem does not exist for other datasets, i.e., it seems to be DocRED-specific.
false
991,639,142
https://api.github.com/repos/huggingface/datasets/issues/2881
https://github.com/huggingface/datasets/pull/2881
2,881
Add BIOSSES dataset
closed
0
2021-09-09T00:35:36
2021-09-13T14:20:40
2021-09-13T14:20:40
bwang482
[]
Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in "Biomedical Datasets - BigScience Workshop 2021"
true
990,877,940
https://api.github.com/repos/huggingface/datasets/issues/2880
https://github.com/huggingface/datasets/pull/2880
2,880
Extend support for streaming datasets that use pathlib.Path stem/suffix
closed
0
2021-09-08T08:42:43
2021-09-09T13:13:29
2021-09-09T13:13:29
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`. Related to #2876, #2874, #2866. CC: @severo
true
990,257,404
https://api.github.com/repos/huggingface/datasets/issues/2879
https://github.com/huggingface/datasets/issues/2879
2,879
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
closed
3
2021-09-07T18:53:45
2021-09-08T16:55:19
2021-09-08T09:12:28
rcgale
[ "bug" ]
## Describe the bug Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same. ## Steps to reproduce the bug I was following this tutorial - https://huggingface.co/blog/fine-tune-wav2vec2-english But here's a distilled repro: ```python !pip install datasets==1.4.1 from datasets import load_dataset timit = load_dataset("timit_asr", cache_dir="./temp") unique_transcripts = set(timit["train"]["text"]) print(unique_transcripts) assert len(unique_transcripts) > 1 ``` ## Expected results Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it. ## Actual results Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore." ## Environment info - `datasets` version: 1.4.1 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.9 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: tried both - Using distributed or parallel set-up in script?: no -
false
990,093,316
https://api.github.com/repos/huggingface/datasets/issues/2878
https://github.com/huggingface/datasets/issues/2878
2,878
NotADirectoryError: [WinError 267] During load_from_disk
open
0
2021-09-07T15:15:05
2021-09-07T15:15:05
null
Grassycup
[ "bug" ]
## Describe the bug Trying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds on non-windows environment (AWS Sagemaker). ## Steps to reproduce the bug ```python # Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-processed-dataset-from-s3 from datasets import load_from_disk from datasets.filesystems import S3FileSystem s3_file = "output of save_to_disk" s3_filesystem = S3FileSystem() load_from_disk(s3_file, fs=s3_filesystem) ``` ## Expected results load_from_disk succeeds without error ## Actual results Seems like it succeeds in pulling the file into a windows temp directory, as it exists in my system, but fails to process it. ``` Exception ignored in: <finalize object at 0x26409231ce0; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' Exception ignored in: <finalize object at 0x264091c7880; dead> Traceback (most recent call last): File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\weakref.py", line 566, in __call__ return info.func(*info.args, **(info.kwargs or {})) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 817, in _cleanup cls._rmtree(name) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in 
_rmtree_unsafe _rmtree_unsafe(fullname, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 613, in _rmtree_unsafe _rmtree_unsafe(fullname, onerror) [Previous line repeated 2 more times] File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 618, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 805, in onerror cls._rmtree(path) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\tempfile.py", line 813, in _rmtree _shutil.rmtree(name, onerror=onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 740, in rmtree return _rmtree_unsafe(path, onerror) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 599, in _rmtree_unsafe onerror(os.scandir, path, sys.exc_info()) File "C:\Users\grassycup\Anaconda3\envs\hello.world\lib\shutil.py", line 596, in _rmtree_unsafe with os.scandir(path) as scandir_it: NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\grassycup\\AppData\\Local\\Temp\\tmp45f_qbma\\tests3bucket\\output\\test_output\\train\\dataset.arrow' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.8.11 - PyArrow version: 3.0.0
false
990,027,249
https://api.github.com/repos/huggingface/datasets/issues/2877
https://github.com/huggingface/datasets/issues/2877
2,877
Don't keep the dummy data folder or dataset_infos.json when resolving data files
closed
2
2021-09-07T14:09:04
2021-09-29T09:05:38
2021-09-29T09:05:38
lhoestq
[ "enhancement" ]
When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files. There are already a few exceptions: - files starting with "." are ignored - the dataset card "README.md" is ignored - any file named "config.json" is ignored (currently it isn't used anywhere, but it could be used in the future to define splits or configs for example, but not 100% sure) However any data files in a folder named "dummy" should be ignored as well as they should only be used to test the dataset. Same for "dataset_infos.json" which should only be used to get the `dataset.info`
false
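A standalone sketch of the filtering rule proposed in issue #2877 above (hypothetical helper, not the actual data file resolver in `datasets`):

```python
# Hypothetical helper mirroring the rules listed in issue #2877; not the real resolver code.
from pathlib import PurePosixPath

IGNORED_NAMES = {"README.md", "config.json", "dataset_infos.json"}


def is_data_file(relative_path: str) -> bool:
    parts = PurePosixPath(relative_path).parts
    # Skip hidden files/folders and anything under a "dummy" folder.
    if any(part.startswith(".") or part == "dummy" for part in parts):
        return False
    return parts[-1] not in IGNORED_NAMES


print(is_data_file("data/train.csv"))              # True
print(is_data_file("dummy/1.0.0/dummy_data.zip"))  # False
print(is_data_file("dataset_infos.json"))          # False
```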
990,001,079
https://api.github.com/repos/huggingface/datasets/issues/2876
https://github.com/huggingface/datasets/pull/2876
2,876
Extend support for streaming datasets that use pathlib.Path.glob
closed
2
2021-09-07T13:43:45
2021-09-10T09:50:49
2021-09-10T09:50:48
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`. Related to #2874, #2866. CC: @severo
true
989,919,398
https://api.github.com/repos/huggingface/datasets/issues/2875
https://github.com/huggingface/datasets/issues/2875
2,875
Add Congolese Swahili speech datasets
open
0
2021-09-07T12:13:50
2021-09-07T12:13:50
null
osanseviero
[ "dataset request", "speech" ]
## Adding a Dataset - **Name:** Congolese Swahili speech corpora - **Data:** https://gamayun.translatorswb.org/data/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Also related: https://mobile.twitter.com/OktemAlp/status/1435196393631764482
false
989,685,328
https://api.github.com/repos/huggingface/datasets/issues/2874
https://github.com/huggingface/datasets/pull/2874
2,874
Support streaming datasets that use pathlib
closed
3
2021-09-07T07:35:49
2021-09-07T18:25:22
2021-09-07T11:41:15
albertvillanova
[]
This PR extends the support in streaming mode for datasets that use `pathlib.Path`. Related to: #2866. CC: @severo
true
989,587,695
https://api.github.com/repos/huggingface/datasets/issues/2873
https://github.com/huggingface/datasets/pull/2873
2,873
adding swedish_medical_ner
closed
2
2021-09-07T04:44:53
2021-09-17T20:47:37
2021-09-17T20:47:37
bwang482
[]
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" Code refactored
true
989,453,069
https://api.github.com/repos/huggingface/datasets/issues/2872
https://github.com/huggingface/datasets/pull/2872
2,872
adding swedish_medical_ner
closed
0
2021-09-06T22:00:52
2021-09-07T04:36:32
2021-09-07T04:36:32
bwang482
[]
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
true
989,436,088
https://api.github.com/repos/huggingface/datasets/issues/2871
https://github.com/huggingface/datasets/issues/2871
2,871
datasets.config.PYARROW_VERSION has no attribute 'major'
closed
5
2021-09-06T21:06:57
2021-09-08T08:51:52
2021-09-08T08:51:52
bwang482
[ "bug" ]
In the test_dataset_common.py script, lines 288-289 ``` if datasets.config.PYARROW_VERSION.major < 3: packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"] ``` throw the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both `datasets.__version__=='1.11.0'` and `'1.9.0'`. I am using Mac OS. ``` import datasets datasets.config.PYARROW_VERSION.major --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) /var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module> 1 import datasets ----> 2 datasets.config.PYARROW_VERSION.major AttributeError: 'str' object has no attribute 'major' ``` ## Environment info - `datasets` version: 1.11.0 - Platform: Darwin-20.6.0-x86_64-i386-64bit - Python version: 3.7.11 - PyArrow version: 4.0.1
false
988,276,859
https://api.github.com/repos/huggingface/datasets/issues/2870
https://github.com/huggingface/datasets/pull/2870
2,870
Fix three typos in two files for documentation
closed
0
2021-09-04T11:49:43
2021-09-06T08:21:21
2021-09-06T08:19:35
leny-mi
[]
Changed "bacth_size" to "batch_size" (2x) Changed "intsructions" to "instructions"
true
987,676,420
https://api.github.com/repos/huggingface/datasets/issues/2869
https://github.com/huggingface/datasets/issues/2869
2,869
TypeError: 'NoneType' object is not callable
closed
17
2021-09-03T11:27:39
2025-02-19T09:57:34
2021-09-08T09:24:55
Chenfei-Kang
[ "bug" ]
## Describe the bug TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric dataset = datasets.load_dataset("glue", 'cola') ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: - Python version: 3.7 - PyArrow version:
false
987,139,146
https://api.github.com/repos/huggingface/datasets/issues/2868
https://github.com/huggingface/datasets/issues/2868
2,868
Add Common Objects in 3D (CO3D)
open
0
2021-09-02T20:36:12
2024-01-17T12:03:59
null
nateraw
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** *Common Objects in 3D (CO3D)* - **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)* - **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)* - **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-downloads/)* - **Motivation:** *excerpt from above blog post:* > As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. Here, photorealistic NVS is a major step on the path to fully immersive AR/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences. > > Besides practical applications in AR/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model. > Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
986,971,224
https://api.github.com/repos/huggingface/datasets/issues/2867
https://github.com/huggingface/datasets/pull/2867
2,867
Add CaSiNo dataset
closed
3
2021-09-02T17:06:23
2021-09-16T15:12:54
2021-09-16T09:23:44
kushalchawla
[]
Hi. I request you to add our dataset to the repository. This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
true
986,706,676
https://api.github.com/repos/huggingface/datasets/issues/2866
https://github.com/huggingface/datasets/issues/2866
2,866
"counter" dataset raises an error in normal mode, but not in streaming mode
closed
11
2021-09-02T13:10:53
2021-10-14T09:24:09
2021-10-14T09:24:09
severo
[ "bug" ]
## Describe the bug `counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode. ## Steps to reproduce the bug ```python >>> import datasets as ds >>> a = ds.load_dataset('counter', split="train", streaming=False) Using custom data configuration default Downloading and preparing dataset counter/default (download: 1.29 MiB, generated: 2.48 MiB, post-processed: Unknown size, total: 3.77 MiB) to /home/slesage/.cache/huggingface/datasets/counter/default/1.0.0/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9... Traceback (most recent call last): File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 726, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 1124, in _prepare_split for key, record in utils.tqdm( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/tqdm/std.py", line 1185, in __iter__ for obj in iterable: File "/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/counter/9f84962fa0f35bec5a34fe0bdff8681838d497008c457f7856c48654476ec0e9/counter.py", line 161, in _generate_examples with derived_file.open(encoding="utf-8") as f: File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1222, in open return io.open(self, mode, buffering, encoding, errors, newline, File "/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/pathlib.py", line 1078, in _opener return self._accessor.open(self, flags, mode) FileNotFoundError: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/load.py", line 1112, in load_dataset builder_instance.download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 636, in download_and_prepare self._download_and_prepare( File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.8/site-packages/datasets/builder.py", line 728, in _download_and_prepare raise OSError( OSError: Cannot find data file. Original error: [Errno 2] No such file or directory: '/home/slesage/.cache/huggingface/datasets/downloads/extracted/b57aa6db5601a738e57b95c1fd8cced54ff28fc540efcdaf0f6c4f1bb5dfe211/COUNTER/0032p.xml' ``` ```python >>> import datasets as ds >>> b = ds.load_dataset('counter', split="train", streaming=True) Using custom data configuration default >>> list(b) [] ``` ## Expected results An exception should be raised in streaming mode ## Actual results No exception is raised in streaming mode: there is no way to tell if something has broken or if the dataset is simply empty. ## Environment info - `datasets` version: 1.11.1.dev0 - Platform: Linux-5.11.0-1016-aws-x86_64-with-glibc2.29 - Python version: 3.8.11 - PyArrow version: 4.0.1
false
986,460,698
https://api.github.com/repos/huggingface/datasets/issues/2865
https://github.com/huggingface/datasets/pull/2865
2,865
Add MultiEURLEX dataset
closed
6
2021-09-02T09:42:24
2021-09-10T11:50:06
2021-09-10T11:50:06
iliaschalkidis
[]
**Add new MultiEURLEX Dataset** MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publications Office of the EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is a multi-label classification task (given the text, predict multiple labels).
true
986,159,438
https://api.github.com/repos/huggingface/datasets/issues/2864
https://github.com/huggingface/datasets/pull/2864
2,864
Fix data URL in ToTTo dataset
closed
0
2021-09-02T05:25:08
2021-09-02T06:47:40
2021-09-02T06:47:40
albertvillanova
[]
Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
true
986,156,755
https://api.github.com/repos/huggingface/datasets/issues/2863
https://github.com/huggingface/datasets/pull/2863
2,863
Update dataset URL
closed
1
2021-09-02T05:22:18
2021-09-02T08:10:50
2021-09-02T08:10:50
mrm8488
[]
null
true
985,081,871
https://api.github.com/repos/huggingface/datasets/issues/2861
https://github.com/huggingface/datasets/pull/2861
2,861
fix: 🐛 be more specific when catching exceptions
closed
6
2021-09-01T12:18:12
2021-09-02T09:53:36
2021-09-02T09:52:03
severo
[]
The same specific exception is caught in other parts of the same function.
true
985,013,339
https://api.github.com/repos/huggingface/datasets/issues/2860
https://github.com/huggingface/datasets/issues/2860
2,860
Cannot download TOTTO dataset
closed
1
2021-09-01T11:04:10
2021-09-02T06:47:40
2021-09-02T06:47:40
mrm8488
[ "bug" ]
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip `datasets version: 1.11.0` # How to reproduce: ```py from datasets import load_dataset dataset = load_dataset('totto') ```
false
984,324,500
https://api.github.com/repos/huggingface/datasets/issues/2859
https://github.com/huggingface/datasets/issues/2859
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
closed
2
2021-08-31T21:11:04
2021-10-12T07:35:52
2021-10-11T11:05:51
lhoestq
[ "enhancement", "streaming" ]
This does 60,000+ HEAD requests to get all the ETags of all the data files: ```python from datasets import load_dataset load_dataset("allenai/c4", streaming=True) ``` It makes loading the dataset completely impractical. The ETags are used to compute the config id (it must depend on the data files being used). Instead of using the ETags, we could simply use the commit hash of the dataset repository on the Hub, as well as the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository).
false
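A toy sketch of the alternative proposed in issue #2859 above: derive the config id from the repository revision plus the glob pattern instead of one ETag per file. The function name and hashing scheme are illustrative only.

```python
# Illustrative only: a config id derived from repo revision + glob pattern, with no HEAD requests.
import hashlib


def config_id_from_revision(repo_id: str, commit_sha: str, pattern: str = "*") -> str:
    payload = f"{repo_id}@{commit_sha}::{pattern}".encode()
    return hashlib.sha256(payload).hexdigest()[:16]


# Placeholder commit hash; the real value would come from the Hub API for the dataset repo.
print(config_id_from_revision("allenai/c4", "abc123def456", "*"))
```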
984,145,568
https://api.github.com/repos/huggingface/datasets/issues/2858
https://github.com/huggingface/datasets/pull/2858
2,858
Fix s3fs version in CI
closed
0
2021-08-31T18:05:43
2021-09-06T13:33:35
2021-08-31T21:29:51
lhoestq
[]
The latest s3fs version has new constrains on aiobotocore, and therefore on boto3 and botocore This PR changes the constrains to avoid the new conflicts In particular it pins the version of s3fs.
true
984,093,938
https://api.github.com/repos/huggingface/datasets/issues/2857
https://github.com/huggingface/datasets/pull/2857
2,857
Update: Openwebtext - update size
closed
1
2021-08-31T17:11:03
2022-02-15T10:38:03
2021-09-07T09:44:32
lhoestq
[]
Update the size of the Openwebtext dataset. I also regenerated the dataset_infos.json, but the data file checksum didn't change, and neither did the number of examples (8013769 examples). Close #2839, close #726.
true
983,876,734
https://api.github.com/repos/huggingface/datasets/issues/2856
https://github.com/huggingface/datasets/pull/2856
2,856
fix: 🐛 remove URL's query string only if it's ?dl=1
closed
0
2021-08-31T13:40:07
2021-08-31T14:22:12
2021-08-31T14:22:12
severo
[]
A lot of URLs use query strings, for example http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip, and we must not remove them when trying to detect the protocol. We thus remove the query string only when it is ?dl=1, which occurs on Dropbox and dl.orangedox.com. Also: add unit tests. See https://github.com/huggingface/datasets/pull/2843 for the original discussion.
true
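A sketch of the rule described in PR #2856 above (an assumption about the behavior; the real implementation in `datasets` differs in its details):

```python
# Sketch: drop the query string before protocol inference only when it is exactly "?dl=1".
from urllib.parse import urlparse


def path_for_protocol_inference(url: str) -> str:
    parsed = urlparse(url)
    if parsed.query == "dl=1":  # Dropbox / dl.orangedox.com style links
        return parsed.path
    return parsed.path + (f"?{parsed.query}" if parsed.query else "")


# Hypothetical Dropbox-style URL kept simple for the example.
print(path_for_protocol_inference("https://www.dropbox.com/s/abc/train.json.gz?dl=1"))
print(path_for_protocol_inference("http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip"))
```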
983,858,229
https://api.github.com/repos/huggingface/datasets/issues/2855
https://github.com/huggingface/datasets/pull/2855
2,855
Fix windows CI CondaError
closed
0
2021-08-31T13:22:02
2021-08-31T13:35:34
2021-08-31T13:35:33
lhoestq
[]
From this thread: https://github.com/conda/conda/issues/6057 We can fix the conda error ``` CondaError: Cannot link a source that does not exist. C:\Users\...\Anaconda3\Scripts\conda.exe ``` by doing ```bash conda update conda ``` before doing any install in the windows CI
true
983,726,084
https://api.github.com/repos/huggingface/datasets/issues/2854
https://github.com/huggingface/datasets/pull/2854
2,854
Fix caching when moving script
closed
1
2021-08-31T10:58:35
2021-08-31T13:13:36
2021-08-31T13:13:36
lhoestq
[]
When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code. Using the full path of the python script for the location of the code makes the hash change if a script like `run_mlm.py` is moved. I changed this by simply using the base name of the script instead of the full path. Note that this change also affects the hash of the code used from imported modules, but I think it's fine. Indeed it hashes the code of the imported modules anyway, so the location of the python files of the imported modules doesn't matter when computing the hash. Close https://github.com/huggingface/datasets/issues/2825
true
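A toy illustration of why hashing the full script path (the problem PR #2854 above fixes) broke caching when a script moved. The fingerprint function here is made up; it only shows how the code location feeds the hash.

```python
# Made-up fingerprint function, only to show why the code location matters for the cache hash.
import hashlib
import os


def fingerprint(code: str, location: str) -> str:
    return hashlib.sha256(f"{location}\n{code}".encode()).hexdigest()[:8]


code = "def tokenize(example): return example"
print(fingerprint(code, "/old/place/run_mlm.py"))                    # changes when the file moves...
print(fingerprint(code, "/new/place/run_mlm.py"))
print(fingerprint(code, os.path.basename("/new/place/run_mlm.py")))  # ...the base name stays stable
```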
983,692,026
https://api.github.com/repos/huggingface/datasets/issues/2853
https://github.com/huggingface/datasets/pull/2853
2,853
Add AMI dataset
closed
2
2021-08-31T10:19:01
2021-09-29T09:19:19
2021-09-29T09:19:19
cahya-wirawan
[]
This is an initial commit for AMI dataset
true
983,609,352
https://api.github.com/repos/huggingface/datasets/issues/2852
https://github.com/huggingface/datasets/pull/2852
2,852
Fix: linnaeus - fix url
closed
1
2021-08-31T08:51:13
2021-08-31T13:12:10
2021-08-31T13:12:09
lhoestq
[]
The url was causing a `ConnectionError` because of the "/" at the end Close https://github.com/huggingface/datasets/issues/2821
true
982,789,593
https://api.github.com/repos/huggingface/datasets/issues/2851
https://github.com/huggingface/datasets/pull/2851
2,851
Update `column_names` showed as `:func:` in exploring.st
closed
0
2021-08-30T13:21:46
2021-09-01T08:42:11
2021-08-31T14:45:46
ClementRomac
[]
Hi, One mention of `column_names` in exploring.st was showing it as `:func:` instead of `:attr:`.
true
982,654,644
https://api.github.com/repos/huggingface/datasets/issues/2850
https://github.com/huggingface/datasets/issues/2850
2,850
Wound segmentation datasets
open
0
2021-08-30T10:44:32
2021-12-08T12:02:00
null
osanseviero
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** Wound segmentation datasets - **Description:** annotated wound image dataset - **Paper:** https://www.nature.com/articles/s41598-020-78799-w - **Data:** https://github.com/uwm-bigdata/wound-segmentation - **Motivation:** Interesting simple image dataset, useful for segmentation, with visibility due to http://www.miccai.org/special-interest-groups/challenges/ and https://fusc.grand-challenge.org/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
982,631,420
https://api.github.com/repos/huggingface/datasets/issues/2849
https://github.com/huggingface/datasets/issues/2849
2,849
Add Open Catalyst Project Dataset
open
0
2021-08-30T10:14:39
2021-08-30T10:14:39
null
osanseviero
[ "dataset request" ]
## Adding a Dataset - **Name:** Open Catalyst 2020 (OC20) Dataset - **Website:** https://opencatalystproject.org/ - **Data:** https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
981,953,908
https://api.github.com/repos/huggingface/datasets/issues/2848
https://github.com/huggingface/datasets/pull/2848
2,848
Update README.md
closed
1
2021-08-28T23:58:26
2021-09-07T09:40:32
2021-09-07T09:40:32
odellus
[]
Changed 'Tain' to 'Train'.
true
981,589,693
https://api.github.com/repos/huggingface/datasets/issues/2847
https://github.com/huggingface/datasets/pull/2847
2,847
fix regex to accept negative timezone
closed
0
2021-08-27T20:54:05
2021-09-13T20:39:50
2021-09-07T09:34:23
jadermcs
[]
fix #2846
true
981,587,590
https://api.github.com/repos/huggingface/datasets/issues/2846
https://github.com/huggingface/datasets/issues/2846
2,846
Negative timezone
closed
1
2021-08-27T20:50:33
2021-09-10T11:51:07
2021-09-10T11:51:07
jadermcs
[ "bug" ]
## Describe the bug The load_dataset method does not accept a parquet file with a negative timezone, as it has the following regex: ``` "^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$" ``` So a valid timestamp ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files. ## Steps to reproduce the bug ```python # Where the timestamp column has a tz of -03:00 datasets = load_dataset('parquet', data_files={'train': train_files, 'validation': validation_files, 'test': test_files}, cache_dir="./cache_teste/") ``` ## Expected results -03:00 is a valid tz, so the regex should accept it without raising an error. ## Actual results Since this regex rejects a valid tz, it raises the following error: ```python raise ValueError( f"{datasets_dtype} is not a validly formatted string representation of a pyarrow timestamp." f"Examples include timestamp[us] or timestamp[us, tz=America/New_York]" f"See: https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp" ) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Ubuntu 20.04 - Python version: 3.8 - PyArrow version: 5.0.0
false
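A sketch of the adjusted pattern for issue #2846 above (an assumption about the fix in #2847; the actual regex in `datasets` may differ):

```python
# Assumed fix: allow "-" inside the timezone part so signed offsets like -03:00 match.
import re

OLD = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$")
NEW = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$")

print(bool(OLD.match("us, tz=-03:00")))            # False: rejected by the original pattern
print(bool(NEW.match("us, tz=-03:00")))            # True
print(bool(NEW.match("us, tz=America/New_York")))  # True: existing cases still match
```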
981,487,861
https://api.github.com/repos/huggingface/datasets/issues/2845
https://github.com/huggingface/datasets/issues/2845
2,845
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
open
0
2021-08-27T18:21:51
2021-08-27T18:24:05
null
stas00
[ "enhancement" ]
Often, there is a need to prepare a dataset but not use it immediately, e.g. think test suite setup, so it'd be really useful to be able to do: ``` if not datasets.is_dataset_cached(ds): datasets.cache_dataset(ds) ``` This can already be done with: ``` builder = load_dataset_builder(ds) if not os.path.isdir(builder.cache_dir): builder.download_and_prepare() ``` but the current way is far less intuitive and much harder to remember than the proposed API, IMHO. One more way is to do: ``` _ = load_dataset(ds) ``` but it wastes resources loading the dataset when it's not needed. This has been discussed at https://huggingface.slack.com/archives/C01229B19EX/p1630021912025800 Thank you! @lhoestq
false
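A sketch of the helpers requested in issue #2845 above, built on the workaround already shown in the issue. `is_dataset_cached` and `cache_dataset` are hypothetical names, not a real `datasets` API.

```python
# Hypothetical helpers; only load_dataset_builder / download_and_prepare are existing datasets APIs.
import os

from datasets import load_dataset_builder


def is_dataset_cached(name: str, **kwargs) -> bool:
    builder = load_dataset_builder(name, **kwargs)
    return os.path.isdir(builder.cache_dir) and bool(os.listdir(builder.cache_dir))


def cache_dataset(name: str, **kwargs) -> None:
    builder = load_dataset_builder(name, **kwargs)
    if not os.path.isdir(builder.cache_dir):
        builder.download_and_prepare()
```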
981,382,806
https://api.github.com/repos/huggingface/datasets/issues/2844
https://github.com/huggingface/datasets/pull/2844
2,844
Fix: wikicorpus - fix keys
closed
1
2021-08-27T15:56:06
2021-09-06T14:07:28
2021-09-06T14:07:27
lhoestq
[]
As mentioned in https://github.com/huggingface/datasets/issues/2552, there is a duplicate keys error in `wikicorpus`. I fixed that by taking into account the file index in the keys
true
981,317,775
https://api.github.com/repos/huggingface/datasets/issues/2843
https://github.com/huggingface/datasets/pull/2843
2,843
Fix extraction protocol inference from urls with params
closed
3
2021-08-27T14:40:57
2021-08-30T17:11:49
2021-08-30T13:12:01
lhoestq
[]
Previously it was unable to infer the compression protocol for files at URLs like ``` https://foo.bar/train.json.gz?dl=1 ``` because of the query parameters. I fixed that, this should allow 10+ datasets to work in streaming mode: ``` "discovery", "emotion", "grail_qa", "guardian_authorship", "pragmeval", "simple_questions_v2", "versae/adobo", "w-nicole/childes_data", "w-nicole/childes_data_no_tags_", "w-nicole/childes_data_with_tags", "w-nicole/childes_data_with_tags_" ``` cc @severo
true
980,725,899
https://api.github.com/repos/huggingface/datasets/issues/2842
https://github.com/huggingface/datasets/issues/2842
2,842
always requiring the username in the dataset name when there is one
closed
6
2021-08-26T23:31:53
2021-10-22T09:43:35
2021-10-22T09:43:35
stas00
[ "enhancement" ]
Another person and I have now been bitten by `datasets` not strictly requiring a dataset creator's username when one is needed. Both of us started with `stas/openwebtext-10k`, somewhere along the line lost the `stas/` prefix and continued using `openwebtext-10k`, and all was well until we published the software and things broke, since there is no `openwebtext-10k`. So this feature request asks to tighten the checking and not allow dataset loading if the dataset was downloaded with the user prefix but is then used without it. The same in code: ``` # first run python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')" # now run immediately python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')" # the second command should fail, but it doesn't fail now. ``` Please let me know if I explained myself clearly. Thank you!
false
980,497,321
https://api.github.com/repos/huggingface/datasets/issues/2841
https://github.com/huggingface/datasets/issues/2841
2,841
Adding GLUECoS Hinglish and Spanglish code-switching bemchmark
open
1
2021-08-26T17:47:39
2021-10-20T18:41:20
null
yjernite
[ "dataset request" ]
## Adding a Dataset - **Name:** GLUECoS - **Description:** a Microsoft Benchmark to evaluate code-switching for only two language pairs but a variety of tasks - **Paper:** https://aclanthology.org/2020.acl-main.329/ - **Data:** https://github.com/microsoft/GLUECoS - **Motivation:** We currently only have [one other](https://huggingface.co/datasets/lince) dataset for code-switching Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
980,489,074
https://api.github.com/repos/huggingface/datasets/issues/2840
https://github.com/huggingface/datasets/issues/2840
2,840
How can I compute BLEU-4 score use `load_metric` ?
closed
0
2021-08-26T17:36:37
2021-08-27T08:13:24
2021-08-27T08:13:24
Doragd
[]
I have found the sacrebleu metric, but I do not know the difference between it and BLEU-4. If I want to compute the BLEU-4 score, what can I do?
false
980,271,715
https://api.github.com/repos/huggingface/datasets/issues/2839
https://github.com/huggingface/datasets/issues/2839
2,839
OpenWebText: NonMatchingSplitsSizesError
closed
5
2021-08-26T13:50:26
2021-09-21T14:12:40
2021-09-21T14:09:43
thomasw21
[ "bug" ]
## Describe the bug When downloading `openwebtext`, I'm getting: ``` datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=39769494896, num_examples=8013769, dataset_name='openwebtext'), 'recorded': SplitInfo(name='train', num_bytes=39611023912, num_examples=7982430, dataset_name='openwebtext')}] ``` I suspect that the file we download from has changed, since the size doesn't seem to match the documentation: `Downloading: 0%| | 0.00/12.9G [00:00<?, ?B/s]` This suggests the total size is 12.9GB, whereas the documentation mentions `Size of downloaded dataset files: 12283.35 MB`. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("openwebtext", download_mode="force_redownload") ``` ## Expected results Loading is successful ## Actual results Loading throws the above error. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.2 - Platform: linux (Redhat version 8.1) - Python version: 3.8 - PyArrow version: 4.0.1
false
980,067,186
https://api.github.com/repos/huggingface/datasets/issues/2838
https://github.com/huggingface/datasets/pull/2838
2,838
Add error_bad_chunk to the JSON loader
open
4
2021-08-26T10:07:32
2023-09-25T09:06:42
null
lhoestq
[]
Add the `error_bad_chunk` parameter to the JSON loader. Setting `error_bad_chunk=False` allows to skip an unparsable chunk of JSON data without raising an error. Additional note: In case of an unparsable JSON chunk, the JSON loader no longer tries to load the full JSON (which could take a lot of time in streaming mode) to get the JSON fields that the user may have forgotten to pass. Ex : for squad-like data, the user has to pass `field="data"` to tell the loader to get the list of examples from this field. TODO: update docs cc @lvwerra
true
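A line-oriented sketch of the behavior described for the `error_bad_chunk` parameter in PR #2838 above. The PR operates on chunks of JSON data; this toy version works line by line and is not the PR's code.

```python
# Toy, line-by-line version of "skip unparsable data instead of raising"; not the PR's implementation.
import json


def iter_json_records(path, error_bad_chunk=True):
    with open(path, encoding="utf-8") as f:
        for line_number, line in enumerate(f, start=1):
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                if error_bad_chunk:
                    raise
                # With error_bad_chunk=False, bad data is skipped with a warning-style message.
                print(f"Skipping unparsable line {line_number}")
```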
979,298,297
https://api.github.com/repos/huggingface/datasets/issues/2837
https://github.com/huggingface/datasets/issues/2837
2,837
prepare_module issue when loading from read-only fs
closed
1
2021-08-25T15:21:26
2021-10-05T17:58:22
2021-10-05T17:58:22
Dref360
[ "bug" ]
## Describe the bug When we use prepare_module from a readonly file system, we create a FileLock using the `local_path`. This path is not necessarily writable. `lock_path = local_path + ".lock"` ## Steps to reproduce the bug Run `load_dataset` on a readonly python loader file. ```python ds = load_dataset( python_loader, data_files={"train": train_path, "test": test_path} ) ``` where `python_loader` is a path to a file located in a readonly folder. ## Expected results This should work I think? ## Actual results ```python return load_dataset( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 711, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/usr/local/lib/python3.8/dist-packages/datasets/load.py", line 465, in prepare_module with FileLock(lock_path): File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 314, in __enter__ self.acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 263, in acquire self._acquire() File "/usr/local/lib/python3.8/dist-packages/datasets/utils/filelock.py", line 378, in _acquire fd = os.open(self._lock_file, open_mode) OSError: [Errno 30] Read-only file system: 'YOUR_FILE.py.lock' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.7.0 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.8 - PyArrow version: 3.0.0
false
979,230,142
https://api.github.com/repos/huggingface/datasets/issues/2836
https://github.com/huggingface/datasets/pull/2836
2,836
Optimize Dataset.filter to only compute the indices to keep
closed
2
2021-08-25T14:41:22
2021-09-14T14:51:53
2021-09-13T15:50:21
lhoestq
[]
Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. Creating a new table was an issue because it could take a lot of disk space. This will be useful to process audio datasets for example cc @patrickvonplaten
true
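A conceptual sketch of the optimization in PR #2836 above: produce an indices mapping instead of materializing a filtered copy of the Arrow table. This is a simplified, row-by-row illustration, not the PR's batched implementation.

```python
# Simplified illustration: a filter that only produces an indices mapping, no new table.
import numpy as np


def filter_to_indices(num_rows: int, keep_row) -> np.ndarray:
    # A real implementation would evaluate keep_row in batches over the Arrow table.
    return np.fromiter((i for i in range(num_rows) if keep_row(i)), dtype=np.int64)


indices = filter_to_indices(10, lambda i: i % 2 == 0)
print(indices)  # the dataset then reads rows through this mapping instead of copying data
```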
979,209,394
https://api.github.com/repos/huggingface/datasets/issues/2835
https://github.com/huggingface/datasets/pull/2835
2,835
Update: timit_asr - make the dataset streamable
closed
0
2021-08-25T14:22:49
2021-09-07T13:15:47
2021-09-07T13:15:46
lhoestq
[]
The TIMIT ASR dataset had two issues that was preventing it from being streamable: 1. it was missing a call to `open` before `pd.read_csv` 2. it was using `os.path.dirname` which is not supported for streaming I made the dataset streamable by using `open` to load the CSV, and by adding the support for `os.path.dirname` in dataset scripts to stream data You can now do ```python from datasets import load_dataset timit_asr = load_dataset("timit_asr", streaming=True) print(next(iter(timit_asr["train"]))) ``` prints: ```json {"file": "zip://data/TRAIN/DR4/MMDM0/SI681.WAV::https://data.deepai.org/timit.zip", "phonetic_detail": {"start": [0, 1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720], "utterance": ["h#", "w", "ix", "dcl", "s", "ah", "tcl", "ch", "ix", "n", "ae", "kcl", "t", "ix", "v", "r", "ix", "f", "y", "ux", "zh", "el", "bcl", "b", "iy", "y", "ux", "s", "f", "el", "h#"], "stop": [1960, 2466, 3480, 4000, 5960, 7480, 7880, 9400, 9960, 10680, 13480, 15680, 15880, 16920, 18297, 18882, 19480, 21723, 22516, 24040, 25190, 27080, 28160, 28560, 30120, 31832, 33240, 34640, 35968, 37720, 39920]}, "sentence_type": "SI", "id": "SI681", "speaker_id": "MMDM0", "dialect_region": "DR4", "text": "Would such an act of refusal be useful?", "word_detail": { "start": [1960, 4000, 9400, 10680, 15880, 18297, 27080, 30120], "utterance": ["would", "such", "an", "act", "of", "refusal", "be", "useful"], "stop": [4000, 9400, 10680, 15880, 18297, 27080, 30120, 37720] }} ``` cc @patrickvonplaten @vrindaprabhu
true
978,309,749
https://api.github.com/repos/huggingface/datasets/issues/2834
https://github.com/huggingface/datasets/pull/2834
2,834
Fix IndexError by ignoring empty RecordBatch
closed
0
2021-08-24T17:06:13
2021-08-24T17:21:18
2021-08-24T17:21:18
lhoestq
[]
We need to ignore the empty record batches for the interpolation search to work correctly when querying arrow tables Close #2833 cc @SaulLu
true
978,296,140
https://api.github.com/repos/huggingface/datasets/issues/2833
https://github.com/huggingface/datasets/issues/2833
2,833
IndexError when accessing first element of a Dataset if first RecordBatch is empty
closed
0
2021-08-24T16:49:20
2021-08-24T17:21:17
2021-08-24T17:21:17
lhoestq
[]
The computation of the offsets of the underlying Table of a Dataset has some issues if the first RecordBatch is empty. ```python from datasets import Dataset import pyarrow as pa pa_table = pa.Table.from_pydict({"a": [1]}) pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema) ds_table = pa.concat_tables([pa_table2, pa_table]) dataset = Dataset(ds_table) print([len(b) for b in dataset.data._batches]) # [0, 1] print(dataset.data._offsets) # [0 0 1] (should be [0, 1]) dataset[0] ``` raises ```python --------------------------------------------------------------------------- IndexError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/datasets/table.py in _interpolation_search(arr, x) 90 else: 91 i, j = i, k ---> 92 raise IndexError(f"Invalid query '{x}' for size {arr[-1] if len(arr) else 'none'}.") 93 94 IndexError: Invalid query '0' for size 1. ``` This can be fixed by ignoring empty batches when computing `table._batches` and `table._offsets` cc @SaulLu
false
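A minimal sketch of the fix direction described above (not the actual patch): skip zero-length batches when building the cumulative offsets used by the search.

```python
import numpy as np
import pyarrow as pa

pa_table = pa.Table.from_pydict({"a": [1]})
pa_table2 = pa.Table.from_pydict({"a": []}, schema=pa_table.schema)
ds_table = pa.concat_tables([pa_table2, pa_table])

# Drop zero-length record batches before computing the cumulative offsets,
# so the interpolation search never has to step over an empty batch.
batches = [b for b in ds_table.to_batches() if len(b) > 0]
offsets = np.cumsum([0] + [len(b) for b in batches])

print(offsets)  # [0 1] instead of the problematic [0 0 1]
```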
978,012,800
https://api.github.com/repos/huggingface/datasets/issues/2832
https://github.com/huggingface/datasets/issues/2832
2,832
Logging levels not taken into account
closed
2
2021-08-24T11:50:41
2023-07-12T17:19:30
2023-07-12T17:19:29
LysandreJik
[ "bug" ]
## Describe the bug The `logging` module isn't working as intended relative to the verbosity levels that are set. ## Steps to reproduce the bug ```python from datasets import logging logging.set_verbosity_debug() logger = logging.get_logger() logger.error("ERROR") logger.warning("WARNING") logger.info("INFO") logger.debug("DEBUG") ``` ## Expected results I expect all logs to be output since I'm setting a `debug` level. ## Actual results Only the first two logs are output. ## Environment info - `datasets` version: 1.11.0 - Platform: Linux-5.13.9-arch1-1-x86_64-with-glibc2.33 - Python version: 3.9.6 - PyArrow version: 5.0.0 ## To go further This logging issue appears in `datasets` but not in `transformers`. It happens because there is no handler defined for the logger. When no handler is defined, the `logging` library will output a one-off error to stderr, using a `StderrHandler` with level `WARNING`. `transformers` sets a default `StreamHandler` [here](https://github.com/huggingface/transformers/blob/5c6eca71a983bae2589eed01e5c04fcf88ba5690/src/transformers/utils/logging.py#L86)
false
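A hedged illustration of the explanation above: attaching a handler manually makes the lower levels visible. This is a workaround sketch, not the library fix.

```python
import logging as py_logging

from datasets import logging

logging.set_verbosity_debug()
logger = logging.get_logger()

# No default handler is attached, so records below WARNING are dropped by the
# stderr fallback; attaching a handler explicitly restores INFO/DEBUG output.
logger.addHandler(py_logging.StreamHandler())

logger.info("INFO")    # now printed
logger.debug("DEBUG")  # now printed
```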
977,864,600
https://api.github.com/repos/huggingface/datasets/issues/2831
https://github.com/huggingface/datasets/issues/2831
2,831
ArrowInvalid when mapping dataset with missing values
open
1
2021-08-24T08:50:42
2021-08-31T14:15:34
null
uniquefine
[ "bug" ]
## Describe the bug I encountered an `ArrowInvalid` when mapping dataset with missing values. Here are the files for a minimal example. The exception is only thrown when the first line in the csv has a missing value (if you move the last line to the top it isn't thrown). [data_small.csv](https://github.com/huggingface/datasets/files/7037838/data_small.csv) [data.csv](https://github.com/huggingface/datasets/files/7037842/data.csv) ## Steps to reproduce the bug ```python from datasets import load_dataset datasets = load_dataset("csv", data_files=['data_small.csv']) datasets = datasets.map(lambda e: {'labels': e['match']}, remove_columns=['id']) ``` ## Expected results No error ## Actual results ``` File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Invalid null value ``` ## Environment info - `datasets` version: 1.5.0 - Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.7.1+cpu (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no
false
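A possible workaround sketch, with assumed column names and types (only `id` and `match` are known from the snippet above): declare the CSV schema up front so that missing values are read as typed nulls. Whether this avoids the error in this exact case is not verified here.

```python
from datasets import Features, Value, load_dataset

# Assumed schema: adjust names and types to the real columns of data_small.csv
# (only "id" and "match" appear in the report above).
features = Features({
    "id": Value("int64"),
    "match": Value("float32"),  # numeric column that may contain nulls
})

datasets = load_dataset("csv", data_files=["data_small.csv"], features=features)
datasets = datasets.map(lambda e: {"labels": e["match"]}, remove_columns=["id"])
```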
977,563,947
https://api.github.com/repos/huggingface/datasets/issues/2830
https://github.com/huggingface/datasets/pull/2830
2,830
Add imagefolder dataset
closed
15
2021-08-23T23:34:06
2022-03-01T16:29:44
2022-03-01T16:29:44
nateraw
[]
A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`. Resolves #2508 --- Example Usage: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb)
true
977,233,360
https://api.github.com/repos/huggingface/datasets/issues/2829
https://github.com/huggingface/datasets/issues/2829
2,829
Optimize streaming from TAR archives
closed
1
2021-08-23T16:56:40
2022-09-21T14:29:46
2022-09-21T14:08:39
lhoestq
[ "enhancement", "streaming" ]
Hi ! As you know TAR has some constraints for data streaming. While it is optimized for buffering, the files in the TAR archive **need to be streamed in order**. It means that we can't choose which file to stream from, and this notation is to be avoided for TAR archives: ``` tar://books_large_p1.txt::https://storage.googleapis.com/huggingface-nlp/datasets/bookcorpus/bookcorpus.tar.bz2 ``` Instead, I suggest we implement `iter_archive` for the `StreamingDownloadManager`. The regular `DownloadManager` already has it. Then we will have to update the json/txt/csv/etc. loaders to make them use `iter_archive` on TAR archives. That's also what Tensorflow Datasets is doing in this case. See this [dataset](https://github.com/tensorflow/datasets/blob/93895059c80a9e05805e8f32a2e310f66a23fc98/tensorflow_datasets/image_classification/flowers.py) for example. Therefore instead of doing ```python uncompressed = dl_manager.extract(tar_archive) filename = "books_large_p1.txt" with open(os.path.join(uncompressed, filename)) as f: for line in f: ... ``` we'll do ```python for filename, f in dl_manager.iter_archive(tar_archive): for line in f: ... ```
false
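For reference, a rough sketch (not the eventual implementation) of what such an `iter_archive` helper could look like with the standard library:

```python
import tarfile

def iter_archive(fileobj):
    # Walk the TAR sequentially (mode "r|*" never seeks backwards) and yield
    # (member_name, file_object) pairs in archive order.
    with tarfile.open(fileobj=fileobj, mode="r|*") as stream:
        for member in stream:
            if member.isfile():
                yield member.name, stream.extractfile(member)

# Usage sketch:
# with open("bookcorpus.tar.bz2", "rb") as f:
#     for filename, file in iter_archive(f):
#         for line in file:
#             ...
```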
977,181,517
https://api.github.com/repos/huggingface/datasets/issues/2828
https://github.com/huggingface/datasets/pull/2828
2,828
Add code-mixed Kannada Hope speech dataset
closed
0
2021-08-23T15:55:09
2021-10-01T17:21:03
2021-10-01T17:21:03
adeepH
[]
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* - **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset* - **Motivation:** *The dataset is amongst the very few resources available for code-mixed low-resourced Dravidian languages of India*
true
976,976,552
https://api.github.com/repos/huggingface/datasets/issues/2827
https://github.com/huggingface/datasets/pull/2827
2,827
add a text classification dataset
closed
0
2021-08-23T12:24:41
2021-08-23T15:51:18
2021-08-23T15:51:18
adeepH
[]
null
true
976,974,254
https://api.github.com/repos/huggingface/datasets/issues/2826
https://github.com/huggingface/datasets/issues/2826
2,826
Add a Text Classification dataset: KanHope
closed
1
2021-08-23T12:21:58
2021-10-01T18:06:59
2021-10-01T18:06:59
adeepH
[ "dataset request" ]
## Adding a Dataset - **Name:** *KanHope* - **Description:** *A code-mixed English-Kannada dataset for Hope speech detection* - **Paper:** *https://arxiv.org/abs/2108.04616* (I am the author of the paper) - **Author:** *[AdeepH](https://github.com/adeepH)* - **Data:** *https://github.com/adeepH/KanHope/tree/main/dataset* - **Motivation:** *The dataset is amongst the very few resources available for code-mixed Dravidian languages* - I tried following the steps as per the instructions. However, could not resolve an error. Any help would be appreciated. - The dataset card and the scripts for the dataset *https://github.com/adeepH/datasets/tree/multilingual-hope-speech/datasets/mhs_eval* ``` Using custom data configuration default Downloading and preparing dataset bn_hate_speech/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/bn_hate_speech/default/0.0.0/5f417ddc89777278abd29988f909f39495f0ec802090f7d8fa63b5bffb121762... --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-114-4a9cdb519e4c> in <module>() 1 from datasets import load_dataset 2 ----> 3 data = load_dataset('/content/bn') 9 frames /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, task, streaming, **config_kwargs) 850 ignore_verifications=ignore_verifications, 851 try_from_hf_gcs=try_from_hf_gcs, --> 852 use_auth_token=use_auth_token, 853 ) 854 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 614 if not downloaded_from_gcs: 615 self._download_and_prepare( --> 616 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 617 ) 618 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 691 try: 692 # Prepare split will record examples associated to the split --> 693 self._prepare_split(split_generator, **prepare_split_kwargs) 694 except OSError as e: 695 raise OSError( /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _prepare_split(self, split_generator) 1107 disable=bool(logging.get_verbosity() == logging.NOTSET), 1108 ): -> 1109 example = self.info.features.encode_example(record) 1110 writer.write(example, key) 1111 finally: /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example) 1015 """ 1016 example = cast_to_python_objects(example) -> 1017 return encode_nested_example(self, example) 1018 1019 def encode_batch(self, batch): /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj) 863 if isinstance(schema, dict): 864 return { --> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 866 } 867 elif isinstance(schema, (list, tuple)): /usr/local/lib/python3.7/dist-packages/datasets/features.py in <dictcomp>(.0) 863 if isinstance(schema, dict): 864 return { --> 865 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj) 866 } 867 elif isinstance(schema, (list, tuple)): /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_nested_example(schema, obj) 890 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks 891 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)): --> 892 return schema.encode_example(obj) 893 # Other object should be directly convertible to a native Arrow type (like Translation and Translation) 894 return obj /usr/local/lib/python3.7/dist-packages/datasets/features.py in encode_example(self, example_data) 665 # If a string is given, convert to associated integer 666 if isinstance(example_data, str): --> 667 example_data = self.str2int(example_data) 668 669 # Allowing -1 to mean no label. /usr/local/lib/python3.7/dist-packages/datasets/features.py in str2int(self, values) 623 if value not in self._str2int: 624 value = str(value).strip() --> 625 output.append(self._str2int[str(value)]) 626 else: 627 # No names provided, try to integerize KeyError: ' ' ```
false
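The `KeyError: ' '` above suggests a label value made of whitespace that is not among the `ClassLabel` names. A small debugging sketch; the file path and column name are assumptions, not taken from the repository.

```python
import pandas as pd

# Inspect the raw label column for blanks or stray whitespace before the
# loading script tries to encode it with ClassLabel.str2int.
df = pd.read_csv("dataset/train.csv")  # assumed path
print(df["label"].astype(str).str.strip().value_counts(dropna=False))  # assumed column name
```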
976,584,926
https://api.github.com/repos/huggingface/datasets/issues/2825
https://github.com/huggingface/datasets/issues/2825
2,825
The datasets.map function does not load cached dataset after moving python script
closed
6
2021-08-23T03:23:37
2024-07-29T11:25:50
2021-08-31T13:13:36
hobbitlzy
[ "bug" ]
## Describe the bug The datasets.map function caches the processed data to a certain directory. When the map function is called again with exactly the same parameters, the cached data is supposed to be reloaded instead of re-processed. However, it sometimes does not reuse the cached data. I use the same data processing in different tasks and the datasets are processed again; the only difference is that I run the code from different files. ## Steps to reproduce the bug Just run the following code in different .py files. ```python if __name__ == '__main__': from datasets import load_dataset from transformers import AutoTokenizer raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1") tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") def tokenize_function(examples): return tokenizer(examples["text"], padding="max_length", truncation=True) tokenized_datasets = raw_datasets.map(tokenize_function, batched=True) ``` ## Expected results The map function should reload the data in the second or any later run. ## Actual results The processing happens in each run. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.8.0 - Platform: linux - Python version: 3.7.6 - PyArrow version: 3.0.0 This is the first time I have reported a bug. If there is any problem or confusing description, please let me know 😄.
false
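A hedged workaround (it does not address the underlying fingerprinting behaviour): pin the cache file explicitly so any script reuses the same preprocessed data; the cache path below is made up for illustration.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

raw_datasets = load_dataset("wikitext", "wikitext-2-raw-v1")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

# Pinning the cache file makes the reuse independent of the fingerprint
# computed from the calling script.
tokenized_train = raw_datasets["train"].map(
    tokenize_function,
    batched=True,
    cache_file_name="/tmp/wikitext2_bert_tokenized_train.arrow",  # illustrative path
)
```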
976,394,721
https://api.github.com/repos/huggingface/datasets/issues/2824
https://github.com/huggingface/datasets/pull/2824
2,824
Fix defaults in cache_dir docstring in load.py
closed
0
2021-08-22T14:48:37
2021-08-26T13:23:32
2021-08-26T11:55:16
mariosasko
[]
Fix defaults in the `cache_dir` docstring.
true
976,135,355
https://api.github.com/repos/huggingface/datasets/issues/2823
https://github.com/huggingface/datasets/issues/2823
2,823
HF_DATASETS_CACHE variable in Windows
closed
1
2021-08-21T13:17:44
2021-08-21T13:20:11
2021-08-21T13:20:11
rp2839
[]
I can't seem to use a custom Cache directory in Windows. I have tried: set HF_DATASETS_CACHE = "C:\Datasets" set HF_DATASETS_CACHE = "C:/Datasets" set HF_DATASETS_CACHE = "C:\\Datasets" set HF_DATASETS_CACHE = "r'C:\Datasets'" set HF_DATASETS_CACHE = "\Datasets" set HF_DATASETS_CACHE = "/Datasets" In each instance I get the "[WinError 123] The filename, directory name, or volume label syntax is incorrect" error when attempting to load a dataset
false
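A hedged sketch of setting the variable from Python before `datasets` is imported; in `cmd` the equivalent would be `set HF_DATASETS_CACHE=C:\Datasets` with no spaces around `=` and no quotes. Whether this resolves the WinError above is an assumption.

```python
import os

# Must be set before datasets is imported, since the cache location is read
# at import time; note there are no spaces around "=" and no wrapping quotes.
os.environ["HF_DATASETS_CACHE"] = r"C:\Datasets"

from datasets import load_dataset

dataset = load_dataset("squad")  # illustrative dataset name
```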
975,744,463
https://api.github.com/repos/huggingface/datasets/issues/2822
https://github.com/huggingface/datasets/pull/2822
2,822
Add url prefix convention for many compression formats
closed
3
2021-08-20T16:11:23
2021-08-23T15:59:16
2021-08-23T15:59:14
lhoestq
[]
## Intro When doing dataset streaming, the uncompression of compressed files is done on the fly using `fsspec`. In particular, the download manager method `download_and_extract` doesn't return a path to the local download and extracted file, but instead a chained URL so that the uncompression can be done when the file is opened. A few examples of chained URLs: - `gz://file.txt::https://foo.bar/file.txt.gz` - `bz2://file.txt::https://foo.bar/file.txt.bz2` - `zip://::https://foo.bar/archive.zip` - `tar://::https://foo.bar/archive.tar.gz` (the TAR uncompression includes gz, bz2 etc. uncompression in `fsspec`) This syntax is highly inspired by the `fsspec` URL chaining syntax from https://filesystem-spec.readthedocs.io/en/latest/features.html#url-chaining This URL prefixing allows `open` to know what kind of uncompression to do in a dataset script when doing ```python def _generate_examples(self, urlpath): with open(urlpath) as f: .... ``` ## What it changes This changes the previous behavior from https://github.com/huggingface/datasets/pull/2786 , in which `open` was trying to infer the compression automatically. Inferring the compression made it impossible to know whether the user wanted `open` to return compressed data (as the default behavior of the builtin open), or the uncompressed data. By adding uncompression prefixes to the URL, `open` knows directly if it has to uncompress or not, and also which protocol to use. ## Additional notes This PR should close https://github.com/huggingface/datasets/issues/2813 It should also close PR https://github.com/huggingface/datasets/pull/2811 since the oscar dataset script won't try to uncompress twice anymore Note that I had to temporarily remove the support for passing tar and zip files to `data_files` for streaming to make it work, since it makes it ambiguous whether a zip file passed as `data_files` should be uncompressed or not. IMO we can make it work again by changing the syntax to make the glob explicit: ```python load_dataset("json", data_files="zip://*.jsonl::https://foo.bar/archive.zip") ``` This is the exact same convention as fsspec and it removes all ambiguities. cc @albertvillanova @lewtun
true
975,556,032
https://api.github.com/repos/huggingface/datasets/issues/2821
https://github.com/huggingface/datasets/issues/2821
2,821
Cannot load linnaeus dataset
closed
1
2021-08-20T12:15:15
2021-08-31T13:13:02
2021-08-31T13:12:09
NielsRogge
[ "bug" ]
## Describe the bug The [linnaeus](https://huggingface.co/datasets/linnaeus) dataset cannot be loaded. To reproduce: ``` from datasets import load_dataset datasets = load_dataset("linnaeus") ``` This results in: ``` Downloading and preparing dataset linnaeus/linnaeus (download: 17.36 MiB, generated: 8.74 MiB, post-processed: Unknown size, total: 26.10 MiB) to /root/.cache/huggingface/datasets/linnaeus/linnaeus/1.0.0/2ff05dbc256108233262f596e09e322dbc3db067202de14286913607cd9cb704... --------------------------------------------------------------------------- ConnectionError Traceback (most recent call last) <ipython-input-4-7ef3a88f6276> in <module>() 1 from datasets import load_dataset 2 ----> 3 datasets = load_dataset("linnaeus") 11 frames /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token) 603 raise FileNotFoundError("Couldn't find file at {}".format(url)) 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}") --> 605 raise ConnectionError("Couldn't reach {}".format(url)) 606 607 # Try a second time ConnectionError: Couldn't reach https://drive.google.com/u/0/uc?id=1OletxmPYNkz2ltOr9pyT0b0iBtUWxslh&export=download/ ```
false
975,210,712
https://api.github.com/repos/huggingface/datasets/issues/2820
https://github.com/huggingface/datasets/issues/2820
2,820
Downloading “reddit” dataset keeps timing out.
closed
10
2021-08-20T02:52:36
2021-09-08T14:52:02
2021-09-08T14:52:02
smeyerhot
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reddit", ignore_verifications=True, cache_dir="/Volumes/My Passport for Mac/og-chat-data") ``` ## Expected results A clear and concise description of the expected results. I would expect the download to finish, or at least provide a parameter to extend the read timeout window. ## Actual results Specify the actual results or traceback. Shown below in the error message. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: macOS - Python version: 3.9.6 (conda env) - PyArrow version: N/A
false
974,683,155
https://api.github.com/repos/huggingface/datasets/issues/2819
https://github.com/huggingface/datasets/pull/2819
2,819
Added XL-Sum dataset
closed
10
2021-08-19T13:47:45
2021-09-29T08:13:44
2021-09-23T17:49:05
abhik1505040
[]
Added the XL-Sum dataset published in ACL-IJCNLP 2021 (https://aclanthology.org/2021.findings-acl.413/). The default timeout values in `src/datasets/utils/file_utils.py` were increased to enable downloading from the original Google Drive links.
true
974,552,009
https://api.github.com/repos/huggingface/datasets/issues/2818
https://github.com/huggingface/datasets/issues/2818
2,818
cannot load data from my local path
closed
1
2021-08-19T11:13:30
2023-07-25T17:42:15
2023-07-25T17:42:15
yang-collect
[ "bug" ]
## Describe the bug I just want to load data directly from my local path, but I found a bug. I compared it with pandas to prove that my local path is real. Here is my code ```python3 # print my local path print(config.train_path) # read data and print data length tarin=pd.read_csv(config.train_path) print(len(tarin)) # loading data by load_dataset data = load_dataset('csv',data_files=config.train_path) print(len(data)) ``` ## Steps to reproduce the bug ```python C:\Users\wie\Documents\项目\文本分类\data\train.csv 7613 Traceback (most recent call last): File "c:/Users/wie/Documents/项目/文本分类/lib/DataPrecess.py", line 17, in <module> data = load_dataset('csv',data_files=config.train_path) File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 830, in load_dataset **config_kwargs, File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\load.py", line 710, in load_dataset_builder **config_kwargs, File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 271, in __init__ **config_kwargs, File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 386, in _create_builder_config config_kwargs, custom_features=custom_features, use_auth_token=self.use_auth_token File "C:\Users\wie\Miniconda3\lib\site-packages\datasets\builder.py", line 156, in create_config_id raise ValueError("Please provide a valid `data_files` in `DatasetBuilder`") ValueError: Please provide a valid `data_files` in `DatasetBuilder` ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: win10 - Python version: 3.7.9 - PyArrow version: 5.0.0
false
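A hedged suggestion, not verified against this exact traceback: pass `data_files` in the documented dict (or list) form rather than a bare string.

```python
from datasets import load_dataset

# The path is the one printed in the report above; the dict form maps split
# names to files, which is the form shown in the documentation.
data = load_dataset(
    "csv",
    data_files={"train": r"C:\Users\wie\Documents\项目\文本分类\data\train.csv"},
)
print(len(data["train"]))
```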
974,486,051
https://api.github.com/repos/huggingface/datasets/issues/2817
https://github.com/huggingface/datasets/pull/2817
2,817
Rename The Pile subsets
closed
2
2021-08-19T09:56:22
2021-08-23T16:24:10
2021-08-23T16:24:09
lhoestq
[]
After discussing with @yjernite we think it's better to have the subsets of The Pile explicitly have "the_pile" in their names. I'm doing the changes for the subsets that @richarddwang added: - [x] books3 -> the_pile_books3 https://github.com/huggingface/datasets/pull/2801 - [x] stack_exchange -> the_pile_stack_exchange https://github.com/huggingface/datasets/pull/2803 - [x] openwebtext2 -> the_pile_openwebtext2 https://github.com/huggingface/datasets/pull/2802 For consistency we should also rename `bookcorpusopen` to `the_pile_bookcorpus` IMO, but let me know what you think. (we can just add a deprecation message to `bookcorpusopen` for now and add `the_pile_bookcorpus`)
true
974,031,404
https://api.github.com/repos/huggingface/datasets/issues/2816
https://github.com/huggingface/datasets/issues/2816
2,816
Add Mostly Basic Python Problems Dataset
open
1
2021-08-18T20:28:39
2021-09-10T08:04:20
null
osanseviero
[ "dataset request" ]
## Adding a Dataset - **Name:** Mostly Basic Python Problems Dataset - **Description:** The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry level programmers, covering programming fundamentals, standard library functionality, and so on. Each problem consists of a task description, code solution and 3 automated test cases. - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/google-research/google-research/tree/master/mbpp - **Motivation:** Simple, small dataset related to coding problems. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
973,862,024
https://api.github.com/repos/huggingface/datasets/issues/2815
https://github.com/huggingface/datasets/pull/2815
2,815
Tiny typo fixes of "fo" -> "of"
closed
0
2021-08-18T16:36:11
2021-08-19T08:03:02
2021-08-19T08:03:02
aronszanto
[]
Noticed a few of these when reading docs- feel free to ignore the PR and just fix on some main contributor branch if more helpful. Thanks for the great library! :)
true
973,632,645
https://api.github.com/repos/huggingface/datasets/issues/2814
https://github.com/huggingface/datasets/pull/2814
2,814
Bump tqdm version
closed
0
2021-08-18T12:51:29
2021-08-18T13:44:11
2021-08-18T13:39:50
mariosasko
[]
The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https://github.com/tqdm/tqdm/pull/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would previously, if used, raise a PermissionError on Windows.
true
973,470,580
https://api.github.com/repos/huggingface/datasets/issues/2813
https://github.com/huggingface/datasets/issues/2813
2,813
Remove compression from xopen
closed
1
2021-08-18T09:35:59
2021-08-23T15:59:14
2021-08-23T15:59:14
albertvillanova
[ "generic discussion" ]
We implemented support for streaming with 2 requirements: - transparent use for the end user: just needs to pass the parameter `streaming=True` - no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve additional code to support streaming In order to fulfill these requirements, streaming implementation patched some Python functions: - the `open(urlpath)` function was patched with `fsspec.open(urlpath)` - the `os.path.join(urlpath, *others)` function was patched in order to add to `urlpath` hops (`::`) and extractor protocols (`zip://`), which are required by `fsspec.open` Recently, we implemented support for streaming all archive+compression formats: zip, tar, gz, bz2, lz4, xz, zst; tar.gz, tar.bz2,... Under the hood, the implementation: - passes an additional parameter `compression` to `fsspec.open`, so that it performs the decompression on the fly: `fsspec.open(urlpath, compression=...)` Some concerns have been raised about passing the parameter `compression` to `fsspec.open`: - https://github.com/huggingface/datasets/pull/2786#discussion_r689550254 - #2811 The main argument is that if `open` decompresses the file and afterwards we call `gzip.open` on it, that will raise an error in `oscar` dataset: ```python gzip.open(open(urlpath ``` While this is true: - it is not natural/usual to call `open` inside `gzip.open` (never seen this before) - indeed, this was recently (2 months ago) coded that way in `datasets` in order to allow streaming support (with previous implementation of streaming) In this particular case, there is a natural fix solution: #2811: - Revert the `open` inside the `gzip.open` (change done 2 months ago): `gzip.open(open(urlpath` => `gzip.open(urlpath` - Patch `gzip.open(urlpath` with `fsspec.open(urlpath, compression="gzip"` Are there other issues apart from this? Note that there is an issue just because the open inside of the gzip.open. There is no issue in the other cases where datasets loading scripts use just - `gzip.open` - `open` (after having called dl_manager.download_and_extract) TODO: - [ ] Is this really an issue? Please enumerate the `datasets` loading scripts where this is problematic. - For the moment, there are only 3 datasets where we have an `open` inside a `gzip.open`: - oscar (since 23 June), mc4 (since 2 July) and c4 (since 2 July) - In the 3 datasets, the only reason to put an open inside a gzip.open was indeed to force supporting streaming - [ ] If this is indeed an issue, which are the possible alternatives? Pros/cons?
false
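For context, a minimal sketch of the alternative discussed above: decompressing on the fly with `fsspec` instead of nesting `open` inside `gzip.open` (the URL is a placeholder).

```python
import fsspec

url = "https://foo.bar/file.txt.gz"  # placeholder URL

# fsspec performs the gzip decompression on the fly, so there is no need to
# nest open() inside gzip.open().
with fsspec.open(url, "rt", compression="gzip") as f:
    for line in f:
        ...
```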
972,936,889
https://api.github.com/repos/huggingface/datasets/issues/2812
https://github.com/huggingface/datasets/issues/2812
2,812
arXiv Dataset verification problem
open
0
2021-08-17T18:01:48
2022-01-19T14:15:35
null
eladsegal
[ "bug", "dataset bug" ]
## Describe the bug `dataset_infos.json` for `arxiv_dataset` contains a fixed number of training examples, however the data (downloaded from an external source) is updated every week with additional examples. Therefore, loading the dataset without `ignore_verifications=True` results in a verification error.
false
972,522,480
https://api.github.com/repos/huggingface/datasets/issues/2811
https://github.com/huggingface/datasets/pull/2811
2,811
Fix stream oscar
closed
3
2021-08-17T10:10:59
2021-08-26T10:26:15
2021-08-26T10:26:14
albertvillanova
[]
Previously, an additional `open` was added to oscar to make it stream-compatible: 587bbb94e891b22863b312b99696e32708c379f4. This was argued that might be problematic: https://github.com/huggingface/datasets/pull/2786#discussion_r690045921 This PR: - removes that additional `open` - patches `gzip.open` with `xopen` + `compression="gzip"`
true
972,040,022
https://api.github.com/repos/huggingface/datasets/issues/2810
https://github.com/huggingface/datasets/pull/2810
2,810
Add WIT Dataset
closed
1
2021-08-16T19:34:09
2022-05-06T12:27:29
2022-05-06T12:26:16
hassiahk
[]
Adds Google's [WIT](https://github.com/google-research-datasets/wit) dataset.
true
971,902,613
https://api.github.com/repos/huggingface/datasets/issues/2809
https://github.com/huggingface/datasets/pull/2809
2,809
Add Beans Dataset
closed
0
2021-08-16T16:22:33
2021-08-26T11:42:27
2021-08-26T11:42:27
nateraw
[]
Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset.
true
971,882,320
https://api.github.com/repos/huggingface/datasets/issues/2808
https://github.com/huggingface/datasets/issues/2808
2,808
Enable streaming for Wikipedia corpora
closed
1
2021-08-16T15:59:12
2023-07-20T13:45:30
2023-07-20T13:45:30
lewtun
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Several of the [Wikipedia corpora](https://huggingface.co/datasets?search=wiki) on the Hub involve quite large files that would be a good candidate for streaming. Currently it is not possible to stream these corpora: ```python from datasets import load_dataset # Throws ValueError: Builder wikipedia is not streamable. wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True) ``` Given that these corpora are derived from Wikipedia dumps in XML format which are then processed with Apache Beam, I am not sure whether streaming is possible in principle. The goal of this issue is to discuss whether this feature even makes sense :) **Describe the solution you'd like** It would be nice to be able to stream Wikipedia corpora from the Hub with something like ```python from datasets import load_dataset wiki_dataset_streamed = load_dataset("wikipedia", "20200501.en", split="train", streaming=True) ```
false
971,849,863
https://api.github.com/repos/huggingface/datasets/issues/2807
https://github.com/huggingface/datasets/pull/2807
2,807
Add cats_vs_dogs dataset
closed
0
2021-08-16T15:21:11
2021-08-30T16:35:25
2021-08-30T16:35:24
nateraw
[]
Adds Microsoft's [Cats vs. Dogs](https://www.microsoft.com/en-us/download/details.aspx?id=54765) dataset.
true
971,625,449
https://api.github.com/repos/huggingface/datasets/issues/2806
https://github.com/huggingface/datasets/pull/2806
2,806
Fix streaming tar files from canonical datasets
closed
5
2021-08-16T11:10:28
2021-10-13T09:04:03
2021-10-13T09:04:02
albertvillanova
[]
Previous PR #2800 implemented support to stream remote tar files when passing the parameter `data_files`: they required a glob string `"*"`. However, this glob string creates an error when streaming canonical datasets (with a `join` after the `open`). This PR fixes this issue and allows streaming tar files both from: - canonical datasets scripts and - data files. This PR also adds support for compressed tar files: `.tar.gz`, `.tar.bz2`,...
true
971,436,456
https://api.github.com/repos/huggingface/datasets/issues/2805
https://github.com/huggingface/datasets/pull/2805
2,805
Fix streaming zip files from canonical datasets
closed
0
2021-08-16T07:11:40
2021-08-16T10:34:00
2021-08-16T10:34:00
albertvillanova
[]
Previous PR #2798 fixed streaming remote zip files when passing the parameter `data_files`. However, that broke streaming zip files used in canonical `datasets` scripts, which normally have a subsequent `join()` (patched with `xjoin()`) after the `StreamingDownloadManager.download_and_extract()` is called. This PR fixes this issue and allows streaming zip files both from: - canonical datasets scripts and - data files.
true
971,353,437
https://api.github.com/repos/huggingface/datasets/issues/2804
https://github.com/huggingface/datasets/pull/2804
2,804
Add Food-101
closed
0
2021-08-16T04:26:15
2021-08-20T14:31:33
2021-08-19T12:48:06
nateraw
[]
Adds image classification dataset [Food-101](https://data.vision.ee.ethz.ch/cvl/datasets_extra/food-101/).
true
970,858,928
https://api.github.com/repos/huggingface/datasets/issues/2803
https://github.com/huggingface/datasets/pull/2803
2,803
add stack exchange
closed
2
2021-08-14T08:11:02
2021-08-19T10:07:33
2021-08-19T08:07:38
richarddwang
[]
Stack Exchange is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of them on its own from The Pile data. So I created an independent dataset using The Pile preliminary components. I also changed the default `timeout` to 100 seconds instead of 10 seconds, otherwise I kept getting read timeouts when downloading the source data of the stack exchange and cc100 datasets. While creating the dataset card, I found there is room for improving how dataset cards are created / edited; I've made it an issue. #2797 Also, I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it soon)? #1675
true
970,848,302
https://api.github.com/repos/huggingface/datasets/issues/2802
https://github.com/huggingface/datasets/pull/2802
2,802
add openwebtext2
closed
3
2021-08-14T07:09:03
2021-08-23T14:06:14
2021-08-23T14:06:14
richarddwang
[]
openwebtext2 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of them on its own from The Pile data. So I created an independent dataset using The Pile preliminary components. While creating the dataset card, I found there is room for improving how dataset cards are created / edited; I've made it an issue. #2797 Also, I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it soon)? #1675
true
970,844,617
https://api.github.com/repos/huggingface/datasets/issues/2801
https://github.com/huggingface/datasets/pull/2801
2,801
add books3
closed
4
2021-08-14T07:04:25
2021-08-19T16:43:09
2021-08-18T15:36:59
richarddwang
[]
books3 is part of EleutherAI/The Pile, but AFAIK The Pile dataset blends all sub-datasets together, so we are not able to use just one of them on its own from The Pile data. So I created an independent dataset using The Pile preliminary components. While creating the dataset card, I found there is room for improving how dataset cards are created / edited; I've made it an issue. #2797 Also, I am wondering whether the import of The Pile dataset is actively undertaken (because I may need it soon)? #1675
true
970,819,988
https://api.github.com/repos/huggingface/datasets/issues/2800
https://github.com/huggingface/datasets/pull/2800
2,800
Support streaming tar files
closed
1
2021-08-14T04:40:17
2021-08-26T10:02:30
2021-08-14T04:55:57
albertvillanova
[]
This PR adds support to stream tar files by using the `fsspec` tar protocol. It also uses the custom `readline` implemented in PR #2786. The corresponding test is implemented in PR #2786.
true
970,507,351
https://api.github.com/repos/huggingface/datasets/issues/2799
https://github.com/huggingface/datasets/issues/2799
2,799
Loading JSON throws ArrowNotImplementedError
closed
11
2021-08-13T15:31:48
2022-01-10T18:59:32
2022-01-10T18:59:32
lewtun
[ "bug" ]
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which suggests some incorrect type inference is being made on the `datasets` side. For example, the stack trace indicates that some URL fields are being parsed as timestamps. You can find a Colab notebook which reproduces the error [here](https://colab.research.google.com/drive/1YUCM0j1vx5ZrouQbYSzal6RwB4-Aoh4o?usp=sharing). **Edit:** If one repeatedly tries to load the dataset, it _eventually_ works but I think it would still be good to understand why it fails in the first place :) ## Steps to reproduce the bug ```python from datasets import load_dataset from huggingface_hub import hf_hub_url import pandas as pd # returns https://huggingface.co/datasets/lewtun/github-issues-test/resolve/main/issues-datasets.jsonl data_files = hf_hub_url(repo_id="lewtun/github-issues-test", filename="issues-datasets.jsonl", repo_type="dataset") # throws ArrowNotImplementedError dset = load_dataset("json", data_files=data_files, split="test") # no problem with pandas ... df = pd.read_json(data_files, orient="records", lines=True) df.head() ``` ## Expected results I can load any line-separated JSON file, similar to `pandas`. ## Actual results ``` --------------------------------------------------------------------------- ArrowNotImplementedError Traceback (most recent call last) <ipython-input-7-5b8e82b6c3a2> in <module>() ----> 1 dset = load_dataset("json", data_files=data_files, split="test") 9 frames /usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowNotImplementedError: JSON conversion to struct<url: timestamp[s], html_url: timestamp[s], labels_url: timestamp[s], id: int64, node_id: timestamp[s], number: int64, title: timestamp[s], description: timestamp[s], creator: struct<login: timestamp[s], id: int64, node_id: timestamp[s], avatar_url: timestamp[s], gravatar_id: timestamp[s], url: timestamp[s], html_url: timestamp[s], followers_url: timestamp[s], following_url: timestamp[s], gists_url: timestamp[s], starred_url: timestamp[s], subscriptions_url: timestamp[s], organizations_url: timestamp[s], repos_url: timestamp[s], events_url: timestamp[s], received_events_url: timestamp[s], type: timestamp[s], site_admin: bool>, open_issues: int64, closed_issues: int64, state: timestamp[s], created_at: timestamp[s], updated_at: timestamp[s], due_on: timestamp[s], closed_at: timestamp[s]> is not supported ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
false
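Until the type inference issue is resolved, a hedged workaround that builds on the `pandas` call above: construct the dataset from the dataframe (nested columns may need flattening; that is an assumption here).

```python
import pandas as pd
from datasets import Dataset
from huggingface_hub import hf_hub_url

data_files = hf_hub_url(
    repo_id="lewtun/github-issues-test",
    filename="issues-datasets.jsonl",
    repo_type="dataset",
)

# Let pandas parse the line-separated JSON, then wrap the resulting frame.
df = pd.read_json(data_files, orient="records", lines=True)
dset = Dataset.from_pandas(df)
```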
970,493,126
https://api.github.com/repos/huggingface/datasets/issues/2798
https://github.com/huggingface/datasets/pull/2798
2,798
Fix streaming zip files
closed
2
2021-08-13T15:17:01
2021-08-16T14:16:50
2021-08-13T15:38:28
albertvillanova
[]
Currently, streaming remote zip data files gives `FileNotFoundError` message: ```python data_files = f"https://huggingface.co/datasets/albertvillanova/datasets-tests-compression/resolve/main/sample.zip" ds = load_dataset("json", split="train", data_files=data_files, streaming=True) next(iter(ds)) ``` This PR fixes it by adding a glob string. The corresponding test is implemented in PR #2786.
true
970,331,634
https://api.github.com/repos/huggingface/datasets/issues/2797
https://github.com/huggingface/datasets/issues/2797
2,797
Make creating/editing dataset cards easier, by editing on site and dumping info from test command.
open
0
2021-08-13T11:54:49
2021-08-14T08:42:09
null
richarddwang
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** Creating and editing dataset cards should be easy, but currently it is not: - If someone else knows information that I don't (dataset bias, dataset curation, supported tasks, ...), they need to know that the description on hf.co comes from the README.md under github huggingface/datasets/datasets/<the dataset>, and be willing to make a PR to add or fix the information. - A lot of information is also saved in `dataset_info.json` (citation, description), but it still needs to be written down in README.md again. - A contributor needs to pip install and start a local server just for tagging the dataset's size. And the contributor may be creating the dataset on a lab server, which can't open a browser. - If anyone proposes a new tag, it doesn't show up in the list that another creator sees (a Stack Overflow-like flow may be ideal). - The dataset card generator web app doesn't generate the necessary subsection `Contributions` for us. **Describe the solution you'd like** - Everyone (or at least the author/contributor) can edit the description, information and tags of the dataset on the hf.co website, just like Wikipedia + Stack Overflow. - We can infer the actual data size, citation, data instances, ... from `dataset_info.json` and `dataset.arrow` via `datasets-cli test`.
false
970,235,846
https://api.github.com/repos/huggingface/datasets/issues/2796
https://github.com/huggingface/datasets/pull/2796
2,796
add cedr dataset
closed
1
2021-08-13T09:37:35
2021-08-27T16:01:36
2021-08-27T16:01:36
naumov-al
[]
null
true