| Column | Type | Range |
| --- | --- | --- |
| id | int64 | 599M to 3.29B |
| url | stringlengths | 58 to 61 |
| html_url | stringlengths | 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | stringlengths | 1 to 290 |
| state | stringclasses | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s]date | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s]date | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s]date | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | stringlengths | 3 to 26 |
| labels | listlengths | 0 to 4 |
| body | stringlengths | 0 to 228k |
| is_pull_request | bool | 2 classes |
969,728,545
https://api.github.com/repos/huggingface/datasets/issues/2794
https://github.com/huggingface/datasets/issues/2794
2,794
Warnings and documentation about pickling incorrect
open
0
2021-08-12T23:09:13
2021-08-12T23:09:31
null
mbforbes
[ "bug" ]
## Describe the bug I have a docs bug and a closely related docs enhancement suggestion! ### Bug The warning and documentation say "either `dill` or `pickle`" for fingerprinting. But it seems that `dill`, which is installed by `datasets` by default, _must_ work, or else the fingerprinting fails. Warning: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L262 Docs: > For a transform to be hashable, it needs to be pickleable using dill or pickle. > – [docs](https://huggingface.co/docs/datasets/processing.html#fingerprinting) For my code, `pickle` works, but `dill` fails. The `dill` failure has already been reported in https://github.com/huggingface/datasets/issues/2643. However, the `dill` failure causes a hashing failure in the datasets library, without any backing off to `pickle`. This implies that it's not the case that either `dill` **or** `pickle` can work, but that `dill` must work if it is installed. I think this is more accurate wording, since it is installed and used by default: https://github.com/huggingface/datasets/blob/c93525dc291346e54212567fa72d7d607befe937/setup.py#L83 ... and the hashing will fail if `dill` fails. ### Enhancement I think it'd be very helpful to add to the documentation how to debug hashing failures. It took me a while to figure out how to diagnose this. There is a very nice two-liner by @lhoestq in https://github.com/huggingface/datasets/issues/2516#issuecomment-865173139: ```python from datasets.fingerprint import Hasher Hasher.hash(my_object) ``` I think adding this to the docs will help future users quickly debug any hashing troubles of their own :-) ## Steps to reproduce the bug `dill` but not `pickle` hashing failure in https://github.com/huggingface/datasets/issues/2643 ## Expected results If either `dill` or `pickle` can successfully hash, the hashing will succeed. ## Actual results If `dill` cannot hash, the hashing fails, even if `pickle` can. ## Environment info - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
false
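The two-liner quoted in issue 2794 above can be wrapped into a small self-check. This is a minimal sketch (the helper name `check_hashable` and the sample transform are hypothetical, not part of `datasets`):

```python
from datasets.fingerprint import Hasher

def check_hashable(obj):
    # Try to hash `obj` the same way `datasets` fingerprints `.map()` transforms.
    try:
        fingerprint = Hasher.hash(obj)
        print(f"hashable, fingerprint: {fingerprint}")
        return fingerprint
    except Exception as err:
        # dill failed to pickle the object; datasets falls back to a random fingerprint.
        print(f"not hashable: {err}")
        return None

def my_transform(batch):
    return {"text": [t.lower() for t in batch["text"]]}

check_hashable(my_transform)
```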
968,967,773
https://api.github.com/repos/huggingface/datasets/issues/2793
https://github.com/huggingface/datasets/pull/2793
2,793
Fix type hint for data_files
closed
0
2021-08-12T14:42:37
2021-08-12T15:35:29
2021-08-12T15:35:29
albertvillanova
[]
Fix type hint for `data_files` in signatures and docstrings.
true
968,650,274
https://api.github.com/repos/huggingface/datasets/issues/2792
https://github.com/huggingface/datasets/pull/2792
2,792
Update: GooAQ - add train/val/test splits
closed
2
2021-08-12T11:40:18
2021-08-27T15:58:45
2021-08-27T15:58:14
bhavitvyamalik
[]
The [GooAQ](https://github.com/allenai/gooaq) dataset was recently updated with train/val/test splits. This PR adds the updated GooAQ with those splits and updates the README as well.
true
968,360,314
https://api.github.com/repos/huggingface/datasets/issues/2791
https://github.com/huggingface/datasets/pull/2791
2,791
Fix typo in cnn_dailymail
closed
0
2021-08-12T08:38:42
2021-08-12T11:17:59
2021-08-12T11:17:59
omaralsayed
[]
null
true
967,772,181
https://api.github.com/repos/huggingface/datasets/issues/2790
https://github.com/huggingface/datasets/pull/2790
2,790
Fix typo in test_dataset_common
closed
0
2021-08-12T01:10:29
2021-08-12T11:31:29
2021-08-12T11:31:29
nateraw
[]
null
true
967,361,934
https://api.github.com/repos/huggingface/datasets/issues/2789
https://github.com/huggingface/datasets/pull/2789
2,789
Updated dataset description of DaNE
closed
1
2021-08-11T19:58:48
2021-08-12T16:10:59
2021-08-12T16:06:01
KennethEnevoldsen
[]
null
true
967,149,389
https://api.github.com/repos/huggingface/datasets/issues/2788
https://github.com/huggingface/datasets/issues/2788
2,788
How to sample every file in a list of files making up a split in a dataset when loading?
closed
1
2021-08-11T17:43:21
2023-07-25T17:40:50
2023-07-25T17:40:50
brijow
[]
I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=['train[:8]', 'test[:8]', 'val[:8]'] ) ``` However, this only selects the first 8 rows from train_file1, test_file1, val_file1, since they are the first files in the lists. I'm trying to formulate a split argument that can sample from each file specified in my list of files that make up each split. Is this type of splitting supported? If so, how can I do it?
false
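One workaround for the question in issue 2788 above, assuming the goal is the first 8 rows of every file, is to slice each file separately and then concatenate the slices (file names below are placeholders):

```python
from datasets import load_dataset, concatenate_datasets

# Placeholder CSV paths; substitute your own files.
splits = {
    "train": ["train_file1.csv", "train_file2.csv"],
    "test": ["test_file1.csv", "test_file2.csv"],
    "val": ["val_file1.csv", "val_file2.csv"],
}

sampled = {}
for split_name, files in splits.items():
    # Take the first 8 rows of *each* file, then stitch the pieces back together.
    parts = [
        load_dataset("csv", data_files={"train": f}, split="train[:8]")
        for f in files
    ]
    sampled[split_name] = concatenate_datasets(parts)

print({name: len(ds) for name, ds in sampled.items()})  # 8 rows per file, so 16 per split here
```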
967,018,406
https://api.github.com/repos/huggingface/datasets/issues/2787
https://github.com/huggingface/datasets/issues/2787
2,787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
closed
9
2021-08-11T16:19:01
2023-10-03T12:39:25
2021-08-18T15:09:18
jinec
[ "bug" ]
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 250, in main datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 718, in load_dataset use_auth_token=use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\load.py", line 320, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "C:\install\Anaconda3\envs\huggingface\lib\site-packages\datasets\utils\file_utils.py", line 623, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py Trying to do python run_glue.py --model_name_or_path bert-base-cased --task_name mrpc --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3 --output_dir ./tmp/mrpc/ Is this something on my end? From what I can tell, this was re-fixeded by @fullyz a few months ago. Thank you!
false
966,282,934
https://api.github.com/repos/huggingface/datasets/issues/2786
https://github.com/huggingface/datasets/pull/2786
2,786
Support streaming compressed files
closed
0
2021-08-11T09:02:06
2021-08-17T05:28:39
2021-08-16T06:36:19
albertvillanova
[]
Add support to stream compressed files (current options in fsspec): - bz2 - lz4 - xz - zstd cc: @lewtun
true
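The compression protocols listed in PR 2786 above are those that fsspec can wrap transparently. As a rough illustration of the underlying mechanism (plain fsspec, not the `datasets` streaming internals; the file path is a placeholder):

```python
import fsspec

# Stream a bz2-compressed text file without decompressing it to disk first;
# an https:// URL works the same way as a local path.
with fsspec.open("data.jsonl.bz2", mode="rt", compression="bz2") as f:
    for i, line in enumerate(f):
        print(line.rstrip())
        if i == 2:
            break
```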
965,461,382
https://api.github.com/repos/huggingface/datasets/issues/2783
https://github.com/huggingface/datasets/pull/2783
2,783
Add KS task to SUPERB
closed
5
2021-08-10T22:14:07
2021-08-12T16:45:01
2021-08-11T20:19:17
anton-l
[]
Add the KS (keyword spotting) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051). - [s3prl instructions](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/README.md#ks-keyword-spotting) - [s3prl implementation](https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/speech_commands/dataset.py) - [TFDS implementation](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/audio/speech_commands.py) Some notable quirks: - The dataset is originally single-archive (train+val+test all in one), but the test set has a "canonical" distribution in a separate archive, which is also used here (see `_split_ks_files()`). - The `_background_noise_`/`_silence_` audio files are much longer than others, so they require some sort of slicing for downstream training. I decided to leave the implementation of that up to the users, since TFDS and s3prl take different approaches (either slicing wavs deterministically, or subsampling randomly at runtime) Related to #2619.
true
964,858,439
https://api.github.com/repos/huggingface/datasets/issues/2782
https://github.com/huggingface/datasets/pull/2782
2,782
Fix renaming of corpus_bleu args
closed
0
2021-08-10T11:02:34
2021-08-10T11:16:07
2021-08-10T11:16:07
albertvillanova
[]
The latest `sacrebleu` release (v2.0.0) renamed the `sacrebleu.corpus_bleu` args from `(sys_stream, ref_streams)` to `(hypotheses, references)`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes the args positionally, without parameter names, so that the call is valid for all versions of `sacrebleu`. This is a partial hotfix of #2781. Close #2781.
true
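The version-agnostic call described in PR 2782 above amounts to passing the two streams positionally. A hedged sketch, assuming `sacrebleu` is installed:

```python
import sacrebleu

hypotheses = ["the cat sat on the mat"]
references = [["the cat is sitting on the mat"]]  # outer list: one entry per reference stream

# Positional call: valid whether the parameters are named
# (sys_stream, ref_streams) as in sacrebleu < 2.0.0 or
# (hypotheses, references) as in sacrebleu >= 2.0.0.
score = sacrebleu.corpus_bleu(hypotheses, references)
print(score.score)
```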
964,805,351
https://api.github.com/repos/huggingface/datasets/issues/2781
https://github.com/huggingface/datasets/issues/2781
2,781
Latest v2.0.0 release of sacrebleu has broken some metrics
closed
0
2021-08-10T09:59:41
2021-08-10T11:16:07
2021-08-10T11:16:07
albertvillanova
[ "bug" ]
## Describe the bug After the `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of the `datasets` metrics are broken: - The default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists: - #2739 - #2778 - BLEU tokenizers are no longer accessible via `sacrebleu.TOKENIZERS`: - #2779 - The `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hypotheses, references)`: - #2782
false
964,794,764
https://api.github.com/repos/huggingface/datasets/issues/2780
https://github.com/huggingface/datasets/pull/2780
2,780
VIVOS dataset for Vietnamese ASR
closed
0
2021-08-10T09:47:36
2021-08-12T11:09:30
2021-08-12T11:09:30
binh234
[]
null
true
964,775,085
https://api.github.com/repos/huggingface/datasets/issues/2779
https://github.com/huggingface/datasets/pull/2779
2,779
Fix sacrebleu tokenizers
closed
0
2021-08-10T09:24:27
2021-08-10T11:03:08
2021-08-10T10:57:54
albertvillanova
[]
The latest `sacrebleu` release (v2.0.0) removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR hotfixes the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()`. Eventually, this should be fixed properly so that only public functions are used. This is a partial hotfix of #2781.
true
964,737,422
https://api.github.com/repos/huggingface/datasets/issues/2778
https://github.com/huggingface/datasets/pull/2778
2,778
Do not pass tokenize to sacrebleu
closed
0
2021-08-10T08:40:37
2021-08-10T10:03:37
2021-08-10T10:03:37
albertvillanova
[]
The latest `sacrebleu` release (v2.0.0) removed `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR stops passing `tokenize` to `sacrebleu` (note that the user cannot pass it anyway), so `sacrebleu` will use its own default, wherever that default is defined and however it is called. Related to #2739. This is a partial hotfix of #2781.
true
964,696,380
https://api.github.com/repos/huggingface/datasets/issues/2777
https://github.com/huggingface/datasets/pull/2777
2,777
Use packaging to handle versions
closed
0
2021-08-10T07:51:39
2021-08-18T13:56:27
2021-08-18T13:56:27
albertvillanova
[]
Use packaging module to handle/validate/check versions of Python packages. Related to #2769.
true
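A minimal sketch of the kind of check `packaging` enables, including dev versions like the source-built PyArrow from #2769 (the version strings here are illustrative):

```python
from packaging import version

installed = version.parse("2.1.0.dev612")  # e.g. a PyArrow build from source
required = version.parse("1.0.0")

# packaging understands dev/pre-release segments, so no manual string surgery
# (such as stripping everything after the last '.') is needed.
print(installed >= required)    # True
print(installed.is_devrelease)  # True
print(installed.base_version)   # "2.1.0"
```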
964,400,596
https://api.github.com/repos/huggingface/datasets/issues/2776
https://github.com/huggingface/datasets/issues/2776
2,776
document `config.HF_DATASETS_OFFLINE` and precedence
open
0
2021-08-09T21:23:17
2021-08-09T21:23:17
null
stas00
[ "enhancement" ]
https://github.com/huggingface/datasets/pull/1976 implemented `HF_DATASETS_OFFLINE`, but: 1. `config.HF_DATASETS_OFFLINE` is not documented 2. the precedence is not documented (env, config) I'm thinking it probably should be similar to what it says https://huggingface.co/docs/datasets/loading_datasets.html#from-the-huggingface-hub about `datasets.config.IN_MEMORY_MAX_SIZE`: Quote: > The default in 🤗 Datasets is to memory-map the dataset on disk unless you set datasets.config.IN_MEMORY_MAX_SIZE different from 0 bytes (default). In that case, the dataset will be copied in-memory if its size is smaller than datasets.config.IN_MEMORY_MAX_SIZE bytes, and memory-mapped otherwise. This behavior can be enabled by setting either the configuration option datasets.config.IN_MEMORY_MAX_SIZE (higher precedence) or the environment variable HF_DATASETS_IN_MEMORY_MAX_SIZE (lower precedence) to nonzero. Context: trying to use `config.HF_DATASETS_OFFLINE` here: https://github.com/bigscience-workshop/Megatron-DeepSpeed/pull/48 but are uncertain if it's safe, since it's not documented as a public API. Thank you! @lhoestq, @albertvillanova
false
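To illustrate the two entry points whose precedence issue 2776 above asks to document (which one wins, and whether runtime changes are honored everywhere, is exactly what still needs documenting; treat the ordering below as an assumption):

```python
import os

# Option 1: environment variable, set before `datasets` is imported.
os.environ["HF_DATASETS_OFFLINE"] = "1"

import datasets
import datasets.config

# Option 2: the config attribute, flipped at runtime.
datasets.config.HF_DATASETS_OFFLINE = True

print(datasets.config.HF_DATASETS_OFFLINE)
```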
964,303,626
https://api.github.com/repos/huggingface/datasets/issues/2775
https://github.com/huggingface/datasets/issues/2775
2,775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
closed
3
2021-08-09T19:28:51
2024-01-26T15:05:36
2024-01-26T15:05:35
mbforbes
[ "bug" ]
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below. Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected: https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265 However, what's not expected is that the `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like: ```text Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow ``` The path is exactly the same each run (e.g., last 26 runs). This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000. I think that https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248 ... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below. ## Steps to reproduce the bug ```python # Contents of print_fingerprint.py from transformers import set_seed from datasets.fingerprint import generate_random_fingerprint set_seed(42) print(generate_random_fingerprint()) ``` ```bash for i in {0..10}; do python print_fingerprint.py done 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d 1c80317fa3b1799d ``` ## Expected results After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused. ## Actual results After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.9.0 - Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31 - Python version: 3.9.6 - PyArrow version: 4.0.1
false
963,932,199
https://api.github.com/repos/huggingface/datasets/issues/2774
https://github.com/huggingface/datasets/pull/2774
2,774
Prevent .map from using multiprocessing when loading from cache
closed
6
2021-08-09T12:11:38
2021-09-09T10:20:28
2021-09-09T10:20:28
thomasw21
[]
## Context On our setup, we use different setup to train vs proprocessing datasets. Usually we are able to obtain a high number of cpus to preprocess, which allows us to use `num_proc` however we can't use as many during training phase. Currently if we use `num_proc={whatever the preprocessing value was}` we load from cache, but we get: ``` Traceback (most recent call last): File "lib/python3.8/site-packages/multiprocess/pool.py", line 131, in worker put((job, i, result)) File "lib/python3.8/site-packages/multiprocess/queues.py", line 371, in put self._writer.send_bytes(obj) File "lib/python3.8/site-packages/multiprocess/connection.py", line 203, in send_bytes self._send_bytes(m[offset:offset + size]) File "lib/python3.8/site-packages/multiprocess/connection.py", line 414, in _send_bytes self._send(header + buf) File "lib/python3.8/site-packages/multiprocess/connection.py", line 371, in _send n = write(self._handle, buf) BrokenPipeError: [Errno 32] Broken pipe ``` Our current guess, is that we're spawning too many processes compared to the number of cpus available, and it's running OOM. Also we're loading this in DDP setting which means that for each gpu, I need to spawn a high number of processes to match the preprocessing fingerprint. Instead what we suggest: - Allow loading shard sequentially, sharing the same fingerprint as the multiprocessed one, in order to leverage multiprocessing when we actually generate the cache, and remove it when loading from cache. ## Current issues ~I'm having a hard time making fingerprints match. For some reason, the multiprocessing and the sequential version generate two different hash.~ **EDIT**: Turns out multiprocessing and sequential have different `transform` value for fingerprinting (check `fingerprint_transform`) when running `_map_single`: - sequential : `datasets.arrow_dataset.Dataset._map_single` - multiprocessing: `datasets.arrow_dataset._map_single` This discrepancy is caused by multiprocessing pickling the transformer function, it doesn't seem to keep the `Dataset` hierarchy. I'm still unclear on why `func.__qual_name__` isn't handled correctly in multiprocessing. But replacing `__qualname__` by `__name__` fixes the issue. ## What was done ~We try to prevent the usage of multiprocessing when loading a dataset. Instead we load all cached shards sequentially.~ I couldn't find a nice way to obtain the cached_file_name and check they all exist before deciding to use the multiprocessing flow or not. Instead I expose an optional boolean `sequential` in `map` method. 
## TODO - [x] Check that the multiprocessed version and the sequential version output the same output - [x] Check that sequential can load multiprocessed - [x] Check that multiprocessed can load sequential ## Test ```python from datasets import load_dataset from multiprocessing import Pool import random def process(batch, rng): length = len(batch["text"]) return {**batch, "processed_text": [f"PROCESSED {rng.random()}" for _ in range(length)]} dataset = load_dataset("stas/openwebtext-10k", split="train") print(dataset.column_names) print(type(dataset)) rng = random.Random(42) dataset1 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}) # This one should be loaded from cache rng = random.Random(42) dataset2 = dataset.map(process, batched=True, batch_size=50, num_proc=4, fn_kwargs={"rng": rng}, sequential=True) # Just to check that the random generator was correct print(dataset1[-1]["processed_text"]) print(dataset2[-1]["processed_text"]) ``` ## Other solutions I chose to load everything sequentially, but we can probably find a way to load shards in parallel using another number of workers (essentially this would be an argument not used for fingerprinting, allowing to allow `m` shards using `n` processes, which would be very useful when same dataset have to be loaded on two different setup, and we still want to leverage cache). Also we can use a env variable similarly to `TOKENIZERS_PARALLELISM` as this seems generally setup related (though this changes slightly if we use multiprocessing). cc @lhoestq (since I had asked you previously on `num_proc` being used for fingerprinting). Don't know if this is acceptable.
true
963,730,497
https://api.github.com/repos/huggingface/datasets/issues/2773
https://github.com/huggingface/datasets/issues/2773
2,773
Remove dataset_infos.json
closed
1
2021-08-09T07:43:19
2024-05-04T14:52:10
2024-05-04T14:52:10
albertvillanova
[ "enhancement", "generic discussion" ]
**Is your feature request related to a problem? Please describe.** As discussed, there are infos in the `dataset_infos.json` which are redundant and we could have them only in the README file. Others could be migrated to the README, like: "dataset_size", "size_in_bytes", "download_size", "splits.split_name.[num_bytes, num_examples]",... However, there are others that do not seem too meaningful in the README, like the checksums. **Describe the solution you'd like** Open a discussion to decide what to do with the `dataset_infos.json` files: which information to be migrated and/or which information to be kept. cc: @julien-c @lhoestq
false
963,348,834
https://api.github.com/repos/huggingface/datasets/issues/2772
https://github.com/huggingface/datasets/issues/2772
2,772
Remove returned feature constrain
open
0
2021-08-08T04:01:30
2021-08-08T08:48:01
null
PosoSAgapo
[ "enhancement" ]
In the current version, the value returned by the map function has to be a list or ndarray, but this makes it unsuitable for many tasks. In NLP, many features are sparse (e.g. verb words, noun chunks): if we assign different values to different words but only score the useful ones (such as verbs), the result is a large sparse matrix. At scale, saving that matrix densely takes a lot of disk space and makes it hard to read back, so the usual approach is to save it in sparse form. However, NumPy does not support sparse arrays, so I have to use PyTorch or scipy to convert the matrix into a special sparse format, which cannot be turned into a list or ndarray. This violates the feature constraints of the map function. I appreciate the convenience of the Datasets package, but I do not think the compulsory datatype constraint is necessary; in some cases we simply cannot convert the value into a list or ndarray. Is there any way to fix this, or a way to disable the compulsory datatype constraint?
false
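One way to stay within the list/ndarray constraint described in issue 2772 above is to store each example's sparse vector in coordinate form and rebuild the scipy matrix after loading. A sketch with a hypothetical verb vocabulary and scorer:

```python
from datasets import Dataset
from scipy.sparse import csr_matrix

VOCAB = {"run": 0, "jump": 1, "swim": 2}  # hypothetical verb vocabulary

def encode_sparse(example):
    # Hypothetical scoring: 1.0 for every known verb present in the text.
    indices = [VOCAB[w] for w in example["text"].split() if w in VOCAB]
    values = [1.0] * len(indices)
    # Plain lists are accepted by .map(); the sparse matrix is rebuilt only when needed.
    return {"verb_indices": indices, "verb_values": values}

ds = Dataset.from_dict({"text": ["dogs run and swim", "cats jump"]})
ds = ds.map(encode_sparse)

# Rebuild a scipy CSR matrix outside of .map():
rows = [i for i, idx in enumerate(ds["verb_indices"]) for _ in idx]
cols = [j for idx in ds["verb_indices"] for j in idx]
vals = [v for val in ds["verb_values"] for v in val]
sparse = csr_matrix((vals, (rows, cols)), shape=(len(ds), len(VOCAB)))
print(sparse.toarray())
```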
963,257,036
https://api.github.com/repos/huggingface/datasets/issues/2771
https://github.com/huggingface/datasets/pull/2771
2,771
[WIP][Common Voice 7] Add common voice 7.0
closed
2
2021-08-07T16:01:10
2021-12-06T23:24:02
2021-12-06T23:24:02
patrickvonplaten
[]
This PR allows to load the new common voice dataset manually as explained when doing: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab") ``` => ``` Please follow the manual download instructions: You need to manually the dataset from `https://commonvoice.mozilla.org/en/datasets`. Make sure you choose the version `Common Voice Corpus 7.0`. Choose a language of your choice and find the corresponding language-id, *e.g.*, `Abkhaz` with language-id `ab`. The following language-ids are available: ['ab', 'ar', 'as', 'az', 'ba', 'bas', 'be', 'bg', 'br', 'ca', 'cnh', 'cs', 'cv', 'cy', 'de', 'dv', 'el', 'en', 'eo', 'es', 'et', 'eu', 'fa', 'fi', 'fr', 'fy-NL', 'ga-IE', 'gl', 'gn', 'ha', 'hi', 'hsb', 'hu', 'hy-AM', 'ia', 'id', 'it', 'ja', 'ka', 'kab', 'kk', 'kmr', 'ky', 'lg', 'lt', 'lv', 'mn', 'mt', 'nl', 'or', 'pa-IN', 'pl', 'pt', 'rm-sursilv', 'rm-vallader', 'ro', 'ru', 'rw', 'sah', 'sk', 'sl', 'sr', 'sv-SE', 'ta', 'th', 'tr', 'tt', 'ug', 'uk', 'ur', 'uz', 'vi', 'vot', 'zh-CN', 'zh-HK', 'zh-TW'] Next, you will have to enter your email address to download the dataset in the `tar.gz` format. Save the file under <path-to-file>. The file should then be extracted with: ``tar -xvzf <path-to-file>`` which will extract a folder called ``cv-corpus-7.0-2021-07-21``. The dataset can then be loaded with `datasets.load_dataset("common_voice", <language-id>, data_dir="<path-to-'cv-corpus-7.0-2021-07-21'-folder>", ignore_verifications=True). ``` Having followed those instructions one can then download the data as follows: ```python from datasets import load_dataset ds = load_dataset("./datasets/datasets/common_voice_7", "ab", data_dir="./cv-corpus-7.0-2021-07-21/", ignore_verifications=True) ``` ## TODO - [ ] Discuss naming. Is the name ok here "common_voice_7"? The dataset script differs only really in one point from `common_voice.py` in that all the metadata is different (more hours etc...) and that it has to use manual data dir for now - [ ] Ideally we should get a bundled download link. For `common_voice.py` there is a bundled download link: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/{}.tar.gz` that allows one to directly download the data. However such a link is missing for Common Voice 7. I guess we should try to contact common voice about it and ask whether we could host the data or help otherwise somehow. See: https://github.com/common-voice/common-voice-bundler/issues/15 cc @yjernite - [ ] I did not compute the dataset.json and it would mean that I'd have to download 76 datasets totalling around 1TB manually before running the checksum command. This just takes too much time. For now the user will have to add a `ignore_verifications=True` to download the data. This step would also be much easier if we could get a bundled link - [ ] Add dummy data
true
963,246,512
https://api.github.com/repos/huggingface/datasets/issues/2770
https://github.com/huggingface/datasets/pull/2770
2,770
Add support for fast tokenizer in BertScore
closed
0
2021-08-07T15:00:03
2021-08-09T12:34:43
2021-08-09T11:16:25
mariosasko
[]
This PR adds support for a fast tokenizer in BertScore, which has been added recently to the lib. Fixes #2765
true
963,240,802
https://api.github.com/repos/huggingface/datasets/issues/2769
https://github.com/huggingface/datasets/pull/2769
2,769
Allow PyArrow from source
closed
0
2021-08-07T14:26:44
2021-08-09T15:38:39
2021-08-09T15:38:39
patrickvonplaten
[]
When installing pyarrow from source the version is: ```python >>> import pyarrow; pyarrow.__version__ '2.1.0.dev612' ``` -> however this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed.
true
963,229,173
https://api.github.com/repos/huggingface/datasets/issues/2768
https://github.com/huggingface/datasets/issues/2768
2,768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
closed
2
2021-08-07T13:17:29
2021-08-09T11:26:43
2021-08-09T11:26:43
lvwerra
[ "bug" ]
## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ds = ds.add_column('ones', [1]*128) ``` ## Expected results I would expect a new column named `ones` filled with `1`. When I check the length of `ds` it says `128`. Interestingly, it works when calling `ds = ds.map(lambda x: x)` before adding the column. ## Actual results Specify the actual results or traceback. ```python --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) /var/folders/l4/2905jygx4tx5jv8_kn03vxsw0000gn/T/ipykernel_6301/868709636.py in <module> 1 from datasets import load_dataset 2 ds = load_dataset("tweets_hate_speech_detection")['train'].select(range(128)) ----> 3 ds = ds.add_column('ones', [0]*128) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 183 } 184 # apply actual function --> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 187 # re-apply format to the output ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint) 2965 column_table = InMemoryTable.from_pydict({name: column}) 2966 # Concatenate tables horizontally -> 2967 table = ConcatenationTable.from_tables([self._data, column_table], axis=1) 2968 # Update features 2969 info = self.info.copy() ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis) 715 table_blocks = to_blocks(table) 716 blocks = _extend_blocks(blocks, table_blocks, axis=axis) --> 717 return cls.from_blocks(blocks) 718 719 @property ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks) 663 return cls(table, blocks) 664 else: --> 665 table = cls._concat_blocks_horizontally_and_vertically(blocks) 666 return cls(table, blocks) 667 ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks_horizontally_and_vertically(cls, blocks) 623 if not tables: 624 continue --> 625 pa_table_horizontally_concatenated = cls._concat_blocks(tables, axis=1) 626 pa_tables_to_concat_vertically.append(pa_table_horizontally_concatenated) 627 return cls._concat_blocks(pa_tables_to_concat_vertically, axis=0) ~/git/semantic-clustering/env/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis) 612 else: 613 for name, col in zip(table.column_names, table.columns): --> 614 pa_table = pa_table.append_column(name, col) 615 return pa_table 616 else: ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column() 
~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/git/semantic-clustering/env/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() ArrowInvalid: Added column's length must match table's length. Expected length 31962 but got length 128 ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 5.0.0
false
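Until the length bookkeeping in issue 2768 above is fixed, the `ds.map(lambda x: x)` trick the reporter found (or, equivalently, materializing the selection) can serve as a workaround. A hedged sketch:

```python
from datasets import load_dataset

ds = load_dataset("tweets_hate_speech_detection")["train"].select(range(128))

# Workaround: materialize the selection so the underlying Arrow table
# really has 128 rows before a new column is appended.
ds = ds.flatten_indices()  # or: ds = ds.map(lambda x: x)
ds = ds.add_column("ones", [1] * 128)
print(len(ds), ds.column_names)
```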
963,002,120
https://api.github.com/repos/huggingface/datasets/issues/2767
https://github.com/huggingface/datasets/issues/2767
2,767
equal operation to perform unbatch for huggingface datasets
closed
5
2021-08-06T19:45:52
2022-03-07T13:58:00
2022-03-07T13:58:00
dorooddorood606
[ "bug" ]
Hi, I need the equivalent of TensorFlow's "unbatch" operation on a Hugging Face dataset, but I could not find it; could you kindly tell me how to do it? Here is the problem I am trying to solve: I am working with the "record" dataset in SuperGLUE and need to replicate each entry of the dataset once for each answer, similar to what T5 originally did: https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L925 Here is an example. A typical example from ReCoRD might look like { 'passage': 'This is the passage.', 'query': 'A @placeholder is a bird.', 'entities': ['penguin', 'potato', 'pigeon'], 'answers': ['penguin', 'pigeon'], } and I need a processor that turns it into the following two examples: { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'penguin', } and { 'inputs': 'record query: A @placeholder is a bird. entities: penguin, ' 'potato, pigeon passage: This is the passage.', 'targets': 'pigeon', } Doing this requires an unbatch operation, since each entry can map to multiple samples depending on the number of answers. I am not sure how to perform this with the huggingface datasets library and would greatly appreciate your help @lhoestq. Thank you very much.
false
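The "unbatch" pattern asked about in issue 2767 above can be expressed with a batched `.map()` that returns more rows than it receives. A sketch using the ReCoRD-style example from the issue (field names follow that example):

```python
from datasets import Dataset

ds = Dataset.from_dict({
    "passage": ["This is the passage."],
    "query": ["A @placeholder is a bird."],
    "entities": [["penguin", "potato", "pigeon"]],
    "answers": [["penguin", "pigeon"]],
})

def unbatch(batch):
    inputs, targets = [], []
    for passage, query, entities, answers in zip(
        batch["passage"], batch["query"], batch["entities"], batch["answers"]
    ):
        prefix = f"record query: {query} entities: {', '.join(entities)} passage: {passage}"
        # Emit one output row per answer: a batched map may return a
        # different number of rows than it received.
        for answer in answers:
            inputs.append(prefix)
            targets.append(answer)
    return {"inputs": inputs, "targets": targets}

ds = ds.map(unbatch, batched=True, remove_columns=ds.column_names)
print(ds[0]["targets"], ds[1]["targets"])  # penguin pigeon
```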
962,994,198
https://api.github.com/repos/huggingface/datasets/issues/2766
https://github.com/huggingface/datasets/pull/2766
2,766
fix typo (ShuffingConfig -> ShufflingConfig)
closed
0
2021-08-06T19:31:40
2021-08-10T14:17:03
2021-08-10T14:17:02
daleevans
[]
pretty straightforward, it should be Shuffling instead of Shuffing
true
962,861,395
https://api.github.com/repos/huggingface/datasets/issues/2765
https://github.com/huggingface/datasets/issues/2765
2,765
BERTScore Error
closed
1
2021-08-06T15:58:57
2021-08-09T11:16:25
2021-08-09T11:16:25
gagan3012
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references,lang='en') ``` # Bug `TypeError: get_hash() missing 1 required positional argument: 'use_fast_tokenizer'` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: - Platform: Colab - Python version: - PyArrow version:
false
962,554,799
https://api.github.com/repos/huggingface/datasets/issues/2764
https://github.com/huggingface/datasets/pull/2764
2,764
Add DER metric for SUPERB speaker diarization task
closed
1
2021-08-06T09:12:36
2023-07-11T09:35:23
2023-07-11T09:35:23
albertvillanova
[ "transfer-to-evaluate" ]
null
true
961,895,523
https://api.github.com/repos/huggingface/datasets/issues/2763
https://github.com/huggingface/datasets/issues/2763
2,763
English wikipedia datasets is not clean
closed
1
2021-08-05T14:37:24
2023-07-25T17:43:04
2023-07-25T17:43:04
lucadiliello
[ "bug" ]
## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.en') print(w['train'][0]['text']) ``` > 'Yangliuqing () is a market town in Xiqing District, in the western suburbs of Tianjin, People\'s Republic of China. Despite its relatively small size, it has been named since 2006 in the "famous historical and cultural market towns in China".\n\nIt is best known in China for creating nianhua or Yangliuqing nianhua. For more than 400 years, Yangliuqing has in effect specialised in the creation of these woodcuts for the New Year. wood block prints using vivid colourschemes to portray traditional scenes of children\'s games often interwoven with auspiciouse objects.\n\n, it had 27 residential communities () and 25 villages under its administration.\n\nShi Family Grand Courtyard\n\nShi Family Grand Courtyard (Tiānjīn Shí Jiā Dà Yuàn, 天津石家大院) is situated in Yangliuqing Town of Xiqing District, which is the former residence of wealthy merchant Shi Yuanshi - the 4th son of Shi Wancheng, one of the eight great masters in Tianjin. First built in 1875, it covers over 6,000 square meters, including large and small yards and over 200 folk houses, a theater and over 275 rooms that served as apartments and places of business and worship for this powerful family. Shifu Garden, which finished its expansion in October 2003, covers 1,200 square meters, incorporates the elegance of imperial garden and delicacy of south garden. Now the courtyard of Shi family covers about 10,000 square meters, which is called the first mansion in North China. Now it serves as the folk custom museum in Yangliuqing, which has a large collection of folk custom museum in Yanliuqing, which has a large collection of folk art pieces like Yanliuqing New Year pictures, brick sculpture.\n\nShi\'s ancestor came from Dong\'e County in Shandong Province, engaged in water transport of grain. As the wealth gradually accumulated, the Shi Family moved to Yangliuqing and bought large tracts of land and set up their residence. Shi Yuanshi came from the fourth generation of the family, who was a successful businessman and a good household manager, and the residence was thus enlarged for several times until it acquired the present scale. It is believed to be the first mansion in the west of Tianjin.\n\nThe residence is symmetric based on the axis formed by a passageway in the middle, on which there are four archways. On the east side of the courtyard, there are traditional single-story houses with rows of rooms around the four sides, which was once the living area for the Shi Family. The rooms on north side were the accountants\' office. On the west are the major constructions including the family hall for worshipping Buddha, theater and the south reception room. On both sides of the residence are side yard rooms for maids and servants.\n\nToday, the Shi mansion, located in the township of Yangliuqing to the west of central Tianjin, stands as a surprisingly well-preserved monument to China\'s pre-revolution mercantile spirit. It also serves as an on-location shoot for many of China\'s popular historical dramas. 
Many of the rooms feature period furniture, paintings and calligraphy, and the extensive Shifu Garden.\n\nPart of the complex has been turned into the Yangliuqing Museum, which includes displays focused on symbolic aspects of the courtyards\' construction, local folk art and customs, and traditional period furnishings and crafts.\n\n**See also \n\nList of township-level divisions of Tianjin\n\nReferences \n\n http://arts.cultural-china.com/en/65Arts4795.html\n\nCategory:Towns in Tianjin'** ## Expected results I expect no junk in the data. ## Actual results Specify the actual results or traceback. ## Environment info - `datasets` version: 1.10.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.5 - PyArrow version: 3.0.0
false
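A rough post-processing filter in the spirit of issue 2763 above, cutting each article at trailing "See also" / "References" / "Category:" material. The section markers are heuristics, not an exhaustive list:

```python
import re
from datasets import load_dataset

# Heuristic cut points; real articles may need a longer marker list.
SECTION_MARKERS = re.compile(r"\n(See also|References|External links|Category:)", re.IGNORECASE)

def strip_trailing_sections(example):
    text = example["text"]
    match = SECTION_MARKERS.search(text)
    return {"text": text[: match.start()] if match else text}

w = load_dataset("wikipedia", "20200501.en", split="train")
w = w.map(strip_trailing_sections)
```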
961,652,046
https://api.github.com/repos/huggingface/datasets/issues/2762
https://github.com/huggingface/datasets/issues/2762
2,762
Add RVL-CDIP dataset
closed
3
2021-08-05T09:57:05
2022-04-21T17:15:41
2022-04-21T17:15:41
NielsRogge
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** RVL-CDIP - **Description:** The RVL-CDIP (Ryerson Vision Lab Complex Document Information Processing) dataset consists of 400,000 grayscale images in 16 classes, with 25,000 images per class. There are 320,000 training images, 40,000 validation images, and 40,000 test images. The images are sized so their largest dimension does not exceed 1000 pixels. - **Paper:** https://www.cs.cmu.edu/~aharley/icdar15/ - **Data:** https://www.cs.cmu.edu/~aharley/rvl-cdip/ - **Motivation:** I'm currently adding LayoutLMv2 and LayoutXLM to HuggingFace Transformers. LayoutLM (v1) already exists in the library. This dataset has a large value for document image classification (i.e. classifying scanned documents). LayoutLM models obtain SOTA on this dataset, so would be great to directly use it in notebooks. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
961,568,287
https://api.github.com/repos/huggingface/datasets/issues/2761
https://github.com/huggingface/datasets/issues/2761
2,761
Error loading C4 realnewslike dataset
closed
4
2021-08-05T08:16:58
2021-08-08T19:44:34
2021-08-08T19:44:34
danshirron
[ "bug" ]
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15.3M/15.3M [00:00<00:00, 28.1MB/s]Traceback (most recent call last): File "run_mlm_tf.py", line 794, in <module> main() File "run_mlm_tf.py", line 425, in main raw_datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/load.py", line 843, in load_dataset builder_instance.download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 608, in download_and_prepare self._download_and_prepare( File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/builder.py", line 698, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/home/dshirron/.local/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 74, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='validation', num_bytes=38165657946, num_examples=13799838, dataset_name='c4'), 'recorded': SplitInfo(name='validation', num_bytes=37875873, num_examples=13863, dataset_name='c4')}] ## Environment info - `datasets` version: 1.10.2 - Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 4.0.1
false
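Until the split metadata for `c4`/`realnewslike` is corrected, the verification error in issue 2761 above can usually be bypassed by skipping the size checks. This is a workaround sketch (it does not fix the mismatched metadata), using the `ignore_verifications` flag available in the `datasets` versions current at the time:

```python
from datasets import load_dataset

# Skips the num_examples / num_bytes verification that raises
# NonMatchingSplitsSizesError; the data itself still downloads normally.
raw_datasets = load_dataset("c4", "realnewslike", ignore_verifications=True)
```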
961,372,667
https://api.github.com/repos/huggingface/datasets/issues/2760
https://github.com/huggingface/datasets/issues/2760
2,760
Add Nuswide dataset
open
0
2021-08-05T03:00:41
2021-12-08T12:06:23
null
shivangibithel
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** *NUSWIDE* - **Description:** *[A Real-World Web Image Dataset from National University of Singapore](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/NUS-WIDE.html)* - **Paper:** *[here](https://lms.comp.nus.edu.sg/wp-content/uploads/2019/research/nuswide/nuswide-civr2009.pdf)* - **Data:** *[here](https://github.com/wenting-zhao/nuswide)* - **Motivation:** *This dataset is a benchmark in the Text Retrieval task.* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
960,206,575
https://api.github.com/repos/huggingface/datasets/issues/2758
https://github.com/huggingface/datasets/pull/2758
2,758
Raise ManualDownloadError when loading a dataset that requires previous manual download
closed
0
2021-08-04T10:19:55
2021-08-04T11:36:30
2021-08-04T11:36:30
albertvillanova
[]
This PR implements the raising of a `ManualDownloadError` when loading a dataset that requires previous manual download, and this is missing. The `ManualDownloadError` is raised whether the dataset is loaded in normal or streaming mode. Close #2749. cc: @severo
true
959,984,081
https://api.github.com/repos/huggingface/datasets/issues/2757
https://github.com/huggingface/datasets/issues/2757
2,757
Unexpected type after `concatenate_datasets`
closed
2
2021-08-04T07:10:39
2021-08-04T16:01:24
2021-08-04T16:01:23
JulesBelveze
[ "bug" ]
## Describe the bug I am trying to concatenate two `Dataset` objects using `concatenate_datasets`, but it turns out that after concatenation the features are cast from `torch.Tensor` to `list`. This then leads to weird tensors when trying to convert the result to a `DataLoader`. However, if I use each `Dataset` separately, everything behaves as expected. ## Steps to reproduce the bug ```python >>> featurized_teacher Dataset({ features: ['t_labels', 't_input_ids', 't_token_type_ids', 't_attention_mask'], num_rows: 502 }) >>> for f in featurized_teacher.features: print(featurized_teacher[f].shape) torch.Size([502]) torch.Size([502, 300]) torch.Size([502, 300]) torch.Size([502, 300]) >>> featurized_student Dataset({ features: ['s_features', 's_labels'], num_rows: 502 }) >>> for f in featurized_student.features: print(featurized_student[f].shape) torch.Size([502, 64]) torch.Size([502]) ``` The shapes seem alright to me. The results after concatenation are as follows: ```python >>> concat_dataset = datasets.concatenate_datasets([featurized_student, featurized_teacher], axis=1) >>> type(concat_dataset["t_labels"]) <class 'list'> ``` One would expect to obtain the same type as before concatenation. Am I doing something wrong here? Any idea on how to fix this unexpected behavior? ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.5 - PyArrow version: 3.0.0
false
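The behaviour in issue 2757 above is consistent with the torch output format not being carried over by `concatenate_datasets`; re-applying the format on the result is a plausible workaround. A small self-contained sketch (toy columns stand in for the featurized datasets):

```python
from datasets import Dataset, concatenate_datasets

left = Dataset.from_dict({"s_labels": [0, 1]})
right = Dataset.from_dict({"t_labels": [1, 0]})
left.set_format(type="torch")
right.set_format(type="torch")

combined = concatenate_datasets([left, right], axis=1)
# The torch output format is not necessarily preserved on the result,
# so re-apply it before wrapping the dataset in a DataLoader.
combined.set_format(type="torch")
print(type(combined["t_labels"]))  # torch.Tensor rather than list
```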
959,255,646
https://api.github.com/repos/huggingface/datasets/issues/2756
https://github.com/huggingface/datasets/pull/2756
2,756
Fix metadata JSON for ubuntu_dialogs_corpus dataset
closed
0
2021-08-03T15:48:59
2021-08-04T09:43:25
2021-08-04T09:43:25
albertvillanova
[]
Related to #2743.
true
959,115,888
https://api.github.com/repos/huggingface/datasets/issues/2755
https://github.com/huggingface/datasets/pull/2755
2,755
Fix metadata JSON for turkish_movie_sentiment dataset
closed
0
2021-08-03T13:25:44
2021-08-04T09:06:54
2021-08-04T09:06:53
albertvillanova
[]
Related to #2743.
true
959,105,577
https://api.github.com/repos/huggingface/datasets/issues/2754
https://github.com/huggingface/datasets/pull/2754
2,754
Generate metadata JSON for telugu_books dataset
closed
0
2021-08-03T13:14:52
2021-08-04T08:49:02
2021-08-04T08:49:02
albertvillanova
[]
Related to #2743.
true
959,036,995
https://api.github.com/repos/huggingface/datasets/issues/2753
https://github.com/huggingface/datasets/pull/2753
2,753
Generate metadata JSON for reclor dataset
closed
0
2021-08-03T11:52:29
2021-08-04T08:07:15
2021-08-04T08:07:15
albertvillanova
[]
Related to #2743.
true
959,023,608
https://api.github.com/repos/huggingface/datasets/issues/2752
https://github.com/huggingface/datasets/pull/2752
2,752
Generate metadata JSON for lm1b dataset
closed
0
2021-08-03T11:34:56
2021-08-04T06:40:40
2021-08-04T06:40:39
albertvillanova
[]
Related to #2743.
true
959,021,262
https://api.github.com/repos/huggingface/datasets/issues/2751
https://github.com/huggingface/datasets/pull/2751
2,751
Update metadata for wikihow dataset
closed
0
2021-08-03T11:31:57
2021-08-03T15:52:09
2021-08-03T15:52:09
albertvillanova
[]
Update metadata for wikihow dataset: - Remove leading new line character in description and citation - Update metadata JSON - Remove no longer necessary `urls_checksums/checksums.txt` file Related to #2748.
true
958,984,730
https://api.github.com/repos/huggingface/datasets/issues/2750
https://github.com/huggingface/datasets/issues/2750
2,750
Second concatenation of datasets produces errors
closed
5
2021-08-03T10:47:04
2022-01-19T14:23:43
2022-01-19T14:19:05
Aktsvigun
[ "bug" ]
Hi, I am need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tags names) are collapsed. This hinders, for instance, the usage of tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets data = load_dataset('trec')['train'] concatenated = concatenate_datasets([data, data]) concatenated_2 = concatenate_datasets([concatenated, concatenated]) print('True features of features:', concatenated.features) print('\nProduced features of features:', concatenated_2.features) ``` outputs ``` True features of features: {'label-coarse': ClassLabel(num_classes=6, names=['DESC', 'ENTY', 'ABBR', 'HUM', 'NUM', 'LOC'], names_file=None, id=None), 'label-fine': ClassLabel(num_classes=47, names=['manner', 'cremat', 'animal', 'exp', 'ind', 'gr', 'title', 'def', 'date', 'reason', 'event', 'state', 'desc', 'count', 'other', 'letter', 'religion', 'food', 'country', 'color', 'termeq', 'city', 'body', 'dismed', 'mount', 'money', 'product', 'period', 'substance', 'sport', 'plant', 'techmeth', 'volsize', 'instru', 'abb', 'speed', 'word', 'lang', 'perc', 'code', 'dist', 'temp', 'symbol', 'ord', 'veh', 'weight', 'currency'], names_file=None, id=None), 'text': Value(dtype='string', id=None)} Produced features of features: {'label-coarse': Value(dtype='int64', id=None), 'label-fine': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)} ``` I am using `datasets` v.1.11.0
false
958,968,748
https://api.github.com/repos/huggingface/datasets/issues/2749
https://github.com/huggingface/datasets/issues/2749
2,749
Raise a proper exception when trying to stream a dataset that requires to manually download files
closed
2
2021-08-03T10:26:27
2021-08-09T08:53:35
2021-08-04T11:36:30
severo
[ "bug" ]
## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("reclor", streaming=True) ``` ## Expected results Ideally: raise a specific exception, something like `ManualDownloadError`. Or at least give the reason in the message, as when we load in normal mode: ```python from datasets import load_dataset dataset = load_dataset("reclor") ``` ``` AssertionError: The dataset reclor with config default requires manual data. Please follow the manual download instructions: to use ReClor you need to download it manually. Please go to its homepage (http://whyu.me/reclor/) fill the google form and you will receive a download link and a password to extract it.Please extract all files in one folder and use the path folder in datasets.load_dataset('reclor', data_dir='path/to/folder/folder_name') . Manual data can be loaded with `datasets.load_dataset(reclor, data_dir='<path/to/manual/data>') ``` ## Actual results ``` TypeError: expected str, bytes or os.PathLike object, not NoneType ``` ## Environment info - `datasets` version: 1.11.0 - Platform: macOS-11.5-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
false
958,889,041
https://api.github.com/repos/huggingface/datasets/issues/2748
https://github.com/huggingface/datasets/pull/2748
2,748
Generate metadata JSON for wikihow dataset
closed
0
2021-08-03T08:55:40
2021-08-03T10:17:51
2021-08-03T10:17:51
albertvillanova
[]
Related to #2743.
true
958,867,627
https://api.github.com/repos/huggingface/datasets/issues/2747
https://github.com/huggingface/datasets/pull/2747
2,747
add multi-proc in `to_json`
closed
17
2021-08-03T08:30:13
2021-10-19T18:24:21
2021-09-13T13:56:37
bhavitvyamalik
[]
Closes #2663. I've tried adding multiprocessing in `to_json`. Here's some benchmarking I did to compare the timings of current version (say v1) and multi-proc version (say v2). I did this with `cpu_count` 4 (2015 Macbook Air) 1. Dataset name: `ascent_kb` - 8.9M samples (all samples were used, reporting this for a single run) v1- ~225 seconds for converting whole dataset to json v2- ~200 seconds for converting whole dataset to json 2. Dataset name: `lama` - 1.3M samples (all samples were used, reporting this for 2 runs) v1- ~26 seconds for converting whole dataset to json v2- ~23.6 seconds for converting whole dataset to json I think it's safe to say that v2 is 10% faster as compared to v1. Timings may improve further with better configuration. The only bottleneck I feel is writing to file from the output list. If we can improve that aspect then timings may improve further. Let me know if any changes/improvements can be done in this @stas00, @lhoestq, @albertvillanova. @lhoestq even suggested to extend this work with other export methods as well like `csv` or `parquet`.
true
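Independently of PR 2747 above, a similar effect can be approximated from user code by sharding the dataset and writing each shard in its own process. A sketch (shard count and output paths are illustrative, and the dataset is the same small one used in the tests elsewhere in this section):

```python
from multiprocessing import Pool

from datasets import load_dataset

NUM_SHARDS = 4

def write_shard(index):
    # Each worker reloads from the cache, takes one contiguous shard, and writes it.
    ds = load_dataset("stas/openwebtext-10k", split="train")
    shard = ds.shard(num_shards=NUM_SHARDS, index=index, contiguous=True)
    shard.to_json(f"openwebtext-10k-{index:02d}.jsonl")
    return index

if __name__ == "__main__":
    with Pool(NUM_SHARDS) as pool:
        pool.map(write_shard, range(NUM_SHARDS))
```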
958,551,619
https://api.github.com/repos/huggingface/datasets/issues/2746
https://github.com/huggingface/datasets/issues/2746
2,746
Cannot load `few-nerd` dataset
closed
6
2021-08-02T22:18:57
2021-11-16T08:51:34
2021-08-03T19:45:43
Mehrad0711
[ "bug" ]
## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError). Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53... Traceback (most recent call last): File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split disable=bool(logging.get_verbosity() == logging.NOTSET), File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__ for obj in iterable: File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples with open(filepath, encoding="utf-8") as f: FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json' ``` The bug is probably in identifying and downloading the dataset. If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Python version: 3.8 - PyArrow version: 1.0.1
false
958,269,579
https://api.github.com/repos/huggingface/datasets/issues/2745
https://github.com/huggingface/datasets/pull/2745
2,745
added semeval18_emotion_classification dataset
closed
7
2021-08-02T15:39:55
2021-10-29T09:22:05
2021-09-21T09:48:35
maxpel
[]
I added the data set of SemEval 2018 Task 1 (Subtask 5) for emotion detection in three languages. ``` datasets-cli test datasets/semeval18_emotion_classification/ --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_semeval18_emotion_classification ``` Both commands ran successfully. I couldn't create the dummy data (the files are tsvs but have .txt ending, maybe that's the problem?) and therefore the test on the dummy data fails, maybe someone can help here. I also formatted the code: ``` black --line-length 119 --target-version py36 datasets/semeval18_emotion_classification/ isort datasets/semeval18_emotion_classification/ flake8 datasets/semeval18_emotion_classification/ ``` That's the publication for reference: Mohammad, S., Bravo-Marquez, F., Salameh, M., & Kiritchenko, S. (2018). SemEval-2018 task 1: Affect in tweets. Proceedings of the 12th International Workshop on Semantic Evaluation, 1–17. https://doi.org/10.18653/v1/S18-1001
true
958,146,637
https://api.github.com/repos/huggingface/datasets/issues/2744
https://github.com/huggingface/datasets/pull/2744
2,744
Fix key by recreating metadata JSON for journalists_questions dataset
closed
0
2021-08-02T13:27:53
2021-08-03T09:25:34
2021-08-03T09:25:33
albertvillanova
[]
Close #2743.
true
958,119,251
https://api.github.com/repos/huggingface/datasets/issues/2743
https://github.com/huggingface/datasets/issues/2743
2,743
Dataset JSON is incorrect
closed
2
2021-08-02T13:01:26
2021-08-03T10:06:57
2021-08-03T09:25:33
severo
[ "bug" ]
## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset_infos.json. The only config should be `plain_text`, but the first key in the JSON is `journalists_questions` (the dataset id) instead. ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Steps to reproduce the bug Look at the files. ## Expected results The first key should be `plain_text`: ```json { "plain_text": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ``` ## Actual results ```json { "journalists_questions": { "description": "The journalists_questions corpus (version 1.0) is a collection of 10K human-written Arabic\ntweets manually labeled for question identification over Arabic tweets posted by journalists.\n", ... ```
false
958,114,064
https://api.github.com/repos/huggingface/datasets/issues/2742
https://github.com/huggingface/datasets/issues/2742
2,742
Improve detection of streamable file types
closed
1
2021-08-02T12:55:09
2021-11-12T17:18:10
2021-11-12T17:18:10
severo
[ "enhancement", "dataset-viewer" ]
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownloadManager(base_path=builder.base_path)) ``` raises ``` NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is not implemented yet ``` But the file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U is a text file and it can be streamed: ```bash curl --header "Range: bytes=0-100" -L https://drive.google.com/uc\?export\=download\&id\=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U 506938088174940160 yes 1 302221719412830209 yes 1 289761704907268096 yes 1 513820885032378369 yes % ``` Yet, it's wrongly categorized as a file type that cannot be streamed because the test is currently based on 1. the presence of a file extension at the end of the URL (here: no extension), and 2. the inclusion of this extension in a list of supported formats. **Describe the solution you'd like** In the case of an URL (instead of a local path), ask for the MIME type, and decide on that value? Note that it would not work in that case, because the value of `content_type` is `text/html; charset=UTF-8`. **Describe alternatives you've considered** Add a variable in the dataset script to set the data format by hand.
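For illustration, a minimal sketch of the MIME-type check suggested above, assuming `requests` is available; as noted in the issue, it is not sufficient for this particular Google Drive link, which reports `text/html; charset=UTF-8`:

```python
import requests

def remote_content_type(url: str) -> str:
    # Read the Content-Type header via a HEAD request instead of relying on a
    # file extension at the end of the URL.
    response = requests.head(url, allow_redirects=True, timeout=10)
    response.raise_for_status()
    return response.headers.get("Content-Type", "")

# Example (hypothetical helper, not part of datasets):
# print(remote_content_type("https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_U9U"))
```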
false
957,979,559
https://api.github.com/repos/huggingface/datasets/issues/2741
https://github.com/huggingface/datasets/issues/2741
2,741
Add Hypersim dataset
open
0
2021-08-02T10:06:50
2021-12-08T12:06:51
null
osanseviero
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** Hypersim - **Description:** photorealistic synthetic dataset for holistic indoor scene understanding - **Paper:** *link to the dataset paper if available* - **Data:** https://github.com/apple/ml-hypersim Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
957,911,035
https://api.github.com/repos/huggingface/datasets/issues/2740
https://github.com/huggingface/datasets/pull/2740
2,740
Update release instructions
closed
0
2021-08-02T08:46:00
2021-08-02T14:39:56
2021-08-02T14:39:56
albertvillanova
[]
Update release instructions.
true
957,751,260
https://api.github.com/repos/huggingface/datasets/issues/2739
https://github.com/huggingface/datasets/pull/2739
2,739
Pass tokenize to sacrebleu only if explicitly passed by user
closed
0
2021-08-02T05:09:05
2021-08-03T04:23:37
2021-08-03T04:23:37
albertvillanova
[]
Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user, otherwise it will not pass it (and `sacrebleu` will use its default, no matter where it is and how it is called). Close: #2737.
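A rough sketch of the pattern described above (not the actual metric code; the wrapper name and signature are illustrative):

```python
import sacrebleu

def compute_sacrebleu(predictions, references, tokenize=None, **extra):
    # Forward `tokenize` only when the caller set it explicitly, so sacrebleu
    # applies its own default tokenizer regardless of the installed version.
    kwargs = dict(extra)
    if tokenize is not None:
        kwargs["tokenize"] = tokenize
    return sacrebleu.corpus_bleu(predictions, references, **kwargs)

# Usage: references is a list of reference streams, one stream per reference set.
# score = compute_sacrebleu(["hello there"], [["hello there"]])
```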
true
957,517,746
https://api.github.com/repos/huggingface/datasets/issues/2738
https://github.com/huggingface/datasets/pull/2738
2,738
Sunbird AI Ugandan low resource language dataset
closed
4
2021-08-01T15:18:00
2022-10-03T09:37:30
2022-10-03T09:37:30
ak3ra
[ "dataset contribution" ]
Multi-way parallel text corpus of 5 key Ugandan languages for the task of machine translation.
true
957,124,881
https://api.github.com/repos/huggingface/datasets/issues/2737
https://github.com/huggingface/datasets/issues/2737
2,737
SacreBLEU update
closed
5
2021-07-30T23:53:08
2021-09-22T10:47:41
2021-08-03T04:23:37
devrimcavusoglu
[ "bug" ]
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises an error: AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'. This happens because the new version of sacrebleu no longer provides `DEFAULT_TOKENIZER`, but sacrebleu.py still tries to import it. This can currently be worked around by pinning `sacrebleu==1.5.0`. ## Steps to reproduce the bug ```python sacrebleu = datasets.load_metric('sacrebleu') predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"] references = ["It is a guide to action that ensures that the military will forever heed Party commands"] results = sacrebleu.compute(predictions=predictions, references=references) print(results) ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.11.0 - Platform: Windows-10-10.0.19041-SP0 - Python version: Python 3.8.0 - PyArrow version: 5.0.0
false
956,895,199
https://api.github.com/repos/huggingface/datasets/issues/2736
https://github.com/huggingface/datasets/issues/2736
2,736
Add Microsoft Building Footprints dataset
open
1
2021-07-30T16:17:08
2021-12-08T12:09:03
null
albertvillanova
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.microsoft.com/en-us/maps/building-footprints - **Motivation:** this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @sashavor
false
956,889,365
https://api.github.com/repos/huggingface/datasets/issues/2735
https://github.com/huggingface/datasets/issues/2735
2,735
Add Open Buildings dataset
open
0
2021-07-30T16:08:39
2021-07-31T05:01:25
null
albertvillanova
[ "dataset request" ]
## Adding a Dataset - **Name:** Open Buildings - **Description:** A dataset of building footprints to support social good applications. Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science. This large-scale open dataset contains the outlines of buildings derived from high-resolution satellite imagery in order to support these types of uses. The project being based in Ghana, the current focus is on the continent of Africa. See: "Mapping Africa's Buildings with Satellite Imagery" https://ai.googleblog.com/2021/07/mapping-africas-buildings-with.html - **Paper:** https://arxiv.org/abs/2107.12283 - **Data:** https://sites.research.google/open-buildings/ - **Motivation:** *what are some good reasons to have this dataset* Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Reported by: @osanseviero
false
956,844,874
https://api.github.com/repos/huggingface/datasets/issues/2734
https://github.com/huggingface/datasets/pull/2734
2,734
Update BibTeX entry
closed
0
2021-07-30T15:22:51
2021-07-30T15:47:58
2021-07-30T15:47:58
albertvillanova
[]
Update BibTeX entry.
true
956,725,476
https://api.github.com/repos/huggingface/datasets/issues/2733
https://github.com/huggingface/datasets/pull/2733
2,733
Add missing parquet known extension
closed
0
2021-07-30T13:01:20
2021-07-30T13:24:31
2021-07-30T13:24:30
lhoestq
[]
This code was failing because the parquet extension wasn't recognized: ```python from datasets import load_dataset base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/" data_files = {"train": base_url + "wikipedia-train.parquet"} wiki = load_dataset("parquet", data_files=data_files, split="train", streaming=True) ``` It raises ```python NotImplementedError: Extraction protocol for file at https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/wikipedia-train.parquet is not implemented yet ``` I added `parquet` to the list of known extensions EDIT: added pickle, conllu, xml extensions as well
true
956,676,360
https://api.github.com/repos/huggingface/datasets/issues/2732
https://github.com/huggingface/datasets/pull/2732
2,732
Updated TTC4900 Dataset
closed
2
2021-07-30T11:52:14
2021-07-30T16:00:51
2021-07-30T15:58:14
yavuzKomecoglu
[]
- The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download. - Updated readme.
true
956,087,452
https://api.github.com/repos/huggingface/datasets/issues/2731
https://github.com/huggingface/datasets/pull/2731
2,731
Adding to_tf_dataset method
closed
7
2021-07-29T18:10:25
2021-09-16T13:50:54
2021-09-16T13:50:54
Rocketknight1
[]
Oh my **god** do not merge this yet, it's just a draft. I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the whole dataset in memory!), correct shuffling, variable-length batches to reduce compute, and correct support for unusual padding. It achieves that by calling the tokenizer `pad` method in the middle of a TF compute graph via a very hacky call to `tf.py_function`, which is heretical but seems to work. A number of issues need to be resolved before it's ready to merge, though: 1) Is a MixIn the right way to do this? Do other classes besides `arrow_dataset.Dataset` need this method too? 2) Needs an argument to support constant-length batches for TPU training - this is easy to add and I'll do it soon. 3) Needs the user to supply the list of columns to drop from the arrow `Dataset`. Is there some automatic way to get the columns we want, or see which columns were added by the tokenizer? 4) Assumes the label column is always present and always called "label" - this is probably not great, but I'm not sure what the 'correct' thing to do here is.
true
955,987,834
https://api.github.com/repos/huggingface/datasets/issues/2730
https://github.com/huggingface/datasets/issues/2730
2,730
Update CommonVoice with new release
open
3
2021-07-29T15:59:59
2021-08-07T16:19:19
null
yjernite
[ "dataset request" ]
## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8x, from 24 to 220). - **Paper:** https://discourse.mozilla.org/t/common-voice-2021-mid-year-dataset-release/83812 - **Data:** https://commonvoice.mozilla.org/en/datasets - **Motivation:** More data and more varied. I think we just need to add configs in the existing dataset script. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
false
955,920,489
https://api.github.com/repos/huggingface/datasets/issues/2729
https://github.com/huggingface/datasets/pull/2729
2,729
Fix IndexError while loading Arabic Billion Words dataset
closed
0
2021-07-29T14:47:02
2021-07-30T13:03:55
2021-07-30T13:03:55
albertvillanova
[ "bug" ]
Catch `IndexError` and ignore that record. Close #2727.
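A self-contained sketch of the approach (the tag-extraction helper below is a simplified stand-in, not the real dataset script):

```python
import re

def extract_tag(sample: str, tag: str) -> str:
    # Simplified stand-in: raises IndexError when the tag is absent,
    # just like the original tag-extraction helper.
    return re.findall(rf"<{tag}>(.*?)</{tag}>", sample)[0]

def generate_examples(samples):
    for idx, sample in enumerate(samples):
        try:
            title = extract_tag(sample, "Title")
        except IndexError:
            continue  # ignore malformed records instead of crashing
        yield idx, {"title": title}

print(list(generate_examples(["<Title>ok</Title>", "no tags here"])))
# [(0, {'title': 'ok'})]
```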
true
955,892,970
https://api.github.com/repos/huggingface/datasets/issues/2728
https://github.com/huggingface/datasets/issues/2728
2,728
Concurrent use of same dataset (already downloaded)
open
4
2021-07-29T14:18:38
2021-08-02T07:25:57
null
PierreColombo
[ "bug" ]
## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" "bert-large-cased" "roberta-large" "albert-base-v1" "albert-large-v1"; do for TASK_NAME in "mrpc" "rte" 'imdb' "paws" "mnli"; do export OUTPUT_DIR=${MODEL}_${TASK_NAME} sbatch --job-name=${OUTPUT_DIR} \ --gres=gpu:1 \ --no-requeue \ --cpus-per-task=10 \ --hint=nomultithread \ --time=1:00:00 \ --output=jobinfo/${OUTPUT_DIR}_%j.out \ --error=jobinfo/${OUTPUT_DIR}_%j.err \ --qos=qos_gpu-t4 \ --wrap="module purge; module load pytorch-gpu/py3/1.7.0 ; export HF_DATASETS_OFFLINE=1; export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets; python compute_measures.py --seed=$SEED --saving_path=results --batch_size=$BATCH_SIZE --task_name=$TASK_NAME --model_name=/gpfswork/rech/toto/transformers_models/$MODEL" done done ```python # Sample code to reproduce the bug dataset_train = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_train = dataset_train.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter))) dataset_val = load_dataset('imdb', split='train', download_mode="reuse_cache_if_exists") dataset_val = dataset_val.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True).select(list(range(args.filter, args.filter + 5000))) dataset_test = load_dataset('imdb', split='test', download_mode="reuse_cache_if_exists") dataset_test = dataset_test.map(lambda e: tokenizer(e['text'], truncation=True, padding='max_length'), batched=True) ``` ## Expected results I believe I am doing something wrong with the objects. ## Actual results Traceback (most recent call last): File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 983, in _prepare_split check_duplicates=True, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/arrow_writer.py", line 192, in __init__ self.stream = pa.OSFile(self._path, "wb") File "pyarrow/io.pxi", line 829, in pyarrow.lib.OSFile.__cinit__ File "pyarrow/io.pxi", line 844, in pyarrow.lib.OSFile._open_writable File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 97, in pyarrow.lib.check_status FileNotFoundError: [Errno 2] Failed to open local file '/gpfswork/rech/tts/unm25jp/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. 
Detail: [errno 2] No such file or directory During handling of the above exception, another exception occurred: Traceback (most recent call last): File "compute_measures.py", line 181, in <module> train_loader, val_loader, test_loader = get_dataloader(args) File "/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py", line 69, in get_dataloader dataset_train = load_dataset('paws', "labeled_final", split='train', download_mode="reuse_cache_if_exists") File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset use_auth_token=use_auth_token, File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.0/lib/python3.7/site-packages/datasets/builder.py", line 658, in _download_and_prepare + str(e) OSError: Cannot find data file. Original error: [Errno 2] Failed to open local file '/gpfswork/rech/toto/datasets/paws/labeled_final/1.1.0/09d8fae989bb569009a8f5b879ccf2924d3e5cd55bfe2e89e6dab1c0b50ecd34.incomplete/paws-test.arrow'. Detail: [errno 2] No such file or directory ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: datasets==1.8.0 - Platform: linux (jeanzay) - Python version: pyarrow==2.0.0 - PyArrow version: 3.7.8
false
955,812,149
https://api.github.com/repos/huggingface/datasets/issues/2727
https://github.com/huggingface/datasets/issues/2727
2,727
Error in loading the Arabic Billion Words Corpus
closed
2
2021-07-29T12:53:09
2021-07-30T13:03:55
2021-07-30T13:03:55
M-Salti
[ "bug" ]
## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results The datasets load successfully. ## Actual results ```python _extract_tags(self, sample, tag) 139 if len(out) > 0: 140 break --> 141 return out[0] 142 143 def _clean_text(self, text): IndexError: list index out of range ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.2 - Platform: Ubuntu 18.04.5 LTS - Python version: 3.7.11 - PyArrow version: 3.0.0
false
955,674,388
https://api.github.com/repos/huggingface/datasets/issues/2726
https://github.com/huggingface/datasets/pull/2726
2,726
Typo fix `tokenize_exemple`
closed
0
2021-07-29T10:03:37
2021-07-29T12:00:25
2021-07-29T12:00:25
shabie
[]
There is a small typo in the main README.md
true
955,020,776
https://api.github.com/repos/huggingface/datasets/issues/2725
https://github.com/huggingface/datasets/pull/2725
2,725
Pass use_auth_token to request_etags
closed
0
2021-07-28T16:13:29
2021-07-28T16:38:02
2021-07-28T16:38:02
albertvillanova
[]
Fix #2724.
true
954,919,607
https://api.github.com/repos/huggingface/datasets/issues/2724
https://github.com/huggingface/datasets/issues/2724
2,724
404 Error when loading remote data files from private repo
closed
3
2021-07-28T14:24:23
2021-07-29T04:58:49
2021-07-28T16:38:01
albertvillanova
[ "bug" ]
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/datasets/lewtun/asr-preds-test/resolve/main/preds.jsonl ``` ## Expected results Load dataset. ## Actual results 404 Error.
false
954,864,104
https://api.github.com/repos/huggingface/datasets/issues/2723
https://github.com/huggingface/datasets/pull/2723
2,723
Fix en subset by modifying dataset_info with correct validation infos
closed
0
2021-07-28T13:36:19
2021-07-28T15:22:23
2021-07-28T15:22:23
thomasw21
[]
- Related to: #2682 We correct the values of the `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`). Instead of having: `{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}` we replace it with the correct values: `{"name": "validation", "num_bytes": 825767266, "num_examples": 364608, "dataset_name": "c4"}` There are still issues with validation for other subsets, but I can't download all the files and unzip them to check for the correct number of bytes. (If you have a fast way to obtain those values for other subsets, I can do this in this PR ... otherwise I can't spend those resources)
true
954,446,053
https://api.github.com/repos/huggingface/datasets/issues/2722
https://github.com/huggingface/datasets/issues/2722
2,722
Missing cache file
closed
2
2021-07-28T03:52:07
2022-03-21T08:27:51
2022-03-21T08:27:51
PosoSAgapo
[ "bug" ]
The cache file is strangely missing after I restart my program. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json'`
false
954,238,230
https://api.github.com/repos/huggingface/datasets/issues/2721
https://github.com/huggingface/datasets/pull/2721
2,721
Deal with the bad check in test_load.py
closed
1
2021-07-27T20:23:23
2021-07-28T09:58:34
2021-07-28T08:53:18
mariosasko
[]
This PR removes a check that's been added in #2684. My intention with this check was to capture an URL in the error message, but instead, it captures a substring of the previous regex match in the test function. Another option would be to replace this check with: ```python m_paths = re.findall(r"\S*_dummy/_dummy.py\b", str(exc_info.value)) # on Linux this will match an URL as well as a local_path due to different os.sep, so take the last element (an URL always comes last in the list) assert len(m_paths) > 0 and is_remote_url(m_paths[-1]) # is_remote_url comes from datasets.utils.file_utils ``` @lhoestq Let me know which one of these two approaches (delete or replace) do you prefer?
true
954,024,426
https://api.github.com/repos/huggingface/datasets/issues/2720
https://github.com/huggingface/datasets/pull/2720
2,720
fix: 🐛 fix two typos
closed
0
2021-07-27T15:50:17
2021-07-27T18:38:17
2021-07-27T18:38:16
severo
[]
true
953,932,416
https://api.github.com/repos/huggingface/datasets/issues/2719
https://github.com/huggingface/datasets/issues/2719
2,719
Use ETag in streaming mode to detect resource updates
open
0
2021-07-27T14:17:09
2021-10-22T09:36:08
null
severo
[ "enhancement", "dataset-viewer" ]
**Is your feature request related to a problem? Please describe.** I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I've currently no way to know if the remote data has been updated or not, thus I don't know when to invalidate my cache. **Describe the solution you'd like** Take the ETag of the data files into account and provide it (directly or through a hash) to give a signal that I can invalidate my cache. **Describe alternatives you've considered** None
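A minimal sketch of the kind of signal requested, assuming the data file is served over HTTP and exposes an ETag header (not all hosts do):

```python
from typing import Optional
import requests

def remote_etag(url: str) -> Optional[str]:
    # A HEAD request is enough to read the ETag without downloading the file.
    response = requests.head(url, allow_redirects=True, timeout=10)
    response.raise_for_status()
    return response.headers.get("ETag")

# Invalidate a local cache entry when the remote resource changed:
# if remote_etag(url) != cached_etag:
#     recompute_and_cache(url)
```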
false
953,360,663
https://api.github.com/repos/huggingface/datasets/issues/2718
https://github.com/huggingface/datasets/pull/2718
2,718
New documentation structure
closed
5
2021-07-26T23:15:13
2021-09-13T17:20:53
2021-09-13T17:20:52
stevhliu
[]
Organize Datasets documentation into four documentation types to improve clarity and discoverability of content. **Content to add in the very short term (feel free to add anything I'm missing):** - A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also be great to talk about Datasets speed and performance here, and if you can share any benchmarking/tests you did, that would be awesome! Finally, a discussion about how memory-mapping frees the user from RAM constraints would be very helpful. - Explain why you would want to disable or override verifications when loading a dataset. - If possible, include a code sample of when the number of elements in the field of an output dictionary aren’t the same as the other fields in the output dictionary (taken from the [note](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset) here).
true
952,979,976
https://api.github.com/repos/huggingface/datasets/issues/2717
https://github.com/huggingface/datasets/pull/2717
2,717
Fix shuffle on IterableDataset that disables batching in case any functions were mapped
closed
0
2021-07-26T14:42:22
2021-07-26T18:04:14
2021-07-26T16:30:06
amankhandelia
[]
Made a very minor change to fix issue #2716: added the missing argument in the constructor call. As discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of the `batched` attribute in `MappedExamplesIterable`. Fix #2716.
true
952,902,778
https://api.github.com/repos/huggingface/datasets/issues/2716
https://github.com/huggingface/datasets/issues/2716
2,716
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
closed
3
2021-07-26T13:24:59
2021-07-26T18:04:43
2021-07-26T18:04:43
amankhandelia
[ "bug" ]
When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead `batched` will be set to `False`. I did an RCA on the datasets codebase; the problem comes from [this line of code](https://github.com/huggingface/datasets/blob/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729/src/datasets/iterable_dataset.py#L197), as it is `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`. As one can see, it is missing the `batched` argument, which means that the iterator falls back to the default constructor value, which in this case is `False`. To remedy the problem we can change this line to `self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`
false
952,845,229
https://api.github.com/repos/huggingface/datasets/issues/2715
https://github.com/huggingface/datasets/pull/2715
2,715
Update PAN-X data URL in XTREME dataset
closed
1
2021-07-26T12:21:17
2021-07-26T13:27:59
2021-07-26T13:27:59
albertvillanova
[]
Related to #2710, #2691.
true
952,580,820
https://api.github.com/repos/huggingface/datasets/issues/2714
https://github.com/huggingface/datasets/issues/2714
2,714
add more precise information for size
open
1
2021-07-26T07:11:03
2021-07-26T09:16:25
null
pennyl67
[ "enhancement" ]
For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.
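As an alternative to a regex, the exact byte counts are already recorded in each dataset's `dataset_infos.json`; a rough sketch, with field names assumed from those JSON files and an illustrative path/config:

```python
import json

def dataset_num_bytes(dataset_infos_path: str, config_name: str) -> int:
    # Sum the per-split sizes recorded in the metadata JSON shipped with the
    # dataset script; this gives the size in bytes rather than a category.
    with open(dataset_infos_path, encoding="utf-8") as f:
        infos = json.load(f)
    splits = infos[config_name]["splits"]
    return sum(split["num_bytes"] for split in splits.values())

# e.g. dataset_num_bytes("datasets/some_dataset/dataset_infos.json", "default")
```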
false
952,515,256
https://api.github.com/repos/huggingface/datasets/issues/2713
https://github.com/huggingface/datasets/pull/2713
2,713
Enumerate all ner_tags values in WNUT 17 dataset
closed
0
2021-07-26T05:22:16
2021-07-26T09:30:55
2021-07-26T09:30:55
albertvillanova
[]
This PR does: - Enumerate all ner_tags in dataset card Data Fields section - Add all metadata tags to dataset card Close #2709.
true
951,723,326
https://api.github.com/repos/huggingface/datasets/issues/2710
https://github.com/huggingface/datasets/pull/2710
2,710
Update WikiANN data URL
closed
1
2021-07-23T16:29:21
2021-07-26T09:34:23
2021-07-26T09:34:23
albertvillanova
[]
WikiANN data source URL is no longer accessible: 404 error from Dropbox. We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card. Close #2691.
true
951,534,757
https://api.github.com/repos/huggingface/datasets/issues/2709
https://github.com/huggingface/datasets/issues/2709
2,709
Missing documentation for wnut_17 (ner_tags)
closed
1
2021-07-23T12:25:32
2021-07-26T09:30:55
2021-07-26T09:30:55
maxpel
[ "bug" ]
On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).` I trained a model with the data and it gives me 13 classes: ``` "id2label": { "0": 0, "1": 1, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9, "10": 10, "11": 11, "12": 12 } "label2id": { "0": 0, "1": 1, "10": 10, "11": 11, "12": 12, "2": 2, "3": 3, "4": 4, "5": 5, "6": 6, "7": 7, "8": 8, "9": 9 } ``` The paper (https://www.aclweb.org/anthology/W17-4418.pdf) explains those 6 categories, but the ordering does not match: ``` 1. person 2. location (including GPE, facility) 3. corporation 4. product (tangible goods, or well-defined services) 5. creative-work (song, movie, book and so on) 6. group (subsuming music band, sports team, and non-corporate organisations) ``` It would be very helpful for me if somebody could clarify the model outputs and explain the "B-" and "I-" prefixes to me. Really great work with that and the other packages, I couldn't believe that training the model with that data was basically a one-liner!
false
951,092,660
https://api.github.com/repos/huggingface/datasets/issues/2708
https://github.com/huggingface/datasets/issues/2708
2,708
QASC: incomplete training set
closed
2
2021-07-22T21:59:44
2021-07-23T13:30:07
2021-07-23T13:30:07
danyaljj
[ "bug" ]
## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instances)}") for x in instances: print(json.dumps(x)) load_instances('test') load_instances('validation') load_instances('train') ``` ## results For test and validation, we can see the examples in the output (which is good!): ``` split: test - size: 920 {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Anthax", "under water", "uterus", "wombs", "two", "moles", "live", "embryo"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What type of birth do therian mammals have? (A) Anthax (B) under water (C) uterus (D) wombs (E) two (F) moles (G) live (H) embryo", "id": "3C44YUNSI1OBFBB8D36GODNOZN9DPA", "question": "What type of birth do therian mammals have?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Corvidae", "arthropods", "birds", "backbones", "keratin", "Jurassic", "front paws", "Parakeets."]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "By what time had mouse-sized viviparous mammals evolved? (A) Corvidae (B) arthropods (C) birds (D) backbones (E) keratin (F) Jurassic (G) front paws (H) Parakeets.", "id": "3B1NLC6UGZVERVLZFT7OUYQLD1SGPZ", "question": "By what time had mouse-sized viviparous mammals evolved?"} {"answerKey": "", "choices": {"label": ["A", "B", "C", "D", "E", "F", "G", "H"], "text": ["Reduced friction", "causes infection", "vital to a good life", "prevents water loss", "camouflage from consumers", "Protection against predators", "spur the growth of the plant", "a smooth surface"]}, "combinedfact": "", "fact1": "", "fact2": "", "formatted_question": "What does a plant's skin do? (A) Reduced friction (B) causes infection (C) vital to a good life (D) prevents water loss (E) camouflage from consumers (F) Protection against predators (G) spur the growth of the plant (H) a smooth surface", "id": "3QRYMNZ7FYGITFVSJET3PS0F4S0NT9", "question": "What does a plant's skin do?"} ... ``` However, only a few instances are loaded for the training split, which is not correct. ## Environment info - `datasets` version: '1.10.2' - Platform: MaxOS - Python version:3.7 - PyArrow version: 3.0.0
false
950,812,945
https://api.github.com/repos/huggingface/datasets/issues/2707
https://github.com/huggingface/datasets/issues/2707
2,707
404 Not Found Error when loading LAMA dataset
closed
3
2021-07-22T15:52:33
2021-07-26T14:29:07
2021-07-26T14:29:07
dwil2444
[]
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/lama/lama.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/lama/lama.py`
false
950,606,561
https://api.github.com/repos/huggingface/datasets/issues/2706
https://github.com/huggingface/datasets/pull/2706
2,706
Update BibTeX entry
closed
0
2021-07-22T12:29:29
2021-07-22T12:43:00
2021-07-22T12:43:00
albertvillanova
[]
Update BibTeX entry.
true
950,488,583
https://api.github.com/repos/huggingface/datasets/issues/2705
https://github.com/huggingface/datasets/issues/2705
2,705
404 not found error on loading WIKIANN dataset
closed
1
2021-07-22T09:55:50
2021-07-23T08:07:32
2021-07-23T08:07:32
ronbutan
[ "bug" ]
## Describe the bug Unable to retrieve the wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results Colab notebook should display successful download status ## Actual results FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.1 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyArrow version: 3.0.0
false
950,483,980
https://api.github.com/repos/huggingface/datasets/issues/2704
https://github.com/huggingface/datasets/pull/2704
2,704
Fix pick default config name message
closed
0
2021-07-22T09:49:43
2021-07-22T10:02:41
2021-07-22T10:02:40
lhoestq
[]
The error message telling the user which config name to load is not displayed. This is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. It appears after this change: https://github.com/huggingface/datasets/pull/2659 I fixed that by making the config kwargs empty by default, even if default parameters are passed. Fix https://github.com/huggingface/datasets/issues/2703
true
950,482,284
https://api.github.com/repos/huggingface/datasets/issues/2703
https://github.com/huggingface/datasets/issues/2703
2,703
Bad message when config name is missing
closed
0
2021-07-22T09:47:23
2021-07-22T10:02:40
2021-07-22T10:02:40
lhoestq
[]
When loading a dataset that has several configurations, we expect to see an error message if the user doesn't specify a config name. However, in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message: ```python import datasets datasets.load_dataset("glue") ``` raises ```python AttributeError: 'BuilderConfig' object has no attribute 'text_features' ``` instead of ```python ValueError: Config name is missing. Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax'] Example of usage: `load_dataset('glue', 'cola')` ```
false
950,448,159
https://api.github.com/repos/huggingface/datasets/issues/2702
https://github.com/huggingface/datasets/pull/2702
2,702
Update BibTeX entry
closed
0
2021-07-22T09:04:39
2021-07-22T09:17:39
2021-07-22T09:17:38
albertvillanova
[]
Update BibTeX entry.
true
950,422,403
https://api.github.com/repos/huggingface/datasets/issues/2701
https://github.com/huggingface/datasets/pull/2701
2,701
Fix download_mode docstrings
closed
0
2021-07-22T08:30:25
2021-07-22T09:33:31
2021-07-22T09:33:31
albertvillanova
[ "documentation" ]
Fix `download_mode` docstrings.
true
950,276,325
https://api.github.com/repos/huggingface/datasets/issues/2700
https://github.com/huggingface/datasets/issues/2700
2,700
from datasets import Dataset is failing
closed
1
2021-07-22T03:51:23
2021-07-22T07:23:45
2021-07-22T07:09:07
kswamy15
[ "bug" ]
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or traceback. /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' --------------------------------------------------------------------------- NOTE: If your import is failing due to a missing package, you can manually install dependencies using either !pip or !apt. To view examples of installing some common dependencies, click the "Open Examples" button below. --------------------------------------------------------------------------- ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: latest version as of 07/21/2021 - Platform: Google Colab - Python version: 3.7 - PyArrow version:
false
950,221,226
https://api.github.com/repos/huggingface/datasets/issues/2699
https://github.com/huggingface/datasets/issues/2699
2,699
cannot combine splits merging and streaming?
open
5
2021-07-22T01:13:25
2024-04-08T13:26:46
null
eyaler
[ "bug" ]
This does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` It fails with the error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` These work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation')` `dataset = datasets.load_dataset('mc4','iw',split='train',streaming=True)` `dataset = datasets.load_dataset('mc4','iw',split='validation',streaming=True)` I could not find a reference to this in the documentation, and the error message is confusing. It would also be nice to allow streaming for the merged splits.
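As a stop-gap until split arithmetic is supported in streaming mode, one can stream each split separately and chain the iterators by hand (the result is a plain Python iterator, not an `IterableDataset`); a sketch:

```python
import itertools
import datasets

train = datasets.load_dataset('mc4', 'iw', split='train', streaming=True)
validation = datasets.load_dataset('mc4', 'iw', split='validation', streaming=True)

# Plain chained iterator over both splits; the shuffling/mapping utilities of
# IterableDataset are not available on it.
merged = itertools.chain(train, validation)
for example in itertools.islice(merged, 3):
    print(example)
```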
false
950,159,867
https://api.github.com/repos/huggingface/datasets/issues/2698
https://github.com/huggingface/datasets/pull/2698
2,698
Ignore empty batch when writing
closed
0
2021-07-21T22:35:30
2021-07-26T14:56:03
2021-07-26T13:25:26
pcuenca
[]
This prevents a schema update with unknown column types, as reported in #2644. This is my first attempt at fixing the issue. I tested the following: - The first batch returned by a batched map operation is empty. - An intermediate batch is empty. - `python -m unittest tests.test_arrow_writer` passes. However, `arrow_writer` looks like a pretty generic interface, so I'm not sure if there are other uses I may have overlooked. Let me know if that's the case, or if a better approach would be preferable.
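To illustrate why an empty batch is problematic for schema inference, a standalone pyarrow sketch (not the `arrow_writer` code itself):

```python
from typing import Optional
import pyarrow as pa

def batch_to_table(batch: dict) -> Optional[pa.Table]:
    # A batch with zero rows carries no type information: pyarrow infers every
    # column as null, which would later clash with the real schema.
    if all(len(column) == 0 for column in batch.values()):
        return None  # skip it instead of writing a bogus schema
    return pa.Table.from_pydict(batch)

print(batch_to_table({"text": []}))         # None -> batch is ignored
print(batch_to_table({"text": ["hello"]}))  # 1-row table with a string column
```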
true
950,021,623
https://api.github.com/repos/huggingface/datasets/issues/2697
https://github.com/huggingface/datasets/pull/2697
2,697
Fix import on Colab
closed
1
2021-07-21T19:03:38
2021-07-22T07:09:08
2021-07-22T07:09:07
nateraw
[]
Fix #2695, fix #2700.
true
949,901,726
https://api.github.com/repos/huggingface/datasets/issues/2696
https://github.com/huggingface/datasets/pull/2696
2,696
Add support for disable_progress_bar on Windows
closed
1
2021-07-21T16:34:53
2021-07-26T13:31:14
2021-07-26T09:38:37
mariosasko
[]
This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. This [answer](https://stackoverflow.com/a/6596695/14095927) on SO explains nicely why the current approach (calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would not work on Windows.
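A small standalone illustration of the general pattern (passing the flag along with the task arguments rather than reading module-level state inside the worker), since spawned processes on Windows do not inherit the parent's globals; this is a sketch, not the actual `datasets` code:

```python
from multiprocessing import Pool

def process_shard(shard_id: int, progress_enabled: bool) -> str:
    # The flag arrives via the task arguments, so it survives process spawning
    # on Windows, unlike a module-level global mutated only in the parent.
    return f"shard {shard_id}: progress={'on' if progress_enabled else 'off'}"

if __name__ == "__main__":
    progress_enabled = False  # e.g. the parent called utils.disable_progress_bar()
    with Pool(2) as pool:
        print(pool.starmap(process_shard, [(i, progress_enabled) for i in range(4)]))
```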
true
949,864,823
https://api.github.com/repos/huggingface/datasets/issues/2695
https://github.com/huggingface/datasets/issues/2695
2,695
Cannot import load_dataset on Colab
closed
5
2021-07-21T15:52:51
2021-07-22T07:26:25
2021-07-22T07:09:07
bayartsogt-ya
[ "bug" ]
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install datasets from datasets import load_dataset ``` ## Expected results Works without error ## Actual results Specify the actual results or traceback. ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-8cc7de4c69eb> in <module>() ----> 1 from datasets import load_dataset, load_metric, Metric, MetricInfo, Features, Value 2 from sklearn.metrics import mean_squared_error /usr/local/lib/python3.7/dist-packages/datasets/__init__.py in <module>() 31 ) 32 ---> 33 from .arrow_dataset import Dataset, concatenate_datasets 34 from .arrow_reader import ArrowReader, ReadInstruction 35 from .arrow_writer import ArrowWriter /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in <module>() 40 from tqdm.auto import tqdm 41 ---> 42 from datasets.tasks.text_classification import TextClassification 43 44 from . import config, utils /usr/local/lib/python3.7/dist-packages/datasets/tasks/__init__.py in <module>() 1 from typing import Optional 2 ----> 3 from ..utils.logging import get_logger 4 from .automatic_speech_recognition import AutomaticSpeechRecognition 5 from .base import TaskTemplate /usr/local/lib/python3.7/dist-packages/datasets/utils/__init__.py in <module>() 19 20 from . import logging ---> 21 from .download_manager import DownloadManager, GenerateMode 22 from .file_utils import DownloadConfig, cached_path, hf_bucket_url, is_remote_url, temp_seed 23 from .mock_download_manager import MockDownloadManager /usr/local/lib/python3.7/dist-packages/datasets/utils/download_manager.py in <module>() 24 25 from .. import config ---> 26 from .file_utils import ( 27 DownloadConfig, 28 cached_path, /usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in <module>() 25 import posixpath 26 import requests ---> 27 from tqdm.contrib.concurrent import thread_map 28 29 from .. import __version__, config, utils ModuleNotFoundError: No module named 'tqdm.contrib.concurrent' ``` ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 1.10.0 - Platform: Colab - Python version: 3.7.11 - PyArrow version: 3.0.0
false
949,844,722
https://api.github.com/repos/huggingface/datasets/issues/2694
https://github.com/huggingface/datasets/pull/2694
2,694
fix: 🐛 change string format to allow copy/paste to work in bash
closed
0
2021-07-21T15:30:40
2021-07-22T10:41:47
2021-07-22T10:41:47
severo
[]
Before: copy/paste resulted in an error because the square bracket characters `[]` are special characters in bash.
true
949,797,014
https://api.github.com/repos/huggingface/datasets/issues/2693
https://github.com/huggingface/datasets/pull/2693
2,693
Fix OSCAR Esperanto
closed
0
2021-07-21T14:43:50
2021-07-21T14:53:52
2021-07-21T14:53:51
lhoestq
[]
The Esperanto part (original) of OSCAR has the wrong number of examples: ```python from datasets import load_dataset raw_datasets = load_dataset("oscar", "unshuffled_original_eo") ``` raises ```python NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=314064514, num_examples=121168, dataset_name='oscar')}] ``` I updated the number of expected examples in dataset_infos.json cc @sgugger
true
949,765,484
https://api.github.com/repos/huggingface/datasets/issues/2692
https://github.com/huggingface/datasets/pull/2692
2,692
Update BibTeX entry
closed
0
2021-07-21T14:23:35
2021-07-21T15:31:41
2021-07-21T15:31:40
albertvillanova
[]
Update BibTeX entry
true
949,758,379
https://api.github.com/repos/huggingface/datasets/issues/2691
https://github.com/huggingface/datasets/issues/2691
2,691
xtreme / pan-x cannot be downloaded
closed
5
2021-07-21T14:18:05
2021-07-26T09:34:22
2021-07-26T09:34:22
severo
[ "bug" ]
## Describe the bug Dataset xtreme / pan-x cannot be loaded. It seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/12h3qqog6q4bjve/panx_dataset.tar?dl=1 ``` ## Environment info - `datasets` version: 1.9.0 - Platform: macOS-11.4-x86_64-i386-64bit - Python version: 3.8.11 - PyArrow version: 4.0.1
false
949,574,500
https://api.github.com/repos/huggingface/datasets/issues/2690
https://github.com/huggingface/datasets/pull/2690
2,690
Docs details
closed
1
2021-07-21T10:43:14
2021-07-27T18:40:54
2021-07-27T18:40:54
severo
[]
Some comments here: - the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + a one-liner that installs all the requirements / alternatively a requirements.txt file) - "If you’d like to play with the examples, you must install it from source." in https://huggingface.co/docs/datasets/installation.html: it's not clear to me what this means (what are these "examples"?) - in https://huggingface.co/docs/datasets/loading_datasets.html: "or AWS bucket if it’s not already stored in the library". It's the only place in the doc (aside from the docstring https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=aws bucket#datasets.list_datasets) where the "AWS bucket" is mentioned. It's not easy to understand what this means. Maybe explain more, and link to https://s3.amazonaws.com/datasets.huggingface.co and/or https://huggingface.co/docs/datasets/filesystems.html. - example in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files is obsoleted by https://github.com/huggingface/datasets/pull/2326. Also: see https://github.com/huggingface/datasets/issues/2691 for a bug on this specific dataset. - in https://huggingface.co/docs/datasets/loading_datasets.html#manually-downloading-files the doc says "After you’ve downloaded the files, you can point to the folder hosting them locally with the data_dir argument as follows:", but the following example does not show how to use `data_dir` - in https://huggingface.co/docs/datasets/loading_datasets.html#csv-files, it would be nice to have an URL to the csv loader reference (but I'm not sure there is one in the API reference). This comment applies in many places in the doc: I would want the API reference to contain doc for all the code/functions/classes... and I would want a lot more links inside the doc pointing to the API entries. - in the API reference (docstrings) I would prefer "SOURCE" to link to github instead of a copy of the code inside the docs site (eg. https://github.com/huggingface/datasets/blob/master/src/datasets/load.py#L711 instead of https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset) - it seems like not all the API is exposed in the doc. For example, there is no doc for [`disable_progress_bar`](https://github.com/huggingface/datasets/search?q=disable_progress_bar), see https://huggingface.co/docs/datasets/search.html?q=disable_progress_bar, even if the code contains docstrings. Does it mean that the function is not officially supported? (otherwise, maybe it also deserves a mention in https://huggingface.co/docs/datasets/package_reference/logging_methods.html) - in https://huggingface.co/docs/datasets/loading_datasets.html?highlight=most%20efficient%20format%20have%20json%20files%20consisting%20multiple%20json%20objects#json-files, "The most efficient format is to have JSON files consisting of multiple JSON objects, one per line, representing individual data rows:", maybe link to https://en.wikipedia.org/wiki/JSON_streaming#Line-delimited_JSON and give it a name ("line-delimited JSON"? "JSON Lines" as in https://huggingface.co/docs/datasets/processing.html#exporting-a-dataset-to-csv-json-parquet-or-to-python-objects ?) 
- in https://huggingface.co/docs/datasets/loading_datasets.html, for the local files sections, it would be nice to provide sample csv / json / text files to download, so that it's easier for the reader to try to load them (instead: they won't try) - the doc explains how to shard a dataset, but does not explain why and when a dataset should be sharded (I have no idea... for [parallelizing](https://huggingface.co/docs/datasets/processing.html#multiprocessing)?). Nor does it give an idea of the number of shards a dataset typically should have and why. - the code example in https://huggingface.co/docs/datasets/processing.html#mapping-in-a-distributed-setting does not work, because `training_args` has not been defined before in the doc.
true