Dataset structure (one row per issue or pull request in huggingface/datasets):

| column | dtype | min | max | nullable |
| --- | --- | --- | --- | --- |
| id | int64 | 599M | 3.18B | |
| number | int64 | 1 | 7.65k | |
| title | string (lengths) | 1 | 290 | |
| state | string (2 classes) | | | |
| body | string (lengths) | 0 | 228k | |
| is_pull_request | bool (1 class) | | | |
| created_at | string (date) | 2020-04-14 10:18:02 | 2025-06-26 12:23:48 | |
| updated_at | string (date) | 2020-04-27 16:04:17 | 2025-06-26 14:02:38 | |
| closed_at | string (lengths) | 20 | 20 | yes |
| user_login | string (lengths) | 3 | 26 | |
| author_association | string (4 classes) | | | |
| pr_url | string (lengths) | 46 | 49 | yes |
| pr_merged_at | string (lengths) | 20 | 20 | yes |
| comments_count | int64 | 0 | 70 | |
| reactions_total | int64 | 0 | 61 | |
| reactions_plus1 | int64 | 0 | 39 | |
| reactions_heart | int64 | 0 | 22 | |
| draft | bool (2 classes) | | | |
| locked | bool (1 class) | | | |
| labels | list (lengths) | 0 | 4 | |
| html_url | string (lengths) | 46 | 51 | |
| is_pr_url | bool (2 classes) | | | |
| comments | list (lengths) | 0 | 30 | |

Sample rows (newest first):
[PR #579] Doc metrics (closed)
Adding documentation on metrics loading/using/sharing
author: thomwolf (MEMBER) · opened: 2020-09-07T10:15:24Z · updated: 2020-09-10T13:06:11Z · closed: 2020-09-10T13:06:10Z · merged: 2020-09-10T13:06:10Z
id: 694,947,599 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/579 · comment texts: []
[PR #578] Add CommonGen Dataset (closed)
CC Authors: @yuchenlin @MichaelZhouwang
author: JetRunner (CONTRIBUTOR) · opened: 2020-09-07T08:17:17Z · updated: 2020-09-07T11:50:29Z · closed: 2020-09-07T11:49:07Z · merged: 2020-09-07T11:49:07Z
id: 694,849,940 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/578 · comment texts: []
[Issue #577] Some languages in wikipedia dataset are not loading (closed)
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:

```python
import nlp

langs = ['ar', 'af', 'an']

for lang in langs:
    data = nlp.load_dataset('wikipedia', f'20200501.{lang}', beam_runner='DirectRunner', split='train')
    print(lang, len(data))
```

Here's what I see for 'ar' (it gets stuck there):

```
Downloading and preparing dataset wikipedia/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguilar/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...
```

Note that those languages are indeed in the list of expected languages. Any suggestions on how to work around this? Thanks!
author: gaguilar (CONTRIBUTOR) · opened: 2020-09-07T01:16:29Z · updated: 2023-04-11T22:50:48Z · closed: 2022-10-11T11:16:04Z
id: 694,607,148 · comments: 16 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/577
comment texts:
[ "Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for \"fr\" and \"en\" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for langua...
[PR #576] Fix the code block in doc (closed)
author: JetRunner (CONTRIBUTOR) · opened: 2020-09-06T11:40:55Z · updated: 2020-09-07T07:37:32Z · closed: 2020-09-07T07:37:18Z · merged: 2020-09-07T07:37:18Z
id: 694,348,645 · comments: 1 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/576 · comment texts: [ "thanks :)" ]
[Issue #575] Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. (closed)
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines): ``` /net/vaosl01/opt/NFS/su0/miniconda3/envs/hf/lib/python3.7/site-packages/nlp/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only) 354 " to False." 355 ) --> 356 raise ConnectionError("Couldn't reach {}".format(url)) 357 358 # From now on, connected is True. ConnectionError: Couldn't reach https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc ``` I tried glue with cola and sst2. I got the same error, just instead of mrpc in the URL, it was replaced with cola and sst2. Since this was not working, I thought I'll try another dataset. So I tried downloading the imdb dataset: ``` ds = load_dataset('imdb', split='train') ``` This downloads the data, but it just blocks after that: ``` Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4.56k/4.56k [00:00<00:00, 1.38MB/s] Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2.07k/2.07k [00:00<00:00, 1.15MB/s] Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to /net/vaosl01/opt/NFS/su0/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743... Downloading: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 84.1M/84.1M [00:07<00:00, 11.1MB/s] ``` I checked the folder `$HF_HOME/datasets/downloads/extracted/<id>/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there was no file. However, the test folder seemed to be populating. The last time I checked it was 327M. I thought the Imdb dataset was smaller than that. My questions are: 1. Why is it still blocking? 
Is it still downloading? 2. I specified split as train, so why is the test folder being populated? 3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here? Thanks.
author: sudarshan85 (NONE) · opened: 2020-09-04T21:46:25Z · updated: 2020-09-22T10:41:36Z · closed: 2020-09-22T10:41:36Z
id: 693,691,611 · comments: 6 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/575
comment texts:
[ "Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.", "Thanks for the report, I'll give a look!", "I am also seeing a similar err...
[PR #574] Add modules cache (closed)
As discussed in #554, we should use a module cache directory outside of the python packages directory, since we may not have write permissions. I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`. In this directory, a module `nlp_modules` is created so that datasets can be added to `nlp_modules.datasets` and metrics to `nlp_modules.metrics`. `nlp_modules` doesn't exist on PyPI. If someone using cloudpickle still wants to have the downloaded dataset/metric scripts inside the nlp directory, it is still possible to change the environment variable HF_MODULES_CACHE to a path inside the nlp lib.
author: lhoestq (MEMBER) · opened: 2020-09-04T16:30:03Z · updated: 2020-09-22T10:27:08Z · closed: 2020-09-07T09:01:35Z · merged: 2020-09-07T09:01:35Z
id: 693,364,853 · comments: 2 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/574
comment texts:
[ "All the tests pass on my side. Not sure if it is a cache issue or a pytest issue or a circleci issue.\r\nEDIT: I have the same error on google colab. Trying to fix that", "I think I fixed it (sorry didn't notice you were on it as well)" ]
[PR #573] Faster caching for text dataset (closed)
As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time. To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each file to get a hash.
author: lhoestq (MEMBER) · opened: 2020-09-04T11:58:34Z · updated: 2020-09-04T12:53:24Z · closed: 2020-09-04T12:53:23Z · merged: 2020-09-04T12:53:23Z
id: 693,091,790 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/573 · comment texts: []
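A minimal sketch of the caching idea above, assuming a sha256 over path plus modification time stands in for the library's actual hashing (the function name is hypothetical):

```python
import hashlib
import os

def quick_files_hash(data_files):
    # Hash each file's path and last-modified timestamp instead of its
    # content, so the cache key is cheap to compute even for huge files.
    m = hashlib.sha256()
    for path in sorted(data_files):
        stat = os.stat(path)
        m.update(path.encode("utf-8"))
        m.update(str(stat.st_mtime).encode("utf-8"))
    return m.hexdigest()
```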
[PR #572] Add CLUE Benchmark (11 datasets) (closed)
Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE).
author: JetRunner (CONTRIBUTOR) · opened: 2020-09-04T01:57:40Z · updated: 2020-09-07T09:59:11Z · closed: 2020-09-07T09:59:10Z · merged: 2020-09-07T09:59:10Z
id: 692,598,231 · comments: 3 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/572
comment texts:
[ "Thanks, @lhoestq! I've addressed the comments. \r\nAlso, I have tried to use `ClassLabel` [when possible](https://github.com/huggingface/nlp/pull/572/files#diff-1026ac7d7b78bf029cb0ebe63162c77dR297). Is there still somewhere else we can use `ClassLabel`? ", "I believe CI failure is unrelated.", "Great job! " ]
[PR #571] Serialization (closed)
I added `save` and `load` method to serialize/deserialize a dataset object in a folder. It moves the arrow files there (or write them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info that are in a separate file `dataset_info.json`. Example: ```python import nlp squad = nlp.load_dataset("squad", split="train") squad.save("tmp/squad") squad = nlp.Dataset.load("tmp/squad") ``` `ls tmp/squad` ``` dataset_info.json squad-train.arrow state.json ``` `cat tmp/squad/state.json` ```json { "_data": null, "_data_files": [ { "filename": "squad-train.arrow", "skip": 0, "take": 87599 } ], "_fingerprint": "61f452797a686bc1", "_format_columns": null, "_format_kwargs": {}, "_format_type": null, "_indexes": {}, "_indices": null, "_indices_data_files": [], "_inplace_history": [ { "transforms": [] } ], "_output_all_columns": false, "_split": "train" } ``` `cat tmp/squad/dataset_info.json` ```json { "builder_name": "squad", "citation": "@article{2016arXiv160605250R,\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\n Konstantin and {Liang}, Percy},\n title = \"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\",\n journal = {arXiv e-prints},\n year = 2016,\n eid = {arXiv:1606.05250},\n pages = {arXiv:1606.05250},\narchivePrefix = {arXiv},\n eprint = {1606.05250},\n}\n", "config_name": "plain_text", "dataset_size": 89789763, "description": "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n", "download_checksums": { "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json": { "checksum": "95aa6a52d5d6a735563366753ca50492a658031da74f301ac5238b03966972c9", "num_bytes": 4854279 }, "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json": { "checksum": "3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955", "num_bytes": 30288272 } }, "download_size": 35142551, "features": { "answers": { "_type": "Sequence", "feature": { "answer_start": { "_type": "Value", "dtype": "int32", "id": null }, "text": { "_type": "Value", "dtype": "string", "id": null } }, "id": null, "length": -1 }, "context": { "_type": "Value", "dtype": "string", "id": null }, "id": { "_type": "Value", "dtype": "string", "id": null }, "question": { "_type": "Value", "dtype": "string", "id": null }, "title": { "_type": "Value", "dtype": "string", "id": null } }, "homepage": "https://rajpurkar.github.io/SQuAD-explorer/", "license": "", "post_processed": { "features": null, "resources_checksums": { "train": {}, "train[:10%]": {} } }, "post_processing_size": 0, "size_in_bytes": 124932314, "splits": { "train": { "dataset_name": "squad", "name": "train", "num_bytes": 79317110, "num_examples": 87599 }, "validation": { "dataset_name": "squad", "name": "validation", "num_bytes": 10472653, "num_examples": 10570 } }, "supervised_keys": null, "version": { "description": "New split API (https://tensorflow.org/datasets/splits)", "major": 1, "minor": 0, "nlp_version_to_prepare": null, "patch": 0, "version_str": "1.0.0" } } ```
author: lhoestq (MEMBER) · opened: 2020-09-03T16:21:38Z · updated: 2020-09-07T07:46:08Z · closed: 2020-09-07T07:46:07Z · merged: 2020-09-07T07:46:07Z
id: 692,109,287 · comments: 4 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/571
comment texts:
[ "I've added save/load for dataset dicts.\r\n\r\nI agree that in the future we should also have a way to save indexes too, and also the in-place history of transforms.\r\n\r\nAlso I understand that it would be cool to have the load function directly at the root of the library, but I'm not sure this should be inside ...
[PR #570] add reuters21578 dataset (closed)
Reopening the PR after the merge.
author: jplu (CONTRIBUTOR) · opened: 2020-09-03T10:25:47Z · updated: 2020-09-03T10:46:52Z · closed: 2020-09-03T10:46:51Z · merged: 2020-09-03T10:46:51Z
id: 691,846,397 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/570 · comment texts: []
[PR #569] Revert "add reuters21578 dataset" (closed)
Reverts huggingface/nlp#471
author: jplu (CONTRIBUTOR) · opened: 2020-09-03T10:06:16Z · updated: 2020-09-03T10:07:13Z · closed: 2020-09-03T10:07:12Z · merged: 2020-09-03T10:07:12Z
id: 691,832,720 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/569 · comment texts: []
[Issue #568] `metric.compute` throws `ArrowInvalid` error (closed)
I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.compute(predictions=generated_str, references=gold_str, rouge_types=['rouge2', 'rouge1', 'rougeL']) File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 224, in compute self.finalize(timeout=timeout) File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 213, in finalize self.data = Dataset(**reader.read_files(node_files)) File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 217, in read_files dataset_kwargs = self._read_files(files=files, info=self._info, original_instructions=original_instructions) File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 162, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict) File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 276, in _get_dataset_from_filename f = pa.ipc.open_stream(mmap) File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 173, in open_stream return RecordBatchStreamReader(source) File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 64, in __init__ self._open(source) File "pyarrow/ipc.pxi", line 469, in pyarrow.lib._RecordBatchStreamReader._open File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: Tried reading schema message, was null or length 0 ```
author: ibeltagy (NONE) · opened: 2020-09-03T04:56:57Z · updated: 2020-10-05T16:33:53Z · closed: 2020-10-05T16:33:53Z
id: 691,638,656 · comments: 3 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/568
comment texts:
[ "Hmm might be related to what we are solving in #564", "Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ", "Closin...
[PR #567] Fix BLEURT metrics for backward compatibility (closed)
Fix #565
author: thomwolf (MEMBER) · opened: 2020-09-02T21:22:35Z · updated: 2020-09-03T07:29:52Z · closed: 2020-09-03T07:29:50Z · merged: 2020-09-03T07:29:50Z
id: 691,430,245 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/567 · comment texts: []
[PR #566] Remove logger pickling to fix gg colab issues (closed)
`logger` objects are not picklable in Google Colab, contrary to `logger` objects in jupyter notebooks or in python shells. It creates some issues in Google Colab right now. Indeed, by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in an error (full stacktrace [here](http://pastebin.fr/64330)):

```python
/usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__()

TypeError: no default __reduce__ due to non-trivial __cinit__
```

To fix that I no longer dump the transform (`_map_single`, `select`, etc.), but the full name only (`nlp.arrow_dataset.Dataset._map_single`, `nlp.arrow_dataset.Dataset.select`, etc.)
author: lhoestq (MEMBER) · opened: 2020-09-02T16:16:21Z · updated: 2020-09-03T16:31:53Z · closed: 2020-09-03T16:31:52Z · merged: 2020-09-03T16:31:52Z
id: 691,160,208 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/566 · comment texts: []
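A rough sketch of the dump-the-name trick described above; the two helpers are hypothetical illustrations, not the PR's code:

```python
import importlib

def dump_function_name(func):
    # Serialize only the dotted path, so unpicklable attributes attached to
    # the function (like a logger) never go through pickle.
    return f"{func.__module__}.{func.__qualname__}"

def load_function_by_name(dotted_name):
    # Import the longest importable prefix, then getattr through the rest
    # (handles names like `nlp.arrow_dataset.Dataset._map_single`).
    parts = dotted_name.split(".")
    for i in range(len(parts), 0, -1):
        try:
            obj = importlib.import_module(".".join(parts[:i]))
        except ImportError:
            continue
        for attr in parts[i:]:
            obj = getattr(obj, attr)
        return obj
    raise ImportError(f"Cannot resolve {dotted_name!r}")

name = dump_function_name(len)              # 'builtins.len'
assert load_function_by_name(name) is len
```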
[Issue #565] No module named 'nlp.logging' (closed)
Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 >>> bleurt = nlp.load_metric("bleurt") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 443, in load_metric metric_cls = import_main_class(module_path, dataset=False) File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 61, in import_main_class module = importlib.import_module(module_path) File "/home/melody/anaconda3/envs/transformers/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/metrics/bleurt/43448cf2959ea81d3ae0e71c5c8ee31dc15eed9932f197f5f50673cbcecff2b5/bleurt.py", line 20, in <module> from nlp.logging import get_logger ModuleNotFoundError: No module named 'nlp.logging' ``` Just to show once again that I can't import the logging module: ``` >>> import nlp 2020-09-02 13:48:38.190621: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 >>> nlp.__version__ '0.4.0' >>> from nlp.logging import get_logger Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'nlp.logging' ```
author: melody-ju (NONE) · opened: 2020-09-02T13:49:50Z · updated: 2020-09-03T07:29:50Z · closed: 2020-09-03T07:29:50Z
id: 691,039,121 · comments: 2 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/565
comment texts:
[ "Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder fro...
[PR #564] Wait for writing in distributed metrics (closed)
There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes hadn't started writing. To fix that, I added a custom locking mechanism that waits for the file to exist before trying to read it.
author: lhoestq (MEMBER) · opened: 2020-09-02T12:58:50Z · updated: 2020-09-09T09:13:23Z · closed: 2020-09-09T09:13:22Z · merged: 2020-09-09T09:13:22Z
id: 691,000,020 · comments: 7 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/564
comment texts:
[ "I agree this fix the problem for the CI where the files are always created in a new and clean temporary directory.\r\n\r\nHowever, in a general setting of a succession of fast distributed operation, the files could already exist from previous metrics runs but one process may still finish before another has even st...
[PR #563] [Large datasets] Speed up download and processing (closed)
Various improvements to speed up the creation and processing of large-scale datasets. Currently:
- distributed downloads
- remove etag from datafiles hashes to spare a request when restarting a failed download
author: thomwolf (MEMBER) · opened: 2020-09-02T10:31:54Z · updated: 2020-09-09T09:03:33Z · closed: 2020-09-09T09:03:32Z · merged: 2020-09-09T09:03:32Z
id: 690,908,674 · comments: 2 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/563
comment texts:
[ "Looks all good :)\r\nI rebased from master and added a test for parallel `map_nested`", "you're da best" ]
[PR #562] [Reproducibility] Allow to pin versions of datasets/metrics (closed)
Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of the dataset and metric scripts:

```python
dataset = nlp.load_dataset('squad', version='1.0.0')
metric = nlp.load_metric('squad', version='1.0.0')
```

Notes:
- version numbers are the release versions of the library
- currently only possible for canonical datasets/metrics, i.e. those integrated in the GitHub repo of the library
author: thomwolf (MEMBER) · opened: 2020-09-02T10:30:13Z · updated: 2023-09-24T09:49:42Z · closed: 2020-09-09T13:04:54Z · merged: null
id: 690,907,604 · comments: 1 · reactions: 0 (+1: 0, heart: 0) · draft: true · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/562 · comment texts: [ "Closing this one in favor of #584 " ]
[PR #561] Made `share_dataset` more readable (closed)
author: TevenLeScao (CONTRIBUTOR) · opened: 2020-09-02T09:34:48Z · updated: 2020-09-03T09:00:30Z · closed: 2020-09-03T09:00:29Z · merged: 2020-09-03T09:00:29Z
id: 690,871,415 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/561 · comment texts: []
[Issue #560] Using custom DownloadConfig results in an error (closed)
## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reproduce ### Example without DownloadConfig --> works ```python import os os.environ["HF_HOME"] = "/data/hf-test-without-dl-config-01/" import logging import nlp logging.basicConfig(level=logging.INFO) if __name__ == "__main__": imdb = nlp.load_dataset(path="imdb") ``` ### Example with DownloadConfig --> doesn't work ```python import os os.environ["HF_HOME"] = "/data/hf-test-with-dl-config-01/" import logging import nlp from nlp.utils import DownloadConfig logging.basicConfig(level=logging.INFO) if __name__ == "__main__": download_config = DownloadConfig() imdb = nlp.load_dataset(path="imdb", download_config=download_config) ``` Error traceback: ``` Traceback (most recent call last): File "/.../example_with_dl_config.py", line 13, in <module> imdb = nlp.load_dataset(path="imdb", download_config=download_config) File "/.../python3.6/python3.6/site-packages/nlp/load.py", line 549, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 463, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 518, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/.../python3.6/python3.6/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/imdb.py", line 86, in _split_generators arch_path = dl_manager.download_and_extract(_DOWNLOAD_URL) File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 220, in download_and_extract return self.extract(self.download(url_or_urls)) File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 158, in download self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths) File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 108, in _record_sizes_checksums self._recorded_sizes_checksums[url] = get_size_checksum_dict(path) File "/.../python3.6/python3.6/site-packages/nlp/utils/info_utils.py", line 79, in get_size_checksum_dict with open(path, "rb") as f: IsADirectoryError: [Errno 21] Is a directory: '/data/hf-test-with-dl-config-01/datasets/extracted/b6802c5b61824b2c1f7dbf7cda6696b5f2e22214e18d171ce1ed3be90c931ce5' ```
author: ynouri (NONE) · opened: 2020-09-01T22:23:02Z · updated: 2022-10-04T17:23:45Z · closed: 2022-10-04T17:23:45Z
id: 690,488,764 · comments: 6 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/560
comment texts:
[ "From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\...
[PR #559] Adding the KILT knowledge source and tasks (closed)
This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:

```python
import nlp

kilt_wikipedia = nlp.load_dataset('kilt_wikipedia')
kilt_tasks = nlp.load_dataset('kilt_tasks')
triviaqa = nlp.load_dataset('trivia_qa', 'unfiltered.nocontext')

triviaqa_map = {}
for k in ['train', 'validation', 'test']:
    triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(triviaqa[k]['question_id'])])
    kilt_tasks[k + '_triviaqa'] = kilt_tasks[k + '_triviaqa'].filter(lambda x: x['id'] in triviaqa_map)
    kilt_tasks[k + '_triviaqa'].map(lambda x: {'input': triviaqa[k][triviaqa_map[x['id']]]['question']})
```

It would be great to have the dataset by Monday, which is when the paper should land on arXiv and @fabiopetroni is planning on tweeting about the paper and the `facebookresearch` repository for the dataset.
author: yjernite (MEMBER) · opened: 2020-09-01T20:05:13Z · updated: 2020-09-04T18:05:47Z · closed: 2020-09-04T18:05:47Z · merged: 2020-09-04T18:05:47Z
id: 690,411,263 · comments: 1 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/559 · comment texts: [ "Feel free to merge when you are happy with it @yjernite :-)" ]
[PR #558] Rerun pip install -e (closed)
Hopefully it fixes the github actions
author: lhoestq (MEMBER) · opened: 2020-09-01T17:24:39Z · updated: 2020-09-01T17:24:51Z · closed: 2020-09-01T17:24:50Z · merged: 2020-09-01T17:24:50Z
id: 690,318,105 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/558 · comment texts: []
[PR #557] Fix a few typos (closed)
author: julien-c (MEMBER) · opened: 2020-09-01T15:03:24Z · updated: 2020-09-02T07:39:08Z · closed: 2020-09-02T07:39:07Z · merged: 2020-09-02T07:39:06Z
id: 690,220,135 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/557 · comment texts: []
[PR #556] Add DailyDialog (closed)
http://yanran.li/dailydialog.html https://arxiv.org/pdf/1710.03957.pdf
author: julien-c (MEMBER) · opened: 2020-09-01T15:01:15Z · updated: 2020-09-03T15:42:03Z · closed: 2020-09-03T15:38:39Z · merged: 2020-09-03T15:38:39Z
id: 690,218,423 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/556 · comment texts: []
[PR #555] Upgrade pip in benchmark github action (closed)
It looks like it fixes the `import nlp` issue we have
author: lhoestq (MEMBER) · opened: 2020-09-01T14:37:26Z · updated: 2020-09-01T15:26:16Z · closed: 2020-09-01T15:26:15Z · merged: 2020-09-01T15:26:15Z
id: 690,197,725 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/555 · comment texts: []
[Issue #554] nlp downloads to its module path (closed)
I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:

```python
>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 530, in load_dataset
    module_path, hash = prepare_module(path, download_config=download_config, dataset=True)
  File "/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/load.py", line 329, in prepare_module
    os.makedirs(main_folder_path, exist_ok=True)
  File "/nix/store/685kq8pyhrvajah1hdsfn4q7gm3j4yd4-python3-3.8.5/lib/python3.8/os.py", line 223, in makedirs
    mkdir(name, mode)
OSError: [Errno 30] Read-only file system: '/nix/store/2yhik0hhqayksmkkfb0ylqp8cf5wa5wp-python3-3.8.5-env/lib/python3.8/site-packages/nlp/datasets/squad'
```

Do you have any suggested workaround for this issue? Perhaps overriding the default value for `force_local_path` of `prepare_module`?
author: danieldk (MEMBER) · opened: 2020-09-01T14:06:14Z · updated: 2020-09-11T06:19:24Z · closed: 2020-09-11T06:19:24Z
id: 690,173,214 · comments: 8 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/554
comment texts:
[ "Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?", "> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are in...
[PR #553] [Fix GitHub Actions] test adding tmate (closed)
author: thomwolf (MEMBER) · opened: 2020-09-01T13:28:03Z · updated: 2021-05-05T18:24:38Z · closed: 2020-09-03T09:01:13Z · merged: null
id: 690,143,182 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/553 · comment texts: []
[PR #552] Add multiprocessing (closed)
Adding multiprocessing to `.map`. It works in 3 steps:
- shard the dataset into `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets

Example of usage:

```python
from nlp import load_dataset

dataset = load_dataset("squad", split="train")

def function(x):
    return {"lowered": x.lower()}

processed = dataset.map(
    function,
    input_columns=["context"],
    num_proc=4,
    cache_file_name="playground/tmp.arrow",
    load_from_cache_file=False,
)
```

Here it writes 4 files depending on the process rank:
- `playground/tmp_00000_of_00004.arrow`
- `playground/tmp_00001_of_00004.arrow`
- `playground/tmp_00002_of_00004.arrow`
- `playground/tmp_00003_of_00004.arrow`

The suffix format can be specified by the user. If the `cache_file_name` is not specified, it writes into separate files depending on the fingerprint, as usual.

I still need to:
- write tests for this
- try to improve the logging (currently it shows 4 progress bars, but if one finishes before the others, then the following messages are written over the progress bars)
author: lhoestq (MEMBER) · opened: 2020-09-01T11:56:17Z · updated: 2020-09-22T15:11:56Z · closed: 2020-09-02T10:01:25Z · merged: 2020-09-02T10:01:25Z
id: 690,079,429 · comments: 10 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/552
comment texts:
[ "Logging looks like\r\n\r\n```\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #0 will write at playground/tmp_00000_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #1 will write at playground/tmp_00001_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess...
[PR #551] added HANS dataset (closed)
Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems.
author: TevenLeScao (CONTRIBUTOR) · opened: 2020-09-01T10:42:02Z · updated: 2020-09-01T12:17:10Z · closed: 2020-09-01T12:17:10Z · merged: 2020-09-01T12:17:10Z
id: 690,034,762 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/551 · comment texts: []
[PR #550] [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539) (closed)
Hi, I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I ran this command from the nlp root directory:

```
python nlp-cli test ./datasets/lince --save_infos --all_configs
```

**NOTE**: I needed to change [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/commands/dummy_data.py#L8) from `from .utils.logging import get_logger` to `from nlp.utils.logging import get_logger`, otherwise the script was not able to import `get_logger`. However, I did not include that in this PR since that could have been just my environment (and another PR could be fixing this already if it is actually an issue).
author: gaguilar (CONTRIBUTOR) · opened: 2020-09-01T03:27:03Z · updated: 2020-09-03T09:06:01Z · closed: 2020-09-03T09:06:01Z · merged: 2020-09-03T09:06:01Z
id: 689,775,914 · comments: 2 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/550
comment texts:
[ "Thanks a lot for that!\r\nThe line you are mentioning is a bug indeed, do you mind fixing it at the same time?", "No worries! \r\n\r\nI pushed right away the fix, but then I realized that the master branch already had it, so I ended up merging the master branch with lince locally and then overwriting the previou...
[PR #549] Fix bleurt logging import (closed)
Bleurt started throwing an error in some code we have. This looks like the fix, but... It's also unnerving that even a prebuilt docker image with pinned versions can be working one day and then fail the next (especially for production systems). Any way for us to pin your metrics code so that it is guaranteed not to change and possibly fail on repository changes? Thanks (and also for your continued work on the lib...)
author: jbragg (CONTRIBUTOR) · opened: 2020-09-01T03:01:25Z · updated: 2020-09-03T18:04:46Z · closed: 2020-09-03T09:04:20Z · merged: null
id: 689,766,465 · comments: 2 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/549
comment texts:
[ "That’s a good point that we started to discuss internally as well. We should pin the dataset en metrics code by default indeed.\r\nLet’s update this in the coming release.", "Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release)." ...
[PR #548] [Breaking] Switch text loading to multi-threaded PyArrow loading (closed)
Test if we can get better performance for large-scale text datasets by using multi-threaded text file loading based on the Apache Arrow multi-threaded CSV loader. If it works ok, it would fix #546.

**Breaking change**: The text lines now do not include final line-breaks anymore.
author: thomwolf (MEMBER) · opened: 2020-08-31T15:15:41Z · updated: 2020-09-08T10:19:58Z · closed: 2020-09-08T10:19:57Z · merged: 2020-09-08T10:19:57Z
id: 689,285,996 · comments: 5 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/548
comment texts:
[ "Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` ...
[PR #547] [Distributed] Making loading distributed datasets a bit safer (closed)
Add some file-locks during dataset loading
author: thomwolf (MEMBER) · opened: 2020-08-31T14:51:34Z · updated: 2020-08-31T15:16:30Z · closed: 2020-08-31T15:16:29Z · merged: 2020-08-31T15:16:29Z
id: 689,268,589 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/547 · comment texts: []
[Issue #546] Very slow data loading on large dataset (closed)
I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and it is still on the loading step. It does work when the text dataset size is small (about 1 GB), but it doesn't scale. It also uses a single thread during the data loading step.

```python
train_files = glob.glob("xxx/*.txt", recursive=True)
random.shuffle(train_files)
print(train_files)

dataset = nlp.load_dataset('text',
                           data_files=train_files,
                           name="customDataset",
                           version="1.0.0",
                           cache_dir="xxx/nlp")
```

Is there something that I am missing?
author: agemagician (NONE) · opened: 2020-08-31T12:57:23Z · updated: 2024-01-02T20:26:24Z · closed: 2020-09-08T10:19:57Z
id: 689,186,526 · comments: 28 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/546
comment texts:
[ "When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much...
[Issue #545] New release coming up for this library (closed)
Hi all, a few words on the roadmap for this library. The next release will be a big one and is planned for the end of this week. In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, kNN-LM and many other fast dataset retrieval techniques), it will:
- have support for multi-modal datasets
- include various significant improvements on speed for standard processing (map, shuffling, ...)
- have a better support for metrics (better caching, and a robust API) and a bigger focus on reproducibility
- change the name to the final name (voted by the community): `datasets`
- be the 1.0.0 release, as we think the API will be mostly stabilized from now on
author: thomwolf (MEMBER) · opened: 2020-08-31T11:37:38Z · updated: 2021-01-13T10:59:04Z · closed: 2021-01-13T10:59:04Z
id: 689,138,878 · comments: 1 · reactions: 4 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/545 · comment texts: [ "Update: release is planed mid-next week." ]
[PR #544] [Distributed] Fix load_dataset error when multiprocessing + add test (closed)
Fix #543 + add test
author: thomwolf (MEMBER) · opened: 2020-08-31T09:30:10Z · updated: 2020-08-31T11:15:11Z · closed: 2020-08-31T11:15:10Z · merged: 2020-08-31T11:15:10Z
id: 689,062,519 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/544 · comment texts: []
[Issue #543] nlp.load_dataset is not safe for multi processes when loading from local files (closed)
Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])` concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438 Likely because multiple processes step into download_and_prepare, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/load.py#L550-L554 This can happen when launching distributed training with commands like `python -m torch.distributed.launch --nproc_per_node 4` on a new collection of files never loaded before. I can create a PR that puts in some file locks. It would be helpful if I can be informed of the convention for naming and placement of the lock.
author: luyug (NONE) · opened: 2020-08-30T03:20:34Z · updated: 2020-08-31T11:15:10Z · closed: 2020-08-31T11:15:10Z
id: 688,644,407 · comments: 1 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/543 · comment texts: [ "I'll take a look!" ]
[PR #542] Add TensorFlow example (closed)
Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour.
author: jplu (CONTRIBUTOR) · opened: 2020-08-29T15:39:27Z · updated: 2020-08-31T09:49:20Z · closed: 2020-08-31T09:49:19Z · merged: 2020-08-31T09:49:19Z
id: 688,555,036 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/542 · comment texts: []
[Issue #541] Best practices for training tokenizers with nlp (closed)
Hi, thank you for developing this library. What do you think are the best practices for training tokenizers using `nlp`? In the document and examples, I could only find pre-trained tokenizers used.
author: moskomule (NONE) · opened: 2020-08-29T12:06:49Z · updated: 2022-10-04T17:28:04Z · closed: 2022-10-04T17:28:04Z
id: 688,521,224 · comments: 1 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/541
comment texts:
[ "Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library" ]
[PR #540] [BUGFIX] Fix Race Dataset Checksum bug (closed)
In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was only considering the `high school` data and ignoring the `middle` one. This PR just fixes it :) Moreover, I have added some descriptions.
author: abarbosa94 (CONTRIBUTOR) · opened: 2020-08-29T07:00:10Z · updated: 2020-09-18T11:42:20Z · closed: 2020-09-18T11:42:20Z · merged: 2020-09-18T11:42:20Z
id: 688,475,884 · comments: 4 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/540
comment texts:
[ "I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?"...
[Issue #539] [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data (closed)
Hi, There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset. How can I update the checksum of the library to solve this issue? The error is below and it also appears in the [nlp viewer](https://huggingface.co/nlp/viewer/?dataset=lince&config=lid_msaea): ```python import nlp nlp.load_dataset('lince', 'lid_msaea') ``` Output: ``` NonMatchingChecksumError: ['https://ritual.uh.edu/lince/libaccess/eyJ1c2VybmFtZSI6ICJodWdnaW5nZmFjZSBubHAiLCAidXNlcl9pZCI6IDExMSwgImVtYWlsIjogImR1bW15QGVtYWlsLmNvbSJ9/lid_msaea.zip'] Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 196, in <module> dts, fail = get(str(option.id), str(conf_option.name) if conf_option else None) File "/home/sasha/streamlit/lib/streamlit/caching.py", line 591, in wrapped_func return get_or_create_cached_value() File "/home/sasha/streamlit/lib/streamlit/caching.py", line 575, in get_or_create_cached_value return_value = func(*args, **kwargs) File "/home/sasha/nlp-viewer/run.py", line 150, in get builder_instance.download_and_prepare() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare download_config.force_download = download_mode == FORCE_REDOWNLOAD File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/builder.py", line 469, in _download_and_prepare File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 36, in verify_checksums raise NonMatchingChecksumError(str(bad_urls)) ``` Thank you in advance! @lhoestq
author: gaguilar (CONTRIBUTOR) · opened: 2020-08-28T19:55:51Z · updated: 2020-09-03T16:34:02Z · closed: 2020-09-03T16:34:01Z
id: 688,323,602 · comments: 3 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/539
comment texts:
[ "Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) ...
[PR #538] [logging] Add centralized logging - Bump-up cache loads to warnings (closed)
Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO). You can use:

```python
nlp.logging.set_verbosity(verbosity: int)
nlp.logging.set_verbosity_info()
nlp.logging.set_verbosity_warning()
nlp.logging.set_verbosity_debug()
nlp.logging.set_verbosity_error()
nlp.logging.get_verbosity() -> int
```

And use the levels:

```python
nlp.logging.CRITICAL
nlp.logging.DEBUG
nlp.logging.ERROR
nlp.logging.FATAL
nlp.logging.INFO
nlp.logging.NOTSET
nlp.logging.WARN
nlp.logging.WARNING
```
author: thomwolf (MEMBER) · opened: 2020-08-28T11:42:29Z · updated: 2020-08-31T11:42:51Z · closed: 2020-08-31T11:42:51Z · merged: 2020-08-31T11:42:50Z
id: 688,015,912 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/538 · comment texts: []
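A short usage sketch of the API listed in this PR, quieting the library to errors only and then restoring info-level output (the dataset call is illustrative):

```python
import nlp

nlp.logging.set_verbosity_error()          # silence everything below ERROR
assert nlp.logging.get_verbosity() == nlp.logging.ERROR

# Per the PR, levels above INFO also disable the tqdm progress bars.
squad = nlp.load_dataset("squad", split="train")

nlp.logging.set_verbosity_info()           # restore info-level output
```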
[Issue #537] [Dataset] RACE dataset Checksums error (closed)
Hi there, I just would like to use this awesome lib to perform a dataset fine-tuning on RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-15-8bf7603ce0ed> in <module> ----> 1 dataset = nlp.load_dataset("race") 2 len(dataset["train"]), len(dataset["validation"]) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 546 547 # Download and prepare data --> 548 builder_instance.download_and_prepare( 549 download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, 550 ) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 460 logger.info("Dataset not on Hf google storage. Downloading and preparing it from source") 461 if not downloaded_from_gcs: --> 462 self._download_and_prepare( 463 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 464 ) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 519 # Checksums verification 520 if verify_infos: --> 521 verify_checksums( 522 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files" 523 ) ~/miniconda3/envs/masters/lib/python3.8/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 36 if len(bad_urls) > 0: 37 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 38 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 39 logger.info("All the checksums matched successfully" + for_verification_name) 40 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['http://www.cs.cmu.edu/~glai1/data/race/RACE.tar.gz'] ```
author: abarbosa94 (CONTRIBUTOR) · opened: 2020-08-27T23:58:16Z · updated: 2020-09-18T12:07:04Z · closed: 2020-09-18T12:07:04Z
id: 687,614,699 · comments: 9 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [ "dataset bug" ] · url: https://github.com/huggingface/datasets/issues/537
comment texts:
[ "`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an...
[PR #536] Fingerprint (closed)
This PR is a continuation of #513 , in which many in-place functions were introduced or updated (cast_, flatten_) etc. However the caching didn't handle these changes. Indeed the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table. To fix that, I added the concept of dataset fingerprint, that is updated after each transform (in place or not), and stored inside the table metadata. When a dataset is created, an initial fingerprint is computed. If the dataset is memory-mapped, then the fingerprint generator doesn't read the table and only looks at the filename. However if the table is in-memory, then the fingerprint generator reads the content of the table using a batched non-crypto hashing. I added a utility class to compute hashes of arbitrary python objects in `fingerprint.py` : `Hasher`. The API is close to standard hashing tools (`.update`, `.hexdigest`). It also supports custom hashing functions depending on object types using a registry like pickle. I added a custom hashing function to hash a `pa.Table` in a batched way, and also for `nlp.DatasetInfo` to leverage its json serialization feature. Note about this PR: This is a draft PR because #513 needs to be merged first. The diff that is shown is for branches fingerprint -> indices (and not master, for now)
author: lhoestq (MEMBER) · opened: 2020-08-27T16:27:09Z · updated: 2020-08-31T14:20:40Z · closed: 2020-08-31T14:20:39Z · merged: 2020-08-31T14:20:39Z
id: 687,378,332 · comments: 1 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/536
comment texts:
[ "I changed the way I implemented fingerprint updates to use decorator functions.\r\n\r\nI also added a new attribute called `_inplace_history` that stores the in-place history of transforms (like cast_, rename_columns, etc.). This history is useful to replay the changes that were done in-place when unpickling a dat...
[PR #535] Benchmarks (closed)
Adding some benchmarks with DVC/CML. To add a new tracked benchmark:
- create a new python benchmarking script in `./benchmarks/`. The script can use the utilities in `./benchmarks/utils.py` and should output a JSON file with results in `./benchmarks/results/`.
- add a new pipeline stage in [dvc.yaml](./dvc.yaml) with the name of your new benchmark.

That's it.
author: thomwolf (MEMBER) · opened: 2020-08-26T11:21:26Z · updated: 2020-08-27T08:40:00Z · closed: 2020-08-27T08:39:59Z · merged: 2020-08-27T08:39:59Z
id: 686,238,315 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/535 · comment texts: []
[Issue #534] `list_datasets()` is broken. (closed)
version = '0.4.0' `list_datasets()` is broken. It results in the following error : ``` In [3]: nlp.list_datasets() Out[3]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/core/formatters.py in __call__(self, obj) 700 type_pprinters=self.type_printers, 701 deferred_pprinters=self.deferred_printers) --> 702 printer.pretty(obj) 703 printer.flush() 704 return stream.getvalue() ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj) 375 if cls in self.type_pprinters: 376 # printer registered in self.type_pprinters --> 377 return self.type_pprinters[cls](obj, self, cycle) 378 else: 379 # deferred printer ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in inner(obj, p, cycle) 553 p.text(',') 554 p.breakable() --> 555 p.pretty(x) 556 if len(obj) == 1 and type(obj) is tuple: 557 # Special case for 1-item tuples. ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in pretty(self, obj) 392 if cls is not object \ 393 and callable(cls.__dict__.get('__repr__')): --> 394 return _repr_pprint(obj, self, cycle) 395 396 return _default_pprint(obj, self, cycle) ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/IPython/lib/pretty.py in _repr_pprint(obj, p, cycle) 698 """A pprint that just redirects to the normal repr function.""" 699 # Find newlines and replace them with p.break_() --> 700 output = repr(obj) 701 lines = output.splitlines() 702 with p.group(): ~/.virtualenvs/san-lgUCsFg_/lib/python3.8/site-packages/nlp/hf_api.py in __repr__(self) 110 111 def __repr__(self): --> 112 single_line_description = self.description.replace("\n", "") 113 return f"nlp.ObjectInfo(id='{self.id}', description='{single_line_description}', files={self.siblings})" 114 AttributeError: 'NoneType' object has no attribute 'replace' ```
author: ashutosh-dwivedi-e3502 (NONE) · opened: 2020-08-26T08:19:01Z · updated: 2020-08-27T06:31:11Z · closed: 2020-08-27T06:31:11Z
id: 686,115,912 · comments: 3 · reactions: 0 (+1: 0, heart: 0) · locked: false · labels: [] · url: https://github.com/huggingface/datasets/issues/534
comment texts:
[ "Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release", "What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```", "Thanks @lhoestq . " ]
[PR #533] Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays (closed)
It should fix the CI problems in #513
author: lhoestq (MEMBER) · opened: 2020-08-25T15:32:44Z · updated: 2020-08-26T08:02:24Z · closed: 2020-08-26T08:02:23Z · merged: 2020-08-26T08:02:23Z
id: 685,585,914 · comments: 0 · reactions: 0 (+1: 0, heart: 0) · draft: false · locked: false · labels: [] · url: https://github.com/huggingface/datasets/pull/533 · comment texts: []
[Issue #532] File exists error when used with TPU (open)
Hi, I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8). I modified [line 131 in the original `run_language_modeling.py`](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py#L131) as follows: ```python # line 131: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_dataset("text", data_files=file_path, split="train") dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True) dataset.set_format(type='torch', columns=['input_ids']) return dataset ``` When I run this with [`xla_spawn.py`](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), I get the following error (it produces one message per core in TPU, which I believe is fine). It seems the current version doesn't take into account distributed training processes as in [this example](https://github.com/huggingface/transformers/blob/a573777901e662ec2e565be312ffaeedef6effec/src/transformers/data/datasets/language_modeling.py#L35-L38)? ``` 08/25/2020 13:59:41 - WARNING - nlp.builder - Using custom data configuration default 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) 08/25/2020 13:59:43 - INFO - nlp.builder - Generating dataset text (/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d) Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... 
Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Exception in device=TPU:6: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:4: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:1: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... Exception in device=TPU:7: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:3: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Downloading and preparing dataset text/default-b0932b2bdbb63283 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/ 447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d... 
Exception in device=TPU:2: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Exception in device=TPU:0: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in 
incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) File 
"/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) Traceback (most recent call last): FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 231, in _start_fn fn(gindex, *args) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/*****/huggingface_roberta/run_language_modeling.py", line 300, in _mp_fn main() File "/home/*****/huggingface_roberta/run_language_modeling.py", line 240, in main train_dataset = get_dataset(data_args, tokenizer=tokenizer) if training_args.do_train else None File "/home/*****/huggingface_roberta/run_language_modeling.py", line 134, in get_dataset dataset = load_dataset("text", data_files=file_path, split="train") File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/load.py", line 546, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 450, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/contextlib.py", line 81, in __enter__ return next(self.gen) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/site-packages/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/anaconda3/envs/torch-xla-1.6/lib/python3.6/os.py", line 220, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/*****/.cache/huggingface/datasets/text/default-b0932b2bdbb63283/0.0.0/447f2bcfa2a721a37bc8fdf23800eade1523cf07f7eada6fe661fe4d070d380d.incomplete' ```
true
2020-08-25T14:36:38Z
2020-09-01T12:14:56Z
null
go-inoue
NONE
null
null
21
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/532
false
[ "I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`", "Could you try to run `dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the d...
685,291,036
531
add concatenate_datasets to the docs
closed
true
2020-08-25T08:40:05Z
2020-08-25T09:02:20Z
2020-08-25T09:02:19Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/531
2020-08-25T09:02:19Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/531
true
[]
684,825,612
530
use ragged tensor by default
closed
I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow. Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which makes things difficult to handle, as it may sometimes return a ragged tensor and sometimes not. Therefore I reverted this behavior to always return a ragged tensor, as we used to do. A short illustration is sketched below.
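For illustration, a minimal sketch of the distinction (assuming TensorFlow is installed; the values are made up):

```python
import tensorflow as tf

# Rows of unequal length cannot be stacked into a dense tf.Tensor,
# so tf.RaggedTensor is the consistent return type for such columns.
ragged = tf.ragged.constant([[1, 2, 3], [4, 5]])
print(ragged.shape)          # (2, None) -- the second dimension is ragged
dense = ragged.to_tensor(0)  # pad explicitly if a dense tensor is needed
```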
true
2020-08-24T17:06:15Z
2021-10-22T19:38:40Z
2020-08-24T19:22:25Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/530
2020-08-24T19:22:25Z
4
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/530
true
[ "Yes I agree. Maybe something that lets specify different format depending on the column ? Especially to better control dtype and shape (and ragged for tf)\r\n\r\nOh and I forgot: this one should also fix the second issue found in #477 for the next release", "I am running into the same issue with the error messag...
684,797,157
529
Add MLSUM
closed
Hello (again :) !), So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess. However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset: the script throws an error as a specific config language is necessary. I think that setting a default language would be a bad workaround for this, so I kept it as it is. Putting all the train files across languages together would also be a bad idea because of the size. Thanks for your help, Rachel
true
2020-08-24T16:18:35Z
2020-08-26T08:04:11Z
2020-08-26T08:04:11Z
RachelKer
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/529
2020-08-26T08:04:10Z
3
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/529
true
[ "Could you test to run the test using the changes in #527 and let me know if it fixes the issue ? If so I'll merge it and we'll be good to go :)", "Hello, it does work on the fixing real dataset branch. Merci Quentin :)", "Nice, glad to hear that :)\r\nde rien !" ]
684,673,673
528
fix missing variable names in docs
closed
fix #524
true
2020-08-24T13:31:48Z
2020-08-25T09:04:04Z
2020-08-25T09:04:03Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/528
2020-08-25T09:04:03Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/528
true
[ "The problem came from `default: ` that is rendered differently and hides the parameter names. I changed `default: ...` to `defaults to ...`" ]
684,632,930
527
Fix config used for slow test on real dataset
closed
As noticed in #470, #474, #476 and #504, the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters. To fix that I replaced it with one test that uses the first config of BUILDER_CONFIGS (`test_load_real_dataset`), and another test that runs all of the configs in BUILDER_CONFIGS (`test_load_real_dataset_all_configs`).
true
2020-08-24T12:39:34Z
2020-08-25T09:20:45Z
2020-08-25T09:20:44Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/527
2020-08-25T09:20:44Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/527
true
[]
684,615,455
526
Returning None instead of "python" if dataset is unformatted
closed
Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. This is slightly breaking, as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python` (see the sketch below).
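A quick sketch of the new round-trip behavior (a minimal illustration with made-up values):

```python
import nlp

dataset = nlp.Dataset.from_dict({"a": [1, 2, 3]})
assert dataset.format["type"] is None            # previously this was "python"
dataset.set_format(type=dataset.format["type"])  # now a valid no-op round-trip
```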
true
2020-08-24T12:10:35Z
2020-08-24T12:50:43Z
2020-08-24T12:50:42Z
TevenLeScao
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/526
2020-08-24T12:50:42Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/526
true
[ "We have to change the tests to expect `None` instead of `python` then", "Merging!" ]
683,875,483
525
wmt download speed example
closed
Continuing from the slack 1.0 roadmap thread with @lhoestq, I realized the slow downloads are only a thing sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine. ``` import nlp nlp.load_dataset('wmt16', 'de-en') ``` Downloads at 49.1 KB/s. Whereas ``` pip install gdown # download from google drive !gdown https://drive.google.com/uc?id=1iO7um-HWoNoRKDtw27YUSgyeubn9uXqj ``` downloads at 127 MB/s. (The file is a copy of wmt-en-de raw.) ``` nlp.load_dataset('wmt16', 'ro-en') ``` goes at 27 MB/s, much faster. If we wget the same data from S3, the download speed is the same, but the file is ¼ the size: ``` wget https://s3.amazonaws.com/datasets.huggingface.co/translation/wmt_en_ro_packed_200_rand.tgz ``` Finally, ``` nlp.load_dataset('wmt19', 'zh-en') ``` starts fast, but is broken. (duplicate of #493)
true
2020-08-21T23:29:06Z
2022-10-04T17:45:39Z
2022-10-04T17:45:39Z
sshleifer
CONTRIBUTOR
null
null
8
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/525
false
[ "Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r...
683,686,359
524
Some docs are missing parameter names
closed
See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings; maybe it's an old docstring format that doesn't work with the current Sphinx version.
true
2020-08-21T16:47:34Z
2020-08-25T09:04:03Z
2020-08-25T09:04:03Z
jarednielsen
CONTRIBUTOR
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/524
false
[ "Indeed, good catch!" ]
682,573,232
523
Speed up Tokenization by optimizing cast_to_python_objects
closed
I changed how `cast_to_python_objects` works to make it faster. It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively. To avoid iterating over possibly long lists, it first checks if the first element that is not None has to be cast. If the first element needs to be cast, then all the elements of the list will be cast; otherwise they'll stay the same. This trick makes it possible to cast objects that contain tokenizer outputs without iterating over every single token, for example. Speed improvement: ```python import transformers import nlp tok = transformers.BertTokenizerFast.from_pretrained("bert-base-uncased") txt = ["a " * 512] * 1000 dataset = nlp.Dataset.from_dict({"txt": txt}) # Tokenization using .map is now faster. Previously it was taking 3.5s %time _ = dataset.map(lambda x: tok(x["txt"]), batched=True, load_from_cache_file=False) # 450ms # for comparison %time _ = tok(txt) # 280ms ```
true
2020-08-20T09:42:02Z
2020-08-24T08:54:15Z
2020-08-24T08:54:14Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/523
2020-08-24T08:54:14Z
1
1
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/523
true
[ "I took your comments into account and added tests for `cast_to_python_objects`" ]
682,478,833
522
dictionnary typo in docs
closed
In many places, dictionary is spelled dictionnary; not sure if it's on purpose or not. Fixed in this pr: https://github.com/huggingface/nlp/pull/521
true
2020-08-20T07:11:05Z
2020-08-20T07:52:14Z
2020-08-20T07:52:13Z
yonigottesman
CONTRIBUTOR
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/522
false
[ "Thanks!" ]
682,477,648
521
Fix dictionnary (dictionary) typo
closed
This error happens many times; I'm thinking maybe it's spelled like this on purpose?
true
2020-08-20T07:09:02Z
2020-08-20T07:52:04Z
2020-08-20T07:52:04Z
yonigottesman
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/521
2020-08-20T07:52:04Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/521
true
[ "Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :)" ]
682,264,839
520
Transform references for sacrebleu
closed
Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and references. If one uses a more standard format where predictions and references are lists of the same length, sacrebleu throws an error. This PR transforms reference data from a more standard format into the [unusual format](https://github.com/mjpost/sacreBLEU#using-sacrebleu-from-python) expected by sacrebleu.
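To make the expected shapes concrete, here's a hedged sketch of that transposition (illustrative values only):

```python
# Standard format: one list of reference strings per prediction.
predictions = ["the cat sat", "hello world"]
references = [["the cat sat", "a cat sat"], ["hello world", "hi world"]]

# sacrebleu format: one list per reference *position*, each of len(predictions).
transformed = [
    [refs[i] for refs in references] for i in range(len(references[0]))
]
# -> [["the cat sat", "hello world"], ["a cat sat", "hi world"]]
```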
true
2020-08-20T00:26:55Z
2020-08-20T09:30:54Z
2020-08-20T09:30:53Z
jbragg
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/520
2020-08-20T09:30:53Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/520
true
[ "I think I agree @lhoestq so I pushed a change.\r\nThanks for your work on the library!" ]
682,193,882
519
[BUG] Metrics throwing new error on master since 0.4.0
closed
The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu. Wasn't happening on 0.4.0 but happening now on master. ``` File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute self.add_batch(predictions=predictions, references=references) File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 242, in add_batch batch = self.info.features.encode_batch(batch) File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in encode_batch encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column] File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 527, in <listcomp> encoded_batch[key] = [encode_nested_example(self[key], cast_to_python_objects(obj)) for obj in column] File "/usr/local/lib/python3.7/site-packages/nlp/features.py", line 456, in encode_nested_example raise ValueError("Got a string but expected a list instead: '{}'".format(obj)) ```
true
2020-08-19T21:29:15Z
2022-06-02T16:41:01Z
2020-08-19T22:04:40Z
jbragg
CONTRIBUTOR
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/519
false
[ "Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric", "Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 " ]
682,131,165
518
[METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics
closed
Move the acquisition of the filelock to a later stage during metrics processing so it can be pickled/cloudpickled after instantiation. Also add some tests on pickling, concurrent but separate metric instances, and concurrent and distributed metric instances. This significantly changes the caching behavior for the metrics: - if the metric is used in a non-distributed setup (most common case) we try to find a free cache file using UUID instead of asking for an `experiment_id` if we can't lock the cache file; this allows several instances of the same metric to be used in parallel. - if the metric is used in a distributed setup we ask for an `experiment_id` if we can't lock the cache file (because all the nodes need to have related cache file names for the final sync). - after the computation, we free the locks and delete all the cache files. Breaking: Some arguments for Metrics initialization have been removed for simplicity (`version`...) and some have been renamed for consistency with the rest of the library (`in_memory` => `keep_in_memory`). Also remove the `_has_transformers` detection in utils to avoid importing transformers every time during loading.
true
2020-08-19T19:43:08Z
2020-08-24T16:01:40Z
2020-08-24T16:01:39Z
thomwolf
MEMBER
https://github.com/huggingface/datasets/pull/518
2020-08-24T16:01:39Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/518
true
[ "(test failure is unrelated)", "As discussed with @thomwolf merging since the hyperparameter-search has been merged in transformers." ]
681,896,944
517
add MLDoc dataset
open
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset. - Here's a link to the GitHub: https://github.com/facebookresearch/MLDoc - and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf Looks like the dataset contains news stories in multiple languages that can be classified into four hierarchical groups: CCAT (Corporate/Industrial), ECAT (Economics), GCAT (Government/Social) and MCAT (Markets). There are 13 languages: Dutch, French, German, Chinese, Japanese, Russian, Portuguese, Spanish, Latin American Spanish, Italian, Danish, Norwegian, and Swedish
true
2020-08-19T14:41:59Z
2021-08-03T05:59:33Z
null
jxmorris12
CONTRIBUTOR
null
null
2
4
4
0
null
false
[ "dataset request" ]
https://github.com/huggingface/datasets/issues/517
false
[ "Any updates on this?", "This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies." ]
681,846,032
516
[Breaking] Rename formated to formatted
closed
`formated` is not correct but `formatted` is
true
2020-08-19T13:35:23Z
2020-08-20T08:41:17Z
2020-08-20T08:41:16Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/516
2020-08-20T08:41:16Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/516
true
[]
681,845,619
515
Fix batched map for formatted dataset
closed
If you had a dataset formatted as numpy, for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (e.g. a batch of length 999 instead of 1000). This happened during the creation of the `pa.Table`, since columns had different lengths.
true
2020-08-19T13:34:50Z
2020-08-20T20:30:43Z
2020-08-20T20:30:42Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/515
2020-08-20T20:30:42Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/515
true
[]
681,256,348
514
dataset.shuffle(keep_in_memory=True) is never allowed
closed
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)` The commit added the lines ```python # lines 994-996 in src/nlp/arrow_dataset.py assert ( not keep_in_memory or cache_file_name is None ), "Please use either `keep_in_memory` or `cache_file_name` but not both." ``` This affects both `shuffle()`, as `select()` is a sub-routine, and `map()`, which has the same check. I'd love to fix this myself, but I'm unsure what the intention of the assert is, given the rest of the logic in the function concerning `cache_file_name` and `keep_in_memory`.
true
2020-08-18T18:47:40Z
2022-10-10T12:21:58Z
2022-10-10T12:21:58Z
vegarab
CONTRIBUTOR
null
null
10
0
0
0
null
false
[ "good first issue", "hacktoberfest" ]
https://github.com/huggingface/datasets/issues/514
false
[ "This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ", "Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_me...
681,215,612
513
[speedup] Use indices mappings instead of deepcopy for all the samples reordering methods
closed
Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`). Added a `flatten_indices` method, which copies the dataset to a new table to remove the indices mapping, with tests. All the samples re-ordering/selection methods should be a lot faster (see the usage sketch below). The downside is that iterating over very large batches of the dataset might be a little slower when we have changed the order of the samples, since in that case we use `pyarrow.Table.take` instead of `pyarrow.Table.slice`. There is no free lunch, but the speed of iterating over the dataset is rarely the bottleneck. *Backward breaking change*: the `cache_file_name` argument in all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`) is now called `indices_cache_file_name`, on purpose, to make it explicit to the user that this caching file is used for caching the indices mapping and not the dataset itself.
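Usage-wise, the idea looks roughly like this (a sketch; `dataset` stands for any loaded `nlp.Dataset`):

```python
shuffled = dataset.shuffle(seed=42)    # fast: only an indices mapping is written
subset = shuffled.select(range(100))   # composes with the existing mapping
contiguous = subset.flatten_indices()  # copies rows into a new contiguous table
```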
true
2020-08-18T17:36:02Z
2020-08-28T08:41:51Z
2020-08-28T08:41:50Z
thomwolf
MEMBER
https://github.com/huggingface/datasets/pull/513
2020-08-28T08:41:50Z
4
1
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/513
true
[ "Ok I fixed `concatenate_datasets` and added tests\r\nFeel free to merge if it's good for you @thomwolf ", "Ok, adding some benchmarks for map/filters and then I'll merge", "Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n```\r\n/__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarnin...
681,137,164
512
Delete CONTRIBUTING.md
closed
true
2020-08-18T15:33:25Z
2020-08-18T15:48:21Z
2020-08-18T15:39:07Z
ChenZehong13
NONE
https://github.com/huggingface/datasets/pull/512
null
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/512
true
[ "😱", "Yeah, this is spammy behavior. I've reported the user handle." ]
681,055,553
511
dataset.shuffle() and select() resets format. Intended?
closed
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight? When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save(dataset, "dataset.pt")`. Later I load the dataset object using `torch.load("dataset.pt")`, which preserves the format defined before saving. I do shuffling and selecting (for controlling dataset size) after loading the data from the .pt file, as it's convenient whenever you train multiple models with varying sizes of the same dataset. The obvious workaround for this is to set the format again after using `dataset.select()` or `dataset.shuffle()`. _I guess this is more of a discussion on the design philosophy of the functions. Please let me know if this is not the right channel for these kinds of discussions or if they are not wanted at all!_ #### How to reproduce: ```python import nlp from transformers import T5Tokenizer tokenizer = T5Tokenizer.from_pretrained("t5-base") def create_features(batch): context_encoding = tokenizer.batch_encode_plus(batch["context"]) return {"input_ids": context_encoding["input_ids"]} dataset = nlp.load_dataset("cosmos_qa", split="train") dataset = dataset.map(create_features, batched=True) dataset.set_format(type="torch", columns=["input_ids"]) dataset[0] # {'input_ids': tensor([ 1804, 3525, 1602, ... 0, 0])} dataset = dataset.shuffle() dataset[0] # {'id': '3Q9(...)20', 'context': "Good Old War an (...) play ?', 'answer0': 'None of the above choices .', 'answer1': 'This person likes music and likes to see the show , they will see other bands play .', (...) 'input_ids': [1804, 3525, 1602, ... , 0, 0]} ```
true
2020-08-18T13:46:01Z
2020-09-14T08:45:38Z
2020-09-14T08:45:38Z
vegarab
CONTRIBUTOR
null
null
5
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/511
false
[ "Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table a...
680,823,644
510
Version of numpy to use the library
closed
Thank you so much for your excellent work! I would like to use the nlp library in my project. While importing nlp, I am receiving the following error: `AttributeError: module 'numpy.random' has no attribute 'Generator'` The numpy version in my project is 1.16.0. May I ask which numpy version is required for the nlp library? Thanks in advance.
true
2020-08-18T08:59:13Z
2020-08-19T18:35:56Z
2020-08-19T18:35:56Z
isspek
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/510
false
[ "Seems like this method was added in 1.17. I'll add a requirement on this.", "Thank you so much. After upgrading the numpy library, it worked." ]
679,711,585
509
Converting TensorFlow dataset example
closed
Hi, I want to use TensorFlow datasets with this repo. I noticed you made a conversion script; can you give a simple example of using it? Thanks
true
2020-08-16T08:05:20Z
2021-08-03T06:01:18Z
2021-08-03T06:01:17Z
saareliad
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/509
false
[ "Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it w...
679,705,734
508
TypeError: Receiver() takes no arguments
closed
I am trying to load a wikipedia data set ``` import nlp from nlp import load_dataset dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner') #dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner') ``` This fails in the apache beam runner. ``` Traceback (most recent call last): File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module> dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner') File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare self._download_and_prepare( File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare pipeline_results = pipeline.run() File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run return self.runner.run_pipeline(self, self._options) .... File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded self.output(decoded_value) File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value) File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast return type(*args) TypeError: Receiver() takes no arguments ``` This is run on a Windows 10 machine with python 3.8. I get the same error loading the swedish wikipedia dump.
true
2020-08-16T07:18:16Z
2020-09-01T14:53:33Z
2020-09-01T14:49:03Z
sebastiantomac
NONE
null
null
5
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/508
false
[ "Which version of Apache Beam do you have (can you copy your full environment info here)?", "apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ", "Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a du...
679,400,683
507
Errors when I use
closed
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors. I am using **transformers 3.0.2**. Code: ``` from transformers.pipelines import pipeline from transformers.modeling_auto import AutoModelForQuestionAnswering from transformers.tokenization_auto import AutoTokenizer model_name = "deepset/roberta-base-squad2" nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) QA_input = { 'question': 'Why is model conversion important?', 'context': 'The option to convert models between FARM and transformers gives freedom to the user and let people easily switch between frameworks.' } res = nlp(QA_input) ``` The errors are: ``` res = nlp(QA_input) File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in __call__ for s, e, score in zip(starts, ends, scores) File ".local/lib/python3.6/site-packages/transformers/pipelines.py", line 1316, in <listcomp> for s, e, score in zip(starts, ends, scores) KeyError: 0 ```
true
2020-08-14T21:03:57Z
2020-08-14T21:39:10Z
2020-08-14T21:39:10Z
mchari
NONE
null
null
1
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/507
false
[ "Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers." ]
679,164,788
506
fix dataset.map for function without outputs
closed
As noticed in #505, giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable. I fixed that and added tests. Thanks @avloss for reporting
true
2020-08-14T13:40:22Z
2020-08-17T11:24:39Z
2020-08-17T11:24:38Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/506
2020-08-17T11:24:38Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/506
true
[]
678,791,400
505
tmp_file referenced before assignment
closed
Just learning about this library - so I might not have set up all the flags correctly, but I was getting this error about "tmp_file".
true
2020-08-13T23:27:33Z
2020-08-14T13:42:46Z
2020-08-14T13:42:46Z
avloss
NONE
https://github.com/huggingface/datasets/pull/505
null
2
1
1
0
false
false
[]
https://github.com/huggingface/datasets/pull/505
true
[ "Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)", "I'm closing this one as I created the other PR." ]
678,756,211
504
Added downloading to Hyperpartisan news detection
closed
Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than requiring a manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel ! Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `default` in this test. Might be related to #474
true
2020-08-13T21:53:46Z
2020-08-27T08:18:41Z
2020-08-27T08:18:41Z
ghomasHudson
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/504
2020-08-27T08:18:41Z
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/504
true
[ "Thank you @ghomasHudson for making our dataset available! This is great!", "The test passes since #527 :)" ]
678,726,538
503
CompGuessWhat?! 0.2.0
closed
We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset.
true
2020-08-13T20:51:26Z
2020-10-21T06:54:29Z
2020-10-21T06:54:29Z
aleSuglia
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/503
null
20
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/503
true
[ "I don't see any significant change in the dataset script (except the version value update), can you check that again please ?", "Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ?", "Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap!",...
678,546,070
502
Fix tokenizers caching
closed
I've found some cases where the caching didn't work properly for tokenizers: 1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions 2. if a tokenizer has a cache attribute that changes after some calls, then the caching would not work after cache updates 3. if a tokenizer is used inside a function, the caching of this function would result in the same cache file for different tokenizers 4. if the `unique_no_split_tokens` attribute is not the same across sessions (after loading a tokenizer) then the caching could be inconsistent To fix that, this is what I did: 1. register a specific `save_regex` function for pickle that makes regex dumps deterministic (sketched below) 2. ignore the cache attribute of some tokenizers before dumping 3. enable recursive dump by default for all dumps 4. make `unique_no_split_tokens` deterministic in https://github.com/huggingface/transformers/pull/6461 I also added tests to make sure that tokenizers hashing works as expected. In the future we should find a way to test if hashing also works across sessions (maybe using two CI jobs ? or by hardcoding a tokenizer's hash ?)
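The regex fix can be sketched with `copyreg` (an illustration of the idea only, not the exact implementation in this PR):

```python
import copyreg
import re

def _save_regex(pattern):
    # Rebuild the regex from its pattern string and flags, so the pickle
    # bytes are identical across sessions and the resulting hash is stable.
    return re.compile, (pattern.pattern, pattern.flags)

copyreg.pickle(type(re.compile("")), _save_regex)
```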
true
2020-08-13T15:53:37Z
2020-08-19T13:37:19Z
2020-08-19T13:37:18Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/502
2020-08-19T13:37:17Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/502
true
[ "This should fix #501 and also the issue you sent me on slack @sgugger ." ]
677,952,893
501
Caching doesn't work for map (non-deterministic)
closed
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it. ```python import nlp import transformers def main(): ds = nlp.load_dataset("reddit", split="train[:500]") tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2") def convert_to_features(example_batch): input_str = example_batch["body"] encodings = tokenizer(input_str, add_special_tokens=True, truncation=True) return encodings ds = ds.map(convert_to_features, batched=True) if __name__ == "__main__": main() ``` Roughly 3/10 times, this example recomputes the tokenization. Is this expected behaviour?
true
2020-08-12T20:20:07Z
2022-08-08T11:02:23Z
2020-08-24T16:34:35Z
wulu473
NONE
null
null
4
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/501
false
[ "Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is because of the regex at `tokenizer.pat` is not well supported by our hashing fun...
677,841,708
500
Use hnsw in wiki_dpr
closed
The HNSW faiss index is much faster than the regular Flat index.
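For reference, building an HNSW index with faiss looks roughly like this (a generic sketch with made-up dimensions and data, not the wiki_dpr-specific code):

```python
import faiss
import numpy as np

d = 768                             # embedding dimension (illustrative)
index = faiss.IndexHNSWFlat(d, 32)  # 32 links per node in the HNSW graph
vectors = np.random.rand(10000, d).astype("float32")
index.add(vectors)
distances, ids = index.search(vectors[:5], 10)  # approximate top-10 neighbors
```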
true
2020-08-12T16:58:07Z
2020-08-20T07:59:19Z
2020-08-20T07:59:18Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/500
2020-08-20T07:59:18Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/500
true
[]
677,709,938
499
Narrativeqa (with full text)
closed
Following the uploading of the full text data in #309, I've added the full text to the narrativeqa dataset. A few notes: - Had some encoding issues using the default `open`, so I am using `open(encoding="latin-1"...`, which seems to fix it. Looks fine. - Can't get the dummy data to work. Currently putting stuff at: ``` dummy |---- 0.0.0 |- dummy_data.zip |-master.zip | |- narrativeqa-master | |- documents.csv | |- qaps.csv | |- third_party ...... | | - narrativeqa_full_text.zip | | - 001.content | | - .... ``` Not sure what I'm messing up here (probably something obvious).
true
2020-08-12T13:49:43Z
2020-12-09T11:21:02Z
2020-12-09T11:21:02Z
ghomasHudson
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/499
null
9
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/499
true
[ "I took a look at the dummy data creation for this dataset.\r\n\r\nMaybe it didn't work on your side might be because `master.zip` and `narrativeqa_full_text.zip` are supposed to be directories and not acutal zip files in the dummy data folder.\r\n\r\nI managed to make it work with this `dummy_data.zip` file:\r\nht...
677,597,479
498
dont use beam fs to save info for local cache dir
closed
If the cache dir is local, then we shouldn't use beam's filesystem to save the dataset info. Fix #490
true
2020-08-12T11:00:00Z
2020-08-14T13:17:21Z
2020-08-14T13:17:20Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/498
2020-08-14T13:17:20Z
0
1
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/498
true
[]
677,057,116
497
skip header in PAWS-X
closed
This should fix #485. I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one). Note that there are new fields in `dataset_infos.json`, introduced in the latest release 0.4.0, corresponding to post-processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (version 0.3.0 can't load these fields). I think I'll change the logic so that `nlp-cli test` doesn't create these fields for datasets with no post-processing.
true
2020-08-11T17:26:25Z
2020-08-19T09:50:02Z
2020-08-19T09:50:01Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/497
2020-08-19T09:50:01Z
0
1
0
1
false
false
[]
https://github.com/huggingface/datasets/pull/497
true
[]
677,016,998
496
fix bad type in overflow check
closed
When writing an arrow file and inferring the features, the overflow check could fail if the first example had a `null` field. This is because we were not using the inferred features to do this check, and we could end up with arrays that don't match because of a type mismatch (`null` vs `string` for example). This should fix #482
true
2020-08-11T16:24:58Z
2020-08-14T13:29:35Z
2020-08-14T13:29:34Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/496
2020-08-14T13:29:34Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/496
true
[]
676,959,289
495
stack vectors in pytorch and tensorflow
closed
When the format of a dataset is set to pytorch or tensorflow, and the dataset has vectors in it, they were not stacked together as tensors when calling `dataset[i:i + batch_size][column]` or `dataset[column]`. I added support for stacked tensors for both pytorch and tensorflow. Ragged tensors are stacked only for tensorflow, as pytorch doesn't support them.
true
2020-08-11T15:12:53Z
2020-08-12T09:30:49Z
2020-08-12T09:30:48Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/495
2020-08-12T09:30:48Z
0
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/495
true
[]
676,886,955
494
Fix numpy stacking
closed
When getting items using a column name as a key, numpy arrays were not stacked. I fixed that and added some tests. There is another issue that still needs to be fixed though: when getting items using a column name as a key, pytorch tensors are not stacked (it outputs a list of tensors). This PR should help to fix this issue.
true
2020-08-11T13:40:30Z
2020-08-11T14:56:50Z
2020-08-11T13:49:52Z
lhoestq
MEMBER
https://github.com/huggingface/datasets/pull/494
2020-08-11T13:49:52Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/494
true
[ "This PR also fixed a bug where numpy arrays were returned instead of pytorch tensors when getting with a clumn as a key." ]
676,527,351
493
Fix wmt zh-en url
closed
I verified that ``` wget https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-zh.tar.gz.00 ``` runs in 2 minutes.
true
2020-08-11T02:14:52Z
2020-08-11T02:22:28Z
2020-08-11T02:22:12Z
sshleifer
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/493
null
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/493
true
[ "this doesn't work. I can decompress the file after download locally." ]
676,495,064
492
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
closed
Here's the code I'm trying to run: ```python dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir) dset_wikipedia.drop(columns=["title"]) dset_wikipedia.features.pop("title") dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir) dset = nlp.concatenate_datasets([dset_wikipedia, dset_books]) ``` This fails because they have different schemas, despite having identical features. ```python assert dset_wikipedia.features == dset_books.features # True assert dset_wikipedia._data.schema == dset_books._data.schema # False ``` The Wikipedia dataset has 'text: string', while the BookCorpus dataset has 'text: string not null'. Currently I hack together a working schema match with the following line, but it would be better if this was handled in Features themselves. ```python dset_wikipedia._data = dset_wikipedia.data.cast(dset_books._data.schema) ```
true
2020-08-11T00:27:46Z
2020-08-26T16:17:19Z
2020-08-26T16:17:19Z
jarednielsen
CONTRIBUTOR
null
null
7
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/492
false
[ "In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.", "Or maybe the assertion comes from elsewhere ?", "I'm using the master...
676,486,275
491
No 0.4.0 release on GitHub
closed
0.4.0 was released on PyPi, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) is still displaying from 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo.
true
2020-08-10T23:59:57Z
2020-08-11T16:50:07Z
2020-08-11T16:50:07Z
jarednielsen
CONTRIBUTOR
null
null
2
1
1
0
null
false
[]
https://github.com/huggingface/datasets/issues/491
false
[ "I did the release on github, and updated the doc :)\r\nSorry for the delay", "Thanks!" ]
676,482,242
490
Loading preprocessed Wikipedia dataset requires apache_beam
closed
Running `nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")` gives an error if apache_beam is not installed, stemming from https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988 This succeeded without the dependency in version 0.3.0. This seems like an unnecessary dependency to process some dataset info if you're using the already-preprocessed version. Could it be removed?
true
2020-08-10T23:46:50Z
2020-08-14T13:17:20Z
2020-08-14T13:17:20Z
jarednielsen
CONTRIBUTOR
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/490
false
[]
676,456,257
489
ug
closed
true
2020-08-10T22:33:03Z
2020-08-10T22:55:14Z
2020-08-10T22:33:40Z
timothyjlaurent
NONE
null
null
2
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/489
false
[ "whoops", "please delete this" ]
676,299,993
488
issues with downloading datasets for wmt16 and wmt19
closed
I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]" ` on master; the currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and now it worked. So it must have been some outdated dependencies that `pip install -e ".[dev]" ` fixed. 2. It was downloading at 60 KB/s - almost 5 hours to get the dataset. It was downloading all pairs and not just the one I asked for. I tried the same code with `wmt19` in parallel and it took a few secs to download and it only fetched data for the requested pair. (but it failed too, see below) 3. My machine crashed, and when I retried I got: ``` Traceback (most recent call last): File "./download.py", line 9, in <module> dataset = nlp.load_dataset('wmt16', 'ru-en') File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 549, in load_dataset download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications, File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 449, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/stas/anaconda3/envs/main/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/builder.py", line 422, in incomplete_dir os.makedirs(tmp_dir) File "/home/stas/anaconda3/envs/main/lib/python3.7/os.py", line 221, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/stas/.cache/huggingface/datasets/wmt16/ru-en/1.0.0/4d8269cdd971ed26984a9c0e4a158e0c7afc8135fac8fb8ee43ceecf38fd422d.incomplete' ``` It can't handle resumes, but it doesn't allow a fresh start either. I had to delete it manually. 4. And finally, when it downloaded the dataset, it then failed to fetch the metrics: ``` Traceback (most recent call last): File "./download.py", line 15, in <module> metric = nlp.load_metric('wmt16') File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 442, in load_metric module_path, hash = prepare_module(path, download_config=download_config, dataset=False) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/load.py", line 258, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 198, in cached_path local_files_only=download_config.local_files_only, File "/mnt/nvme1/code/huggingface/nlp-master/src/nlp/utils/file_utils.py", line 356, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://s3.amazonaws.com/datasets.huggingface.co/nlp/metrics/wmt16/wmt16.py ``` 5. If I run the same code with `wmt19`, it fails too: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz ```
true
2020-08-10T17:32:51Z
2022-10-04T17:46:59Z
2022-10-04T17:46:58Z
stas00
CONTRIBUTOR
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/488
false
[ "I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar....
676,143,029
487
Fix elasticsearch result ids returning as strings
closed
I am using the latest elasticsearch binary and master of nlp. For me, elasticsearch searches failed because the resultant "id_" values returned for searches are strings, but our library assumes them to be integers.
true
2020-08-10T13:37:11Z
2020-08-31T10:42:46Z
2020-08-31T10:42:46Z
sai-prasanna
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/487
2020-08-31T10:42:46Z
1
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/487
true
[ "It looks like you need to rebase from master to fix the CI. Could you do that please ?" ]
675,649,034
486
Bookcorpus data contains pretokenized text
closed
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's wordpiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end quotes, respectively. In my own projects, I just run the data through NLTK's TreebankWordDetokenizer to reverse the tokenization (as best as possible). I think it would be beneficial to apply this transformation directly on your remote cached copy of the dataset. If you choose to do so, I would also suggest using my fork of NLTK that fixes several bugs in their detokenizer (I've opened a pull request, but they've yet to respond): https://github.com/nltk/nltk/pull/2575
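For reference, a rough sketch of the client-side workaround described above (the `"text"` column name is an assumption, as is that NLTK is installed):

```python
import nlp
from nltk.tokenize.treebank import TreebankWordDetokenizer

detokenizer = TreebankWordDetokenizer()

def detokenize(example):
    # Approximately reverse the Treebank tokenization baked into the corpus.
    example["text"] = detokenizer.detokenize(example["text"].split())
    return example

dataset = nlp.load_dataset("bookcorpus", split="train")
dataset = dataset.map(detokenize)
```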
true
2020-08-09T06:53:24Z
2022-10-04T17:44:33Z
2022-10-04T17:44:33Z
orsharir
CONTRIBUTOR
null
null
8
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/486
false
[ "Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do...
675,595,393
485
PAWS dataset first item is header
closed
``` import nlp dataset = nlp.load_dataset('xtreme', 'PAWS-X.en') dataset['test'][0] ``` prints the following ``` {'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'} ``` `dataset['test'][0]` should probably be the first item in the dataset, not just a dictionary mapping the column names to themselves. The loader probably just needs to ignore the first row of the file by default, or something like that.
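Until that's fixed in the loader, a quick client-side workaround sketch (assuming `Dataset.filter` is available in your version) is to drop the stray header row:

```python
import nlp

dataset = nlp.load_dataset("xtreme", "PAWS-X.en")

# Drop rows whose values are just the column names, i.e. the stray header row.
test = dataset["test"].filter(lambda example: example["sentence1"] != "sentence1")
```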
true
2020-08-08T22:05:25Z
2020-08-19T09:50:01Z
2020-08-19T09:50:01Z
jxmorris12
CONTRIBUTOR
null
null
0
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/485
false
[]
675,088,983
484
update mirror for RT dataset
closed
true
2020-08-07T15:25:45Z
2020-08-24T13:33:37Z
2020-08-24T13:33:37Z
jxmorris12
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/484
2020-08-24T13:33:37Z
4
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/484
true
[ "Thanks for adding this mirror link :)\r\n\r\nCould you run the following command to update the json file `dataset_infos.json` used to verify the integrity of the downloaded file ?\r\n\r\n```\r\nnlp-cli test ./datasets/rotten_tomatoes --save_infos --ignore_verifications\r\n```", "done! @lhoestq ", "the build_do...
675,080,694
483
rotten tomatoes movie review dataset taken down
closed
In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the movie review dataset (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's no longer downloadable.
true
2020-08-07T15:12:01Z
2020-09-08T09:36:34Z
2020-09-08T09:36:33Z
jxmorris12
CONTRIBUTOR
null
null
3
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/483
false
[ "found a mirror: https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz", "fixed in #484 ", "Closing this one. Thanks again @jxmorris12 for taking care of this :)" ]
674,851,147
482
Bugs : dataset.map() is frozen on ELI5
closed
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 for training with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). However, when I run `dataset.map()` on ELI5 to prepare `input_text, target_text`, `dataset.map` **freezes** within the first few hundred examples. On the contrary, this works totally fine on SQUAD (80,000 examples). Both `nlp` versions 0.3.0 and 0.4.0 produce the frozen process, and trying various `pyarrow` versions (0.16.0 / 0.17.0 / 1.0.0) gives the same frozen process. Reproducible code can be found in [this colab notebook](https://colab.research.google.com/drive/14wttOTv3ky74B_c0kv5WrbgQjCF2fYQk?usp=sharing), where I also show that the same mapping function works fine on SQUAD, so the problem is likely due to ELI5 somehow. ---------------------------------------- **More Info :** instead of `map`, if I run a `for` loop and apply the function myself, there's no error and it finishes within 10 seconds. However, an `nlp` dataset is immutable (I couldn't manually assign a new key-value to the `dataset` object). I also notice that SQUAD texts are quite clean while ELI5 texts contain many special characters; not sure if this is the cause?
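To illustrate the last point, here is roughly what the manual loop looks like (the split name and ELI5 field names here are illustrative, not confirmed against the actual schema); it finishes in seconds, which suggests the hang is inside `.map()`'s batching rather than the mapping function itself:

```python
import nlp

# Split name may differ depending on the ELI5 config in your version.
dataset = nlp.load_dataset("eli5", split="train_eli5")

inputs, targets = [], []
for example in dataset:
    # Field names are illustrative; adapt to the actual ELI5 schema.
    inputs.append("question: " + example["title"])
    targets.append(example["answers"]["text"][0])
```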
true
2020-08-07T08:23:35Z
2023-04-06T09:39:59Z
2020-08-11T23:55:15Z
ratthachat
NONE
null
null
8
0
0
0
null
false
[]
https://github.com/huggingface/datasets/issues/482
false
[ "This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look", "I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip in...
674,567,389
481
Apply utf-8 encoding to all datasets
closed
## Description This PR applies utf-8 encoding for all instances of `with open(...) as f` to all Python files in `datasets/`. As suggested by @thomwolf in #468 , we use regular expressions and the following function ```python def apply_encoding_on_file_open(filepath: str): """Apply UTF-8 encoding for all instances where a non-binary file is opened.""" with open(filepath, 'r', encoding='utf-8') as input_file: regexp = re.compile(r"(?!.*\b(?:encoding|rb|w|wb|w+|wb+|ab|ab+)\b)(?<=\s)(open)\((.*)\)") input_text = input_file.read() match = regexp.search(input_text) if match: output = regexp.sub(lambda m: m.group()[:-1]+', encoding="utf-8")', input_text) with open(filepath, 'w', encoding='utf-8') as output_file: output_file.write(output) ``` to perform the replacement. Note: 1. I excluded all _**binary files**_ from the search since it's possible some objects are opened for which the encoding doesn't make sense. Please correct me if I'm wrong and I'll tweak the regexp accordingly 2. There were two edge cases where the regexp failed (e.g. two `open` instances on a single line), but I decided to just fix these manually in the interest of time. 3. I only applied the replacement to files in `datasets/`. Let me know if this should be extended to other places like `metrics/` 4. I have implemented a unit test that should catch missing encodings in future CI runs Closes #468 and possibly #347
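For context, a small usage sketch of how the function above can be applied to the same scope this PR targets (all Python files under `datasets/`):

```python
import glob

# Apply the encoding fix to every dataset script; apply_encoding_on_file_open
# is the function defined above.
for filepath in glob.glob("datasets/**/*.py", recursive=True):
    apply_encoding_on_file_open(filepath)
```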
true
2020-08-06T20:02:09Z
2020-08-20T08:16:08Z
2020-08-20T08:16:08Z
lewtun
MEMBER
https://github.com/huggingface/datasets/pull/481
2020-08-20T08:16:08Z
6
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/481
true
[ "Not sure why the AWS test is failing - perhaps I made too many concurrent CI builds 😒. Can someone please rerun the CI to check the error is not on my end?", "I pushed an improved docstring and the unit tests now pass, which suggests the previous failure on AWS was simply a timeout error. \r\n\r\nFor some reaso...
674,245,959
480
Column indexing hotfix
closed
As observed for example in #469, currently `__getitem__` does not convert the data to the dataset format when indexing by column. This is a hotfix that imitates the functional 0.3.0 code. In the future it'd probably be nice to have a test for it.
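A minimal sketch of the behavior this hotfix restores (the dataset, format type, and column choices are illustrative):

```python
import nlp

dataset = nlp.load_dataset("glue", "mrpc", split="train")
dataset.set_format(type="numpy", columns=["label"])

# With the hotfix, indexing by column name respects the set format,
# so this comes back as a numpy array rather than a plain Python list.
labels = dataset["label"]
```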
true
2020-08-06T11:37:05Z
2023-09-24T09:49:33Z
2020-08-12T08:36:10Z
TevenLeScao
CONTRIBUTOR
https://github.com/huggingface/datasets/pull/480
null
2
0
0
0
false
false
[]
https://github.com/huggingface/datasets/pull/480
true
[ "Looks good to me as well but we'll want to add a test indeed.\r\nYou can add one if you have time @TevenLeScao.\r\nOtherwise, we'll do it when we are back with Quentin. ", "I fixed it in #494 " ]