| Column | Type | Range / values |
|---|---|---|
| id | int64 | 599M – 3.29B |
| url | string | lengths 58–61 |
| html_url | string | lengths 46–51 |
| number | int64 | 1 – 7.72k |
| title | string | lengths 1–290 |
| state | string | 2 classes |
| comments | int64 | 0 – 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 – 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 – 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 – 2025-08-01 05:15:45 |
| user_login | string | lengths 3–26 |
| labels | list | lengths 0–4 |
| body | string | lengths 0–228k |
| is_pull_request | bool | 2 classes |
705,672,208
https://api.github.com/repos/huggingface/datasets/issues/655
https://github.com/huggingface/datasets/pull/655
655
added Winogrande debiased subset
closed
2
2020-09-21T14:51:08
2020-09-21T16:20:40
2020-09-21T16:16:04
TevenLeScao
[]
The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it.
true
705,511,058
https://api.github.com/repos/huggingface/datasets/issues/654
https://github.com/huggingface/datasets/pull/654
654
Allow empty inputs in metrics
closed
0
2020-09-21T11:26:36
2020-10-06T03:51:48
2020-09-21T16:13:38
lhoestq
[]
There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute.
true
705,482,391
https://api.github.com/repos/huggingface/datasets/issues/653
https://github.com/huggingface/datasets/pull/653
653
handle data alteration when trying type
closed
0
2020-09-21T10:41:49
2020-09-21T16:13:06
2020-09-21T16:13:05
lhoestq
[]
Fix #649

The bug came from the type inference, which didn't handle a weird case in PyArrow. Indeed, this code runs without error but alters the data in arrow:

```python
import pyarrow as pa

type = pa.struct({"a": pa.struct({"b": pa.string()})})
array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}] * 10, type=type)
print(array_with_altered_data[0].as_py())
# {'a': {'b': 'foo'}} -> the sub-field "c" is missing
```

(I don't know if this is intended in pyarrow, tbh.) We didn't take this case into account during type inference: by default it kept the old features, which could alter the data. To fix that, I added a line that checks that the first element of the array is not altered.
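For illustration, a minimal sketch of that kind of guard (a hypothetical helper, not the actual patch) could look like this:

```python
import pyarrow as pa

def array_without_alteration(records, typ):
    """Hypothetical guard: build the array, then verify the first element
    survives the cast unchanged, in the spirit of the fix described above."""
    arr = pa.array(records, type=typ)
    if len(records) > 0 and arr[0].as_py() != records[0]:
        raise TypeError("inferred type would silently alter the data")
    return arr
```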
true
705,390,850
https://api.github.com/repos/huggingface/datasets/issues/652
https://github.com/huggingface/datasets/pull/652
652
handle connection error in download_prepared_from_hf_gcs
closed
0
2020-09-21T08:21:11
2020-09-21T08:28:43
2020-09-21T08:28:42
lhoestq
[]
Fix #647
true
705,212,034
https://api.github.com/repos/huggingface/datasets/issues/651
https://github.com/huggingface/datasets/issues/651
651
Problem with JSON dataset format
open
2
2020-09-20T23:57:14
2020-09-21T12:14:24
null
vikigenius
[]
I have a local json dataset with the following form:

```
{
  'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
  'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
  ...
  'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
```

Note that instead of a list of records it's basically a dictionary of key-value pairs, with the keys being the record ids and the values being the corresponding records. Reading this with json:

```python
data = datasets.load_dataset('json', data_files='path_to_local.json')
```

throws an error and asks me to choose a field. What's the right way to handle this?
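One possible workaround (my assumption, not an official answer) is to flatten the mapping into newline-delimited records before loading:

```python
import json

# Hypothetical pre-processing: turn the id -> record mapping into one JSON
# object per line, the shape the json loader expects.
with open("path_to_local.json") as f:
    data = json.load(f)

with open("records.jsonl", "w") as f:
    for record_id, record in data.items():
        f.write(json.dumps({"id": record_id, **record}) + "\n")
```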
false
704,861,844
https://api.github.com/repos/huggingface/datasets/issues/650
https://github.com/huggingface/datasets/issues/650
650
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
closed
4
2020-09-19T11:07:03
2020-09-22T11:54:10
2020-09-22T11:54:09
richarddwang
[]
Hi, I recently wanted to add a dataset whose source data looks like this:

```
openwebtext.tar.xz
|__ openwebtext
    |__ subset000.xz
    |   |__ ....txt
    |   |__ ....txt
    |   ...
    |__ subset001.xz
    ...
```

So I wrote `openwebtext.py` like this:

```python
def _split_generators(self, dl_manager):
    dl_dir = dl_manager.download_and_extract(_URL)
    owt_dir = os.path.join(dl_dir, 'openwebtext')
    subset_xzs = [
        os.path.join(owt_dir, file_name)
        for file_name in os.listdir(owt_dir)
        if file_name.endswith('xz')  # filter out ...xz.lock
    ]
    ex_dirs = dl_manager.extract(subset_xzs, num_proc=round(os.cpu_count() * 0.75))
    nested_txt_files = [
        [
            os.path.join(ex_dir, txt_file_name)
            for txt_file_name in os.listdir(ex_dir)
            if txt_file_name.endswith('txt')
        ]
        for ex_dir in ex_dirs
    ]
    txt_files = chain(*nested_txt_files)
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"txt_files": txt_files},
        ),
    ]
```

All went well: I can load and use the real openwebtext, except when I try to test with dummy data. The problem is that `MockDownloadManager.extract` does nothing, so `ex_dirs = dl_manager.extract(subset_xzs)` won't decompress the `subset_xxx.xz` files for me. What should I do? Or could you modify `MockDownloadManager` to behave like a real `DownloadManager`?
false
704,838,415
https://api.github.com/repos/huggingface/datasets/issues/649
https://github.com/huggingface/datasets/issues/649
649
Inconsistent behavior in map
closed
1
2020-09-19T08:41:12
2020-09-21T16:13:05
2020-09-21T16:13:05
krandiash
[ "bug" ]
I'm observing inconsistent behavior when applying `.map()`. This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem:

```python
import datasets

# Dataset with a single feature called 'field' consisting of two examples
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
print(dataset[0])
# {'field': 'a'}

# Map this dataset to create another feature called 'otherfield', which is a
# dictionary containing a key called 'capital'
dataset = dataset.map(lambda example: {'otherfield': {'capital': example['field'].capitalize()}})
print(dataset[0])
# output is okay: {'field': 'a', 'otherfield': {'capital': 'A'}}

# Now I want to map again to modify 'otherfield', by adding another key called
# 'append_x' to the dictionary under 'otherfield'
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x'}})[0])
# printing out the first example after applying the map shows that the new key
# 'append_x' doesn't get added; it also messes up the value stored at 'capital'
# {'field': 'a', 'otherfield': {'capital': None}}

# Instead, I try to do the same thing by using a different mapped fn
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}})[0])
# this preserves the value under capital, but still no 'append_x'
# {'field': 'a', 'otherfield': {'capital': 'A'}}

# Instead, I try to pass 'otherfield' to remove_columns
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['otherfield']['capital']}}, remove_columns=['otherfield'])[0])
# this still doesn't fix the problem
# {'field': 'a', 'otherfield': {'capital': 'A'}}

# Alternately, here's what happens if I just directly map both 'capital' and
# 'append_x' on a fresh dataset.
dataset = datasets.Dataset.from_dict({'field': ['a', 'b']})
print(dataset.map(lambda example: {'otherfield': {'append_x': example['field'] + 'x', 'capital': example['field'].capitalize()}})[0])
# This looks good!
# {'field': 'a', 'otherfield': {'append_x': 'ax', 'capital': 'A'}}
```

This might be a new issue, because I didn't see this behavior in the `nlp` library. Any help is appreciated!
false
704,753,123
https://api.github.com/repos/huggingface/datasets/issues/648
https://github.com/huggingface/datasets/issues/648
648
offset overflow when multiprocessing batched map on large datasets.
closed
6
2020-09-19T02:15:11
2025-06-17T12:56:07
2020-09-19T16:46:31
richarddwang
[ "bug" ]
It only happens with "multiprocessing" + "batched" + "large dataset" at the same time.

```python
def bprocess(examples):
    examples['len'] = []
    for text in examples['text']:
        examples['len'].append(len(text))
    return examples

wiki.map(bprocess, batched=True, num_proc=8)
```

```
---------------------------------------------------------------------------
RemoteTraceback                           Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 153, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/yisiang/datasets/src/datasets/fingerprint.py", line 163, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1486, in _map_single
    batch = self[i : i + batch_size]
  File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 1071, in __getitem__
    format_kwargs=self._format_kwargs,
  File "/home/yisiang/datasets/src/datasets/arrow_dataset.py", line 972, in _getitem
    data_subset = self._data.take(indices_array)
  File "pyarrow/table.pxi", line 1145, in pyarrow.lib.Table.take
  File "/home/yisiang/miniconda3/envs/ml/lib/python3.7/site-packages/pyarrow/compute.py", line 268, in take
    return call_function('take', [data, indices], options)
  File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
"""

The above exception was the direct cause of the following exception:

ArrowInvalid                              Traceback (most recent call last)
<ipython-input> in <module>
     30 owt = datasets.load_dataset('/home/yisiang/datasets/datasets/openwebtext/openwebtext.py', cache_dir='./datasets')['train']
     31 print('load/create data from OpenWebText Corpus for ELECTRA')
---> 32 e_owt = ELECTRAProcessor(owt, apply_cleaning=False).map(cache_file_name=f"electra_owt_{c.max_length}.arrow")
     33 dsets.append(e_owt)
     34

~/Reexamine_Attention/electra_pytorch/_utils/utils.py in map(self, **kwargs)
    126             writer_batch_size=10**4,
    127             num_proc=num_proc,
--> 128             **kwargs
    129         )
    130

~/hugdatafast/hugdatafast/transform.py in my_map(self, *args, **kwargs)
     21     if not cache_file_name.endswith('.arrow'): cache_file_name += '.arrow'
     22     if '/' not in cache_file_name: cache_file_name = os.path.join(self.cache_directory(), cache_file_name)
---> 23     return self.map(*args, cache_file_name=cache_file_name, **kwargs)
     24
     25 @patch

~/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
   1285         logger.info("Spawning {} processes".format(num_proc))
   1286         results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287         transformed_shards = [r.get() for r in results]
   1288         logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
   1289         result = concatenate_datasets(transformed_shards)

~/datasets/src/datasets/arrow_dataset.py in <listcomp>(.0)
   1285         logger.info("Spawning {} processes".format(num_proc))
   1286         results = [pool.apply_async(self.__class__._map_single, kwds=kwds) for kwds in kwds_per_shard]
-> 1287         transformed_shards = [r.get() for r in results]
   1288         logger.info("Concatenating {} shards from multiprocessing".format(num_proc))
   1289         result = concatenate_datasets(transformed_shards)

~/miniconda3/envs/ml/lib/python3.7/multiprocessing/pool.py in get(self, timeout)
    655             return self._value
    656         else:
--> 657             raise self._value
    658
    659     def _set(self, i, obj):

ArrowInvalid: offset overflow while concatenating arrays
```
false
704,734,764
https://api.github.com/repos/huggingface/datasets/issues/647
https://github.com/huggingface/datasets/issues/647
647
Cannot download dataset_info.json
closed
4
2020-09-19T01:35:15
2020-09-21T08:28:42
2020-09-21T08:28:42
chiyuzhang94
[]
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `datasets.load_dataset()` to load data, I get an error like this:

```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text/default-53ee3045f07ba8ca/0.0.0/dataset_info.json
```

I tried to open this link manually, but I cannot access this file. How can I download this file and pass it to `datasets.load_dataset()` manually?

Versions:
- Python 3.7.3
- PyTorch 1.6.0
- TensorFlow 2.3.0
- datasets 1.0.1
false
704,607,371
https://api.github.com/repos/huggingface/datasets/issues/646
https://github.com/huggingface/datasets/pull/646
646
Fix docs typos
closed
0
2020-09-18T19:32:27
2020-09-21T16:30:54
2020-09-21T16:14:12
mariosasko
[]
This PR fixes a few typos in the docs and the error in the code snippet in the set_format section of docs/source/torch_tensorflow.rst. `torch.utils.data.DataLoader` expects padded batches, so it throws an error because it cannot stack the unpadded tensors. If we follow the Quick tour from the docs, where the `truncation=True, padding='max_length'` arguments are added to the tokenizer before passing data to the DataLoader, we can easily fix the issue.
true
704,542,234
https://api.github.com/repos/huggingface/datasets/issues/645
https://github.com/huggingface/datasets/pull/645
645
Don't use take on dataset table in pyarrow 1.0.x
closed
4
2020-09-18T17:31:34
2023-09-19T07:59:19
2020-09-19T16:46:31
lhoestq
[]
Fix #615
true
704,534,501
https://api.github.com/repos/huggingface/datasets/issues/644
https://github.com/huggingface/datasets/pull/644
644
Better windows support
closed
1
2020-09-18T17:17:36
2020-09-25T14:02:30
2020-09-25T14:02:28
lhoestq
[]
There are a few differences between the behavior of Python and PyArrow on Windows. For example, there are restrictions when accessing or deleting files that are open.

Fix #590
true
704,477,164
https://api.github.com/repos/huggingface/datasets/issues/643
https://github.com/huggingface/datasets/issues/643
643
Caching processed dataset at wrong folder
closed
13
2020-09-18T15:41:26
2022-02-16T14:53:29
2022-02-16T14:53:29
mrm8488
[ "bug" ]
Hi guys, I run this on my Colab (PRO):

```python
from datasets import load_dataset

dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')

def encode(examples):
    return tokenizer(examples['text'], truncation=True, padding='max_length')

dataset = dataset.map(encode, batched=True)
```

The file is about 4 GB, so I cannot process it on the Colab HD because there is not enough space. So I decided to mount my Google Drive fs and do it there. The dataset is cached in the right place, but processing it (applying the `encode` function) seems to use a different folder, because the Colab HD starts to grow and it crashes, when it should all happen on the Drive fs. What gets me crazy is that it prints that it is processing/encoding the dataset in the right folder:

```
Testing the mapped function outputs
Testing finished, running the mapping function on the dataset
Caching processed dataset at /content/drive/My Drive/text/default-ad3e69d6242ee916/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/cache-b16341780a59747d.arrow
```
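One sketch of a workaround, continuing the snippet above (the path is an assumption; `cache_file_name` is an existing `map` argument), is to pin the processed arrow file explicitly:

```python
# Pin the processed cache file to Drive so map() writes there directly
# (hypothetical path, not verified against this report).
dataset = dataset.map(
    encode,
    batched=True,
    cache_file_name="/content/drive/My Drive/cache-encoded.arrow",
)
```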
false
704,397,499
https://api.github.com/repos/huggingface/datasets/issues/642
https://github.com/huggingface/datasets/pull/642
642
Rename wnut fields
closed
0
2020-09-18T13:51:31
2020-09-18T17:18:31
2020-09-18T17:18:30
lhoestq
[]
As mentioned in #641, it would be cool to have this dataset follow the naming of the other NER datasets.
true
704,373,940
https://api.github.com/repos/huggingface/datasets/issues/641
https://github.com/huggingface/datasets/pull/641
641
Add Polyglot-NER Dataset
closed
7
2020-09-18T13:21:44
2020-09-20T03:04:43
2020-09-20T03:04:43
joeddav
[]
Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together.
true
704,311,758
https://api.github.com/repos/huggingface/datasets/issues/640
https://github.com/huggingface/datasets/pull/640
640
Make shuffle compatible with temp_seed
closed
0
2020-09-18T11:38:58
2020-09-18T11:47:51
2020-09-18T11:47:50
lhoestq
[]
This code used to return a different dataset at each run:

```python
import datasets as ds

dataset = ...
with ds.temp_seed(42):
    shuffled = dataset.shuffle()
```

Now it returns the same one, since the seed is set.
true
704,217,963
https://api.github.com/repos/huggingface/datasets/issues/639
https://github.com/huggingface/datasets/pull/639
639
Update glue QQP checksum
closed
0
2020-09-18T09:08:15
2020-09-18T11:37:08
2020-09-18T11:37:07
lhoestq
[]
Fix #638
true
704,146,956
https://api.github.com/repos/huggingface/datasets/issues/638
https://github.com/huggingface/datasets/issues/638
638
GLUE/QQP dataset: NonMatchingChecksumError
closed
1
2020-09-18T07:09:10
2020-09-18T11:37:07
2020-09-18T11:37:07
richarddwang
[]
Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle asap. 😚

datasets version: editable install of master at 9/17

`datasets.load_dataset('glue','qqp', cache_dir='./datasets')`

```
Downloading and preparing dataset glue/qqp (download: 57.73 MiB, generated: 107.02 MiB, post-processed: Unknown size, total: 164.75 MiB) to ./datasets/glue/qqp/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4...
---------------------------------------------------------------------------
NonMatchingChecksumError                  Traceback (most recent call last)
<ipython-input> in <module>
----> 1 datasets.load_dataset('glue','qqp', cache_dir='./datasets')

~/datasets/src/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs)
    609         download_config=download_config,
    610         download_mode=download_mode,
--> 611         ignore_verifications=ignore_verifications,
    612     )
    613

~/datasets/src/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs)
    467             if not downloaded_from_gcs:
    468                 self._download_and_prepare(
--> 469                     dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    470                 )
    471             # Sync info

~/datasets/src/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    527         if verify_infos:
    528             verify_checksums(
--> 529                 self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
    530             )
    531

~/datasets/src/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     37     if len(bad_urls) > 0:
     38         error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39         raise NonMatchingChecksumError(error_msg + str(bad_urls))
     40     logger.info("All the checksums matched successfully" + for_verification_name)
     41

NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://dl.fbaipublicfiles.com/glue/data/QQP-clean.zip']
```
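As a stopgap while the recorded checksum is outdated, verification can be skipped (a sketch; `ignore_verifications` was the flag in datasets 1.x, and this trusts the downloaded file as-is):

```python
import datasets

# Stopgap, not a fix: bypass checksum verification for this load.
qqp = datasets.load_dataset('glue', 'qqp', cache_dir='./datasets',
                            ignore_verifications=True)
```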
false
703,539,909
https://api.github.com/repos/huggingface/datasets/issues/637
https://github.com/huggingface/datasets/pull/637
637
Add MATINF
closed
0
2020-09-17T12:24:53
2020-09-17T13:23:18
2020-09-17T13:23:17
JetRunner
[]
true
702,883,989
https://api.github.com/repos/huggingface/datasets/issues/636
https://github.com/huggingface/datasets/pull/636
636
Consistent ner features
closed
0
2020-09-16T15:56:25
2020-09-17T09:52:59
2020-09-17T09:52:58
lhoestq
[]
As discussed in #613 , this PR aims at making NER feature names consistent across datasets. I changed the feature names of LinCE and XTREME/PAN-X
true
702,822,439
https://api.github.com/repos/huggingface/datasets/issues/635
https://github.com/huggingface/datasets/pull/635
635
Loglevel
closed
2
2020-09-16T14:37:53
2020-09-17T09:52:19
2020-09-17T09:52:18
lhoestq
[]
Continuation of #618
true
702,676,041
https://api.github.com/repos/huggingface/datasets/issues/634
https://github.com/huggingface/datasets/pull/634
634
Add CoNLL-2000 dataset
closed
0
2020-09-16T11:14:11
2020-09-17T10:38:10
2020-09-17T10:38:10
vblagoje
[]
Adds the CoNLL-2000 dataset, used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR.
true
702,440,484
https://api.github.com/repos/huggingface/datasets/issues/633
https://github.com/huggingface/datasets/issues/633
633
Load large text file for LM pre-training resulting in OOM
open
27
2020-09-16T04:33:15
2021-02-16T12:02:01
null
leethu2012
[]
I tried to pretrain Longformer using transformers and datasets, but I got OOM issues when loading a large text file. My script is almost like this:

```python
from datasets import load_dataset

@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
    """
    Data collator used for language modeling based on DataCollatorForLazyLanguageModeling
    - collates batches of tensors, honoring their tokenizer's pad_token
    - preprocesses batches for masked language modeling
    """

    block_size: int = 512

    def __call__(self, examples: List[dict]) -> Dict[str, torch.Tensor]:
        examples = [example['text'] for example in examples]
        batch, attention_mask = self._tensorize_batch(examples)
        if self.mlm:
            inputs, labels = self.mask_tokens(batch)
            return {"input_ids": inputs, "labels": labels}
        else:
            labels = batch.clone().detach()
            if self.tokenizer.pad_token_id is not None:
                labels[labels == self.tokenizer.pad_token_id] = -100
            return {"input_ids": batch, "labels": labels}

    def _tensorize_batch(self, examples: List[str]) -> Tuple[torch.Tensor, torch.Tensor]:
        if self.tokenizer._pad_token is None:
            raise ValueError(
                "You are attempting to pad samples but the tokenizer you are using"
                f" ({self.tokenizer.__class__.__name__}) does not have one."
            )
        tensor_examples = self.tokenizer.batch_encode_plus(
            [ex for ex in examples if ex],
            max_length=self.block_size,
            return_tensors="pt",
            pad_to_max_length=True,
            return_attention_mask=True,
            truncation=True,
        )
        input_ids, attention_mask = tensor_examples["input_ids"], tensor_examples["attention_mask"]
        return input_ids, attention_mask

dataset = load_dataset('text', data_files='train.txt', cache_dir="./", split='train')
data_collator = DataCollatorForDatasetsLanguageModeling(tokenizer=tokenizer, mlm=True,
                                                        mlm_probability=0.15, block_size=tokenizer.max_len)
trainer = Trainer(model=model, args=args, data_collator=data_collator,
                  train_dataset=train_dataset, prediction_loss_only=True)
trainer.train(model_path=model_path)
```

This train.txt is about 1.1 GB and has 90k lines, where each line is a sequence of 4k words. During training, the memory usage increases quickly, as in the graph below, and results in OOM before training finishes.

![image](https://user-images.githubusercontent.com/29704017/93292112-5576b280-f817-11ea-8da2-b2db9bf35665.png)

Could you please give me any suggestions on why this happened and how to fix it? Thanks.
false
702,358,124
https://api.github.com/repos/huggingface/datasets/issues/632
https://github.com/huggingface/datasets/pull/632
632
Fix typos in the loading datasets docs
closed
1
2020-09-16T00:27:41
2020-09-21T16:31:11
2020-09-16T06:52:44
mariosasko
[]
This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.
true
701,711,255
https://api.github.com/repos/huggingface/datasets/issues/631
https://github.com/huggingface/datasets/pull/631
631
Fix text delimiter
closed
5
2020-09-15T08:08:42
2020-09-22T15:03:06
2020-09-15T08:26:25
lhoestq
[]
I changed the delimiter in the `text` dataset script. It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622. I changed the delimiter to an unused ASCII character that is not present in text files: `\b`.
true
701,636,350
https://api.github.com/repos/huggingface/datasets/issues/630
https://github.com/huggingface/datasets/issues/630
630
Text dataset not working with large files
closed
11
2020-09-15T06:02:36
2020-09-25T22:21:43
2020-09-25T22:21:43
ksjae
[]
```
Traceback (most recent call last):
  File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
    main()
  File "examples/language-modeling/run_language_modeling.py", line 262, in main
    get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
  File "examples/language-modeling/run_language_modeling.py", line 144, in get_dataset
    dataset = load_dataset("text", data_files=file_path, split='train+test')
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 469, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 546, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/ksjae/.local/lib/python3.7/site-packages/datasets/builder.py", line 888, in _prepare_split
    for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
  File "/home/ksjae/.local/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__
    for obj in iterable:
  File "/home/ksjae/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 104, in _generate_tables
    convert_options=self.config.convert_options,
  File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
```

**pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)**

It gives the same message for both 200 MB and 10 GB .txt files, but not for a 700 MB file. I can't upload the files due to size and copyright problems, sorry.
false
701,517,550
https://api.github.com/repos/huggingface/datasets/issues/629
https://github.com/huggingface/datasets/issues/629
629
straddling object straddles two block boundaries
closed
1
2020-09-15T00:30:46
2020-09-15T00:36:17
2020-09-15T00:32:17
bharaniabhishek123
[]
I am trying to read JSON data (it's an array with lots of dictionaries) and I'm getting a block boundaries issue as below. I tried calling `read_json` with `ReadOptions`, but no luck.

```
table = json.read_json(fn)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pyarrow/_json.pyx", line 246, in pyarrow._json.read_json
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
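For reference, the attempt the report mentions presumably looked like the sketch below; the block size must exceed the largest single JSON object for the parse to succeed (the file name and 64 MiB value are assumptions):

```python
import pyarrow.json as paj

# Sketch: pass read options with a larger block size (64 MiB here).
opts = paj.ReadOptions(block_size=64 << 20)
table = paj.read_json("data.json", read_options=opts)
```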
false
701,496,053
https://api.github.com/repos/huggingface/datasets/issues/628
https://github.com/huggingface/datasets/pull/628
628
Update docs links in the contribution guideline
closed
1
2020-09-14T23:27:19
2020-11-02T21:03:23
2020-09-15T06:19:35
M-Salti
[]
Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website.
true
701,411,661
https://api.github.com/repos/huggingface/datasets/issues/627
https://github.com/huggingface/datasets/pull/627
627
fix (#619) MLQA features names
closed
0
2020-09-14T20:41:59
2020-11-02T21:04:32
2020-09-16T06:54:11
M-Salti
[]
Fixed the feature names as suggested in #619 in the `_generate_examples` and `_info` methods of the MLQA loading script, and also changed the names in the `dataset_infos.json` file.
true
701,352,605
https://api.github.com/repos/huggingface/datasets/issues/626
https://github.com/huggingface/datasets/pull/626
626
Update GLUE URLs (now hosted on FB)
closed
0
2020-09-14T19:05:39
2020-09-16T06:53:18
2020-09-16T06:53:18
jeswan
[]
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. Note: rebased on huggingface/datasets
true
701,057,799
https://api.github.com/repos/huggingface/datasets/issues/625
https://github.com/huggingface/datasets/issues/625
625
dtype of tensors should be preserved
closed
9
2020-09-14T12:38:05
2021-08-17T08:30:04
2021-08-17T08:30:04
BramVanroy
[]
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems to be a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-have-the-same-dtype-float32/96221)). As a user I did not expect this bug.

I have a `map` function that I call on the Dataset that looks like this:

```python
def preprocess(sentences: List[str]):
    token_ids = [[vocab.to_index(t) for t in s.split()] for s in sentences]
    sembeddings = stransformer.encode(sentences)
    print(sembeddings.dtype)
    return {"input_ids": token_ids, "sembedding": sembeddings}
```

Given a list of `sentences` (`List[str]`), it converts those into token_ids on the one hand (list of lists of ints; `List[List[int]]`) and into sentence embeddings on the other (Tensor of dtype `torch.float32`). That means that I actually set the column "sembedding" to a tensor that I as a user expect to be a float32. It appears though that behind the scenes, this tensor is converted into a **list**. I did not find this documented anywhere, but I might have missed it. From a user's perspective this is incredibly important though, because it means you cannot do any data type or tensor casting yourself in a mapping function!

Furthermore, this can lead to issues, as was my case. My model expected float32 precision, which I thought `sembedding` was, because that is what `stransformer.encode` outputs. But behind the scenes this tensor is first cast to a list, and when we then set its format, as below, this column is cast not to float32 but to double precision float64.

```python
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])
```

This happens because apparently there is an intermediate step of casting to a **numpy** array (?) **whose dtype creation/deduction is different from torch dtypes** (see the snippet below). As you can see, this means that the dtype is not preserved: if I got it right, the dataset goes from torch.float32 -> list -> float64 (numpy) -> torch.float64.

```python
import torch
import numpy as np

l = [-0.03010837361216545, -0.035979013890028, -0.016949838027358055]
torch_tensor = torch.tensor(l)
np_array = np.array(l)
np_to_torch = torch.from_numpy(np_array)
print(torch_tensor.dtype)  # torch.float32
print(np_array.dtype)      # float64
print(np_to_torch.dtype)   # torch.float64
```

This might lead to unwanted behaviour. I understand that the whole library is probably built around casting from numpy to other frameworks, so this might be difficult to solve. Perhaps `set_format` should include a `dtypes` option where for each input column the user can specify the wanted precision. The alternative is that the user needs to cast manually after loading data from the dataset, but that does not seem user-friendly, makes the dataset less portable, and might use more space in memory as well as on disk than is actually needed.
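Until such an option exists, a manual cast after formatting is one sketch of a workaround (toy column values; an assumption on my part, not a library-endorsed fix):

```python
import torch
from datasets import Dataset

# Values round-trip through numpy float64, so cast back to float32 by hand.
ds = Dataset.from_dict({"sembedding": [[-0.0301, -0.0360, -0.0169]]})
ds.set_format(type="torch", columns=["sembedding"])
print(ds[0]["sembedding"].dtype)          # torch.float64
print(ds[0]["sembedding"].float().dtype)  # torch.float32
```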
false
700,541,628
https://api.github.com/repos/huggingface/datasets/issues/624
https://github.com/huggingface/datasets/issues/624
624
Add learningq dataset
open
0
2020-09-13T10:20:27
2020-09-14T09:50:02
null
krrishdholakia
[ "dataset request" ]
Hi, Thank you again for this amazing repo. Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
false
700,235,308
https://api.github.com/repos/huggingface/datasets/issues/623
https://github.com/huggingface/datasets/issues/623
623
Custom feature types in `load_dataset` from CSV
closed
7
2020-09-12T13:21:34
2020-09-30T19:51:43
2020-09-30T08:39:54
lvwerra
[ "enhancement" ]
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.

I am working with the local files from the emotion dataset. To get the data you can use the following code:

```Python
from pathlib import Path
import wget

EMOTION_PATH = Path("./data/emotion")
DOWNLOAD_URLS = [
    "https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1",
    "https://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt?dl=1",
    "https://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt?dl=1",
]

if not Path.is_dir(EMOTION_PATH):
    Path.mkdir(EMOTION_PATH)
for url in DOWNLOAD_URLS:
    wget.download(url, str(EMOTION_PATH))
```

The first five lines of the train set are:

```
i didnt feel humiliated;sadness
i can go from feeling so hopeless to so damned hopeful just from being around someone who cares and is awake;sadness
im grabbing a minute to post i feel greedy wrong;anger
i am ever feeling nostalgic about the fireplace i will know that it is still on the property;love
i am feeling grouchy;anger
```

Here the code to reproduce the issue:

```Python
from datasets import Features, Value, ClassLabel, load_dataset

class_names = ["sadness", "joy", "love", "anger", "fear", "surprise"]
emotion_features = Features({'text': Value('string'), 'label': ClassLabel(names=class_names)})
file_dict = {'train': EMOTION_PATH/'train.txt'}

dataset = load_dataset('csv', data_files=file_dict, delimiter=';',
                       column_names=['text', 'label'], features=emotion_features)
```

**Observed behaviour:**

```Python
dataset['train'].features
```

```Python
{'text': Value(dtype='string', id=None),
 'label': Value(dtype='string', id=None)}
```

**Expected behaviour:**

```Python
dataset['train'].features
```

```Python
{'text': Value(dtype='string', id=None),
 'label': ClassLabel(num_classes=6, names=['sadness', 'joy', 'love', 'anger', 'fear', 'surprise'], names_file=None, id=None)}
```

**Things I've tried:**

- deleting the cache
- trying other types such as `int64`

Am I missing anything? Thanks for any pointer in the right direction.
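One workaround sketch (my assumption, not the documented behaviour of the csv loader) is to load the columns as plain strings and re-encode them afterwards with an identity `map` that carries the desired features, continuing the snippet above:

```Python
# Re-encode the string labels as ClassLabel ids after loading (sketch;
# relies on map() accepting a `features` argument and on ClassLabel
# encoding string labels via str2int).
dataset = load_dataset('csv', data_files=file_dict, delimiter=';',
                       column_names=['text', 'label'])
dataset = dataset.map(lambda batch: batch, batched=True,
                      features=emotion_features)
```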
false
700,225,826
https://api.github.com/repos/huggingface/datasets/issues/622
https://github.com/huggingface/datasets/issues/622
622
load_dataset for text files not working
closed
41
2020-09-12T12:49:28
2020-10-28T11:07:31
2020-10-28T11:07:30
BramVanroy
[ "dataset bug" ]
Trying the following snippet, I get different problems on Linux and Windows.

```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```

(PS: [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that you can use a string as input for data_files, but the signature is `Union[Dict, List]`.)

The problem on Linux is that the script crashes with a CSV error (even though it isn't a CSV file). On Windows the script just seems to freeze or get stuck after loading the config file.

Linux stack trace:

```
PyTorch version 1.6.0+cu101 available.
Checking /home/bram/.cache/huggingface/datasets/b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7
Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py
Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at /home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.json
Using custom data configuration default
Generating dataset text (/home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7)
Downloading and preparing dataset text/default-0907112cc6cd2a38 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/bram/.cache/huggingface/datasets/text/default-0907112cc6cd2a38/0.0.0/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7...
Dataset not on Hf google storage. Downloading and preparing it from source
Downloading took 0.0 min
Checksum Computation took 0.0 min
Unable to verify checksums.
Generating split train
Traceback (most recent call last):
  File "/home/bram/Python/projects/dutch-simplification/utils.py", line 45, in prepare_data
    dataset = load_dataset("text", data_files=dataset_f)
  File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/load.py", line 608, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 468, in download_and_prepare
    self._download_and_prepare(
  File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 546, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/datasets/builder.py", line 888, in _prepare_split
    for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
  File "/home/bram/.local/share/virtualenvs/dutch-simplification-NcpPZtDF/lib/python3.8/site-packages/tqdm/std.py", line 1130, in __iter__
    for obj in iterable:
  File "/home/bram/.cache/huggingface/modules/datasets_modules/datasets/text/7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7/text.py", line 100, in _generate_tables
    pa_table = pac.read_csv(
  File "pyarrow/_csv.pyx", line 714, in pyarrow._csv.read_csv
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: CSV parse error: Expected 1 columns, got 2
```

Windows just seems to get stuck. Even with a tiny dataset of 10 lines, it has been stuck for 15 minutes already at this message:

```
Checking C:\Users\bramv\.cache\huggingface\datasets\b1d50a0e74da9a7b9822cea8ff4e4f217dd892e09eb14f6274a2169e5436e2ea.30c25842cda32b0540d88b7195147decf9671ee442f4bc2fb6ad74016852978e.py for additional imports.
Found main folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text
Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7
Found script file from https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py to C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.py
Couldn't find dataset infos file at https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text\dataset_infos.json
Found metadata file for dataset https://raw.githubusercontent.com/huggingface/datasets/1.0.1/datasets/text/text.py at C:\Users\bramv\.cache\huggingface\modules\datasets_modules\datasets\text\7e13bc0fa76783d4ef197f079dc8acfe54c3efda980f2c9adfab046ede2f0ff7\text.json
Using custom data configuration default
```
false
700,171,097
https://api.github.com/repos/huggingface/datasets/issues/621
https://github.com/huggingface/datasets/pull/621
621
[docs] Index: The native emoji looks kinda ugly in large size
closed
0
2020-09-12T09:48:40
2020-09-15T06:20:03
2020-09-15T06:20:02
julien-c
[]
true
699,815,135
https://api.github.com/repos/huggingface/datasets/issues/620
https://github.com/huggingface/datasets/issues/620
620
map/filter multiprocessing raises errors and corrupts datasets
closed
22
2020-09-11T22:30:06
2020-10-08T16:31:47
2020-10-08T16:31:46
timothyjlaurent
[ "bug" ]
After upgrading to 1.0 I started seeing errors in my data loading script after enabling multiprocessing.

```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
rel_ds_dict["validation"] = rel_ds_dict["test"]
return ner_ds_dict, rel_ds_dict
```

The first train_test_split, `ner_ds`/`ner_ds_dict`, returns a `train` and `test` split that are iterable. The second, `rel_ds`/`rel_ds_dict` in this case, returns a DatasetDict that has rows but, if selected from or sliced into, returns an empty dictionary, e.g. `rel_ds_dict['train'][0] == {}` and `rel_ds_dict['train'][0:100] == {}`.

Ok, I think I know the problem: `rel_ds` was mapped through a mapper with `num_proc=12`. If I remove `num_proc`, the dataset loads. I also see errors with other map and filter functions when `num_proc` is set.

```
Done writing 67 indices in 536 bytes .
Done writing 67 indices in 536 bytes .
Fatal Python error: PyCOND_WAIT(gil_cond) failed
```
false
699,733,612
https://api.github.com/repos/huggingface/datasets/issues/619
https://github.com/huggingface/datasets/issues/619
619
Mistakes in MLQA features names
closed
1
2020-09-11T20:46:23
2020-09-16T06:59:19
2020-09-16T06:59:19
M-Salti
[]
I think the following features in MLQA shouldn't be named the way they are:

1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)

The reasons I'm suggesting these features be renamed are:

* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA, etc., and hence make it easier to concatenate multiple QA datasets.
* The feature names are not the same as the ones provided in the original MLQA datasets (which use the names I suggested).

I know these columns can be renamed using `Dataset.rename_column_`; `questions` and `ids` can be easily renamed, but `start` on the other hand is annoying to rename since it's nested inside the feature `answers`.
false
699,684,831
https://api.github.com/repos/huggingface/datasets/issues/618
https://github.com/huggingface/datasets/pull/618
618
sync logging utils with transformers
closed
12
2020-09-11T19:46:13
2020-09-17T15:40:59
2020-09-17T09:53:47
stas00
[]
sync the docs/code with the recent changes in transformers' `logging` utils:

1. change the default level to `WARNING`
2. add the `DATASETS_VERBOSITY` env var
3. expand docs
true
699,472,596
https://api.github.com/repos/huggingface/datasets/issues/617
https://github.com/huggingface/datasets/issues/617
617
Compare different Rouge implementations
closed
7
2020-09-11T15:49:32
2023-03-22T12:08:44
2020-10-02T09:52:18
ibeltagy
[]
I used the RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the Pegasus paper, but very different from those reported in other papers, [this one](https://arxiv.org/pdf/1909.03186.pdf) for example. Can you make sure the google-research implementation you are using matches the official Perl implementation? There are a couple of Python wrappers around the Perl implementation: [this one](https://pypi.org/project/pyrouge/) has been commonly used, and [this one](https://github.com/pltrdy/files2rouge) is used in fairseq. There's also a Python reimplementation [here](https://github.com/pltrdy/rouge), but its RougeL numbers are way off.
false
699,462,293
https://api.github.com/repos/huggingface/datasets/issues/616
https://github.com/huggingface/datasets/issues/616
616
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
open
14
2020-09-11T15:39:16
2021-07-22T21:12:21
null
BramVanroy
[]
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format`, I am getting this strange UserWarning without a stack trace:

> Set __getitem__(key) output type to torch for ['input_ids', 'sembedding'] columns (when key is int or slice) and don't output other (un-formatted) columns.
> C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\datasets\arrow_dataset.py:835: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly non-writeable) NumPy array using the tensor. You may want to copy the array to protect its data or make it writeable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:141.)
> return torch.tensor(x, **format_kwargs)

The first message might not be related to the warning, but it is odd that it is shown, too. It is unclear whether that is something that I should do or something that the program is doing at that moment.

Snippet:

```
dataset = Dataset.from_dict(torch.load("data/dummy.pt.pt"))
print(dataset)

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
keys_to_retain = {"input_ids", "sembedding"}
dataset = dataset.map(lambda example: tokenizer(example["text"], padding='max_length'), batched=True)
dataset.remove_columns_(set(dataset.column_names) - keys_to_retain)
dataset.set_format(type="torch", columns=["input_ids", "sembedding"])

dataloader = torch.utils.data.DataLoader(dataset, batch_size=2)
print(next(iter(dataloader)))
```

PS: the input type for `remove_columns_` should probably be an Iterable rather than just a List.
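If the warning is benign for read-only access, it can be silenced with a standard warnings filter (a sketch that suppresses the symptom, not the underlying copy semantics):

```python
import warnings

# Suppress just this UserWarning; the tensors remain backed by the
# non-writeable numpy buffer, so treat them as read-only.
warnings.filterwarnings(
    "ignore", message="The given NumPy array is not writeable"
)
```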
false
699,410,773
https://api.github.com/repos/huggingface/datasets/issues/615
https://github.com/huggingface/datasets/issues/615
615
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
closed
16
2020-09-11T14:50:38
2024-05-02T06:53:15
2020-09-19T16:46:31
lhoestq
[]
How to reproduce:

```python
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
```

```
---------------------------------------------------------------------------
ArrowInvalid                              Traceback (most recent call last)
<ipython-input-13-381aedc9811b> in <module>
----> 1 wikipedia[[0]]

~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in __getitem__(self, key)
   1069             format_columns=self._format_columns,
   1070             output_all_columns=self._output_all_columns,
-> 1071             format_kwargs=self._format_kwargs,
   1072         )
   1073

~/Desktop/hf/nlp/src/datasets/arrow_dataset.py in _getitem(self, key, format_type, format_columns, output_all_columns, format_kwargs)
   1037             )
   1038         else:
-> 1039             data_subset = self._data.take(indices_array)
   1040
   1041         if format_type is not None:

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.take()

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/compute.py in take(data, indices, boundscheck)
    266     """
    267     options = TakeOptions(boundscheck)
--> 268     return call_function('take', [data, indices], options)
    269
    270

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.call_function()

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/_compute.pyx in pyarrow._compute.Function.call()

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowInvalid: offset overflow while concatenating arrays
```

It seems to work fine with small datasets or with pyarrow 0.17.1.
false
699,177,110
https://api.github.com/repos/huggingface/datasets/issues/614
https://github.com/huggingface/datasets/pull/614
614
[doc] Update deploy.sh
closed
0
2020-09-11T11:06:13
2020-09-14T08:49:19
2020-09-14T08:49:17
thomwolf
[]
true
699,117,070
https://api.github.com/repos/huggingface/datasets/issues/613
https://github.com/huggingface/datasets/pull/613
613
Add CoNLL-2003 shared task dataset
closed
7
2020-09-11T10:02:30
2020-10-05T10:43:05
2020-09-17T10:36:38
vblagoje
[]
Please consider adding CoNLL-2003 shared task dataset as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https://github.com/huggingface/transformers/pull/7041) in the transformers project. This dataset would be not only useful for the usual run-of-the-mill NER tasks but also for syntactic chunking and part-of-speech (POS) tagging.
true
699,008,644
https://api.github.com/repos/huggingface/datasets/issues/612
https://github.com/huggingface/datasets/pull/612
612
add multi-proc to dataset dict
closed
0
2020-09-11T08:18:13
2020-09-11T10:20:13
2020-09-11T10:20:11
thomwolf
[]
Add multi-proc to `DatasetDict`
true
698,863,988
https://api.github.com/repos/huggingface/datasets/issues/611
https://github.com/huggingface/datasets/issues/611
611
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
closed
6
2020-09-11T05:29:12
2022-06-01T15:11:43
2022-06-01T15:11:43
sangyx
[]
Hi, I'm trying to load a dataset from a DataFrame, but I get the error:

```
---------------------------------------------------------------------------
ArrowCapacityError                        Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)

~/miniconda3/envs/dev/lib/python3.7/site-packages/nlp/arrow_dataset.py in from_pandas(cls, df, features, info, split)
    223             info.features = features
    224         pa_table: pa.Table = pa.Table.from_pandas(
--> 225             df=df, schema=pa.schema(features.type) if features is not None else None
    226         )
    227         return cls(pa_table, info=info, split=split)

~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pandas()

~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in dataframe_to_arrays(df, schema, preserve_index, nthreads, columns, safe)
    591         for i, maybe_fut in enumerate(arrays):
    592             if isinstance(maybe_fut, futures.Future):
--> 593                 arrays[i] = maybe_fut.result()
    594
    595     types = [x.type for x in arrays]

~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in result(self, timeout)
    426                 raise CancelledError()
    427             elif self._state == FINISHED:
--> 428                 return self.__get_result()
    429
    430             self._condition.wait(timeout)

~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/_base.py in __get_result(self)
    382     def __get_result(self):
    383         if self._exception:
--> 384             raise self._exception
    385         else:
    386             return self._result

~/miniconda3/envs/dev/lib/python3.7/concurrent/futures/thread.py in run(self)
     55
     56         try:
---> 57             result = self.fn(*self.args, **self.kwargs)
     58         except BaseException as exc:
     59             self.future.set_exception(exc)

~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/pandas_compat.py in convert_column(col, field)
    557
    558     try:
--> 559         result = pa.array(col, type=type_, from_pandas=True, safe=safe)
    560     except (pa.ArrowInvalid,
    561             pa.ArrowNotImplementedError,

~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.array()

~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib._ndarray_to_array()

~/miniconda3/envs/dev/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
```

My code is:

```python
from nlp import Dataset

dataset = Dataset.from_pandas(emb)
```
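A workaround sketch (my assumption; it uses the renamed `datasets` package and an arbitrary chunk size) is to build the dataset in slices of `emb` so no single arrow array exceeds the 2,147,483,646 child-element limit, then concatenate:

```python
from datasets import Dataset, concatenate_datasets

# Hypothetical chunked construction; 100k rows per chunk is an arbitrary pick.
chunk_size = 100_000
chunks = [
    Dataset.from_pandas(emb.iloc[i:i + chunk_size].reset_index(drop=True))
    for i in range(0, len(emb), chunk_size)
]
dataset = concatenate_datasets(chunks)
```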
false
698,349,388
https://api.github.com/repos/huggingface/datasets/issues/610
https://github.com/huggingface/datasets/issues/610
610
Load text file for RoBERTa pre-training.
closed
43
2020-09-10T18:41:38
2022-11-22T13:51:24
2022-11-22T13:51:23
chiyuzhang94
[]
I migrated my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444

I tried to train a RoBERTa from scratch using transformers, but I got OOM issues with loading a large text file. Following the suggestion from @thomwolf, I tried to use `datasets` to load my text file. This test.txt is a simple sample where each line is a sentence.

```python
from datasets import load_dataset

dataset = load_dataset('text', data_files='test.txt', cache_dir="./")
dataset.set_format(type='torch', columns=["text"])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
next(iter(dataloader))
```

But the dataloader cannot yield a sample, and the error is:

```
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-12-388aca337e2f> in <module>
----> 1 next(iter(dataloader))

/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in __next__(self)
    361
    362     def __next__(self):
--> 363         data = self._next_data()
    364         self._num_yielded += 1
    365         if self._dataset_kind == _DatasetKind.Iterable and \

/Library/Python/3.7/site-packages/torch/utils/data/dataloader.py in _next_data(self)
    401     def _next_data(self):
    402         index = self._next_index()  # may raise StopIteration
--> 403         data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    404         if self._pin_memory:
    405             data = _utils.pin_memory.pin_memory(data)

/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     42     def fetch(self, possibly_batched_index):
     43         if self.auto_collation:
---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
     45         else:
     46             data = self.dataset[possibly_batched_index]

/Library/Python/3.7/site-packages/torch/utils/data/_utils/fetch.py in <listcomp>(.0)
     42     def fetch(self, possibly_batched_index):
     43         if self.auto_collation:
---> 44             data = [self.dataset[idx] for idx in possibly_batched_index]
     45         else:
     46             data = self.dataset[possibly_batched_index]

KeyError: 0
```

`dataset.set_format(type='torch', columns=["text"])` returns a log that says:

```
Set __getitem__(key) output type to torch for ['text'] columns (when key is int or slice) and don't output other (un-formatted) columns.
```

I noticed the dataset is `DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None)}, num_rows: 44)})`. Each sample can be accessed by `dataset["train"]["text"]` instead of `dataset["text"]`.

Could you please give me any suggestions on how to modify this code to load the text file?

Versions:
- Python 3.7.3
- PyTorch 1.6.0
- TensorFlow 2.3.0
- datasets 1.0.1
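Since `load_dataset` returns a `DatasetDict`, indexing the split first presumably addresses the KeyError. A sketch (the tokenizer choice is an assumption), noting that raw string columns cannot be turned into tensors, so tokenize before formatting:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer

# Select the "train" split, tokenize, then format the integer columns.
dataset = load_dataset("text", data_files="test.txt", cache_dir="./")["train"]
tokenizer = AutoTokenizer.from_pretrained("roberta-base")  # assumed model
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)
```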
false
698,323,989
https://api.github.com/repos/huggingface/datasets/issues/609
https://github.com/huggingface/datasets/pull/609
609
Update GLUE URLs (now hosted on FB)
closed
2
2020-09-10T18:16:32
2020-09-14T19:06:02
2020-09-14T19:06:01
jeswan
[]
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
true
698,291,156
https://api.github.com/repos/huggingface/datasets/issues/608
https://github.com/huggingface/datasets/issues/608
608
Don't use the old NYU GLUE dataset URLs
closed
1
2020-09-10T17:47:02
2020-09-16T06:53:18
2020-09-16T06:53:18
jeswan
[]
NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR? See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/1112
false
698,094,442
https://api.github.com/repos/huggingface/datasets/issues/607
https://github.com/huggingface/datasets/pull/607
607
Add transmit_format wrapper and tests
closed
0
2020-09-10T15:03:50
2020-09-10T15:21:48
2020-09-10T15:21:47
lhoestq
[]
Same as #605, but using a decorator on top of dataset transforms that are not in place
true
698,050,442
https://api.github.com/repos/huggingface/datasets/issues/606
https://github.com/huggingface/datasets/pull/606
606
Quick fix :)
closed
1
2020-09-10T14:32:06
2020-09-10T16:18:32
2020-09-10T16:18:30
thomwolf
[]
`nlp` => `datasets`
true
697,887,401
https://api.github.com/repos/huggingface/datasets/issues/605
https://github.com/huggingface/datasets/pull/605
605
[Datasets] Transmit format to children
closed
1
2020-09-10T12:30:18
2023-09-24T09:49:47
2020-09-10T16:15:21
thomwolf
[]
Transmit the format to the child datasets obtained when processing a dataset. Added a test. When concatenating datasets with disparate formats, the concatenated dataset has its format reset to the defaults.
true
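A hedged sketch of the behavior this PR describes (the exact API details are my assumption): a child dataset produced by a non in-place transform such as `map` inherits its parent's format instead of falling back to the defaults.

```python
import nlp

ds = nlp.load_dataset("imdb", split="test")
ds.set_format(type="numpy", columns=["label"])

# map() is not in place: it returns a child dataset, which should now keep
# the ("numpy", ["label"]) format without a second call to set_format
child = ds.map(lambda x: x)
print(type(child[0]["label"]))  # expected: a numpy value, not a plain python int
```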
697,774,581
https://api.github.com/repos/huggingface/datasets/issues/604
https://github.com/huggingface/datasets/pull/604
604
Update bucket prefix
closed
0
2020-09-10T11:01:13
2020-09-10T12:45:33
2020-09-10T12:45:32
lhoestq
[]
cc @julien-c
true
697,758,750
https://api.github.com/repos/huggingface/datasets/issues/603
https://github.com/huggingface/datasets/pull/603
603
Set scripts version to master
closed
0
2020-09-10T10:47:44
2020-09-10T11:02:05
2020-09-10T11:02:04
lhoestq
[]
By default the scripts version is master, so that a library installed with

```
pip install git+http://github.com/huggingface/nlp.git
```

or

```
git clone http://github.com/huggingface/nlp.git
pip install -e ./nlp
```

will use the latest scripts, and not the ones from the previous version.
true
697,636,605
https://api.github.com/repos/huggingface/datasets/issues/602
https://github.com/huggingface/datasets/pull/602
602
apply offset to indices in multiprocessed map
closed
0
2020-09-10T08:54:30
2020-09-10T11:03:39
2020-09-10T11:03:37
lhoestq
[]
Fix #597 I fixed the indices by applying an offset. I added the case to our tests to make sure it doesn't happen again. I also added the message proposed by @thomwolf in #597 ```python >>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False) Done writing 10 indices in 80 bytes . Testing the mapped function outputs [0, 1] Testing finished, running the mapping function on the dataset Done writing 5 indices in 41 bytes . Done writing 5 indices in 41 bytes . Spawning 2 processes [0, 1, 2, 3, 4] [5, 6, 7, 8, 9] #0: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 377.90ba/s] #1: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 378.92ba/s] Concatenating 2 shards from multiprocessing # Dataset(features: {'label': ClassLabel(num_classes=2, names=['neg', 'pos'], names_file=None, id=None), 'text': Value(dtype='string', id=None)}, num_rows: 10) ```
true
697,574,848
https://api.github.com/repos/huggingface/datasets/issues/601
https://github.com/huggingface/datasets/pull/601
601
check if transformers has PreTrainedTokenizerBase
closed
0
2020-09-10T07:54:56
2020-09-10T11:01:37
2020-09-10T11:01:36
lhoestq
[]
Fix #598
true
697,496,913
https://api.github.com/repos/huggingface/datasets/issues/600
https://github.com/huggingface/datasets/issues/600
600
Pickling error when loading dataset
closed
5
2020-09-10T06:28:08
2020-09-25T14:31:54
2020-09-25T14:31:54
kandorm
[]
Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_dataset("text", data_files=file_path, split="train") dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True) dataset.set_format(type='torch', columns=['input_ids']) return dataset ``` When I run this with transformers (3.1.0) and nlp (0.4.0), I get the following error: ``` Traceback (most recent call last): File "src/run_language_modeling.py", line 319, in <module> main() File "src/run_language_modeling.py", line 248, in main get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None File "src/run_language_modeling.py", line 139, in get_dataset dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True) File "/data/nlp/src/nlp/arrow_dataset.py", line 1136, in map new_fingerprint=new_fingerprint, File "/data/nlp/src/nlp/fingerprint.py", line 158, in wrapper self._fingerprint, transform, kwargs_for_fingerprint File "/data/nlp/src/nlp/fingerprint.py", line 105, in update_fingerprint hasher.update(transform_args[key]) File "/data/nlp/src/nlp/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/data/nlp/src/nlp/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/data/nlp/src/nlp/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/data/nlp/src/nlp/utils/py_utils.py", line 362, in dumps dump(obj, file) File "/data/nlp/src/nlp/utils/py_utils.py", line 339, in dump Pickler(file, recurse=True).dump(obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 446, in dump StockPickler.dump(self, obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 409, in dump self.save(obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1438, in save_function obj.__dict__, fkwdefaults), obj=obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 736, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1170, in save_cell pickler.save_reduce(_create_cell, (f,), obj=obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", 
line 736, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 521, in save self.save_reduce(obj=obj, *rv) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 605, in save_reduce save(cls) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 1365, in save_type obj.__bases__, _dict), obj=obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 610, in save_reduce save(args) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 751, in save_tuple save(element) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict StockPickler.save_dict(pickler, obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 476, in save f(self, obj) # Call unbound method with explicit self File "/root/miniconda3/envs/py3.6/lib/python3.6/site-packages/dill/_dill.py", line 933, in save_module_dict StockPickler.save_dict(pickler, obj) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 821, in save_dict self._batch_setitems(obj.items()) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 847, in _batch_setitems save(v) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 507, in save self.save_global(obj, rv) File "/root/miniconda3/envs/py3.6/lib/python3.6/pickle.py", line 927, in save_global (obj, module_name, name)) _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union ```
false
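One workaround worth trying for the pickling error above (an assumption, not a confirmed fix): replace the lambda with a named module-level function, so the fingerprinting hasher has a simpler, closure-free object to serialize with dill.

```python
from nlp import load_dataset

# `tokenizer`, `args`, and `file_path` come from the surrounding
# run_language_modeling.py script shown above; they are not redefined here
def tokenize_function(examples):
    return tokenizer(
        examples["text"], add_special_tokens=True, truncation=True, max_length=args.block_size
    )

dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(tokenize_function, batched=True)
dataset.set_format(type="torch", columns=["input_ids"])
```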
697,377,786
https://api.github.com/repos/huggingface/datasets/issues/599
https://github.com/huggingface/datasets/pull/599
599
Add MATINF dataset
closed
2
2020-09-10T03:31:09
2023-09-24T09:50:08
2020-09-17T12:17:25
JetRunner
[]
@lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :(
true
697,156,501
https://api.github.com/repos/huggingface/datasets/issues/598
https://github.com/huggingface/datasets/issues/598
598
The current version of the package on GitHub has an error when loading a dataset
closed
3
2020-09-09T21:03:23
2020-09-10T06:25:21
2020-09-09T22:57:28
zeyuyun1
[]
Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``` Then run: ``` from nlp import load_dataset dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train') ``` will give error: ``` >>> dataset = load_dataset('wikitext', 'wikitext-2-v1',split = 'train') Checking /home/zeyuy/.cache/huggingface/datasets/84a754b488511b109e2904672d809c041008416ae74e38f9ee0c80a8dffa1383.2e21f48d63b5572d19c97e441fbb802257cf6a4c03fbc5ed8fae3d2c2273f59e.py for additional imports. Found main folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext Found specific version folder for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Found script file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.py Found dataset infos file from https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/dataset_infos.json to /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/dataset_infos.json Found metadata file for dataset https://raw.githubusercontent.com/huggingface/nlp/0.4.0/datasets/wikitext/wikitext.py at /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d/wikitext.json Loading Dataset Infos from /home/zeyuy/.cache/huggingface/modules/nlp_modules/datasets/wikitext/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Overwrite dataset info from restored data version. 
Loading Dataset info from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Reusing dataset wikitext (/home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d) Constructing Dataset for split train, from /home/zeyuy/.cache/huggingface/datasets/wikitext/wikitext-2-v1/1.0.0/5de6e79516446f747fcccc09aa2614fa159053b75909594d28d262395f72d89d Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/load.py", line 600, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 611, in as_dataset datasets = utils.map_nested( File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 216, in map_nested return function(data_struct) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 631, in _build_single_dataset ds = self._as_dataset( File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/builder.py", line 704, in _as_dataset return Dataset(**dataset_kwargs) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/arrow_dataset.py", line 188, in __init__ self._fingerprint = generate_fingerprint(self) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 91, in generate_fingerprint hasher.update(key) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 57, in update self.m.update(self.hash(value).encode("utf-8")) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 53, in hash return cls.hash_default(value) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/fingerprint.py", line 46, in hash_default return cls.hash_bytes(dumps(value)) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 361, in dumps with _no_cache_fields(obj): File "/home/zeyuy/miniconda3/lib/python3.8/contextlib.py", line 113, in __enter__ return next(self.gen) File "/home/zeyuy/transformers/examples/language-modeling/nlp/src/nlp/utils/py_utils.py", line 348, in _no_cache_fields if isinstance(obj, tr.PreTrainedTokenizerBase) and hasattr(obj, "cache") and isinstance(obj.cache, dict): AttributeError: module 'transformers' has no attribute 'PreTrainedTokenizerBase' ```
false
697,112,029
https://api.github.com/repos/huggingface/datasets/issues/597
https://github.com/huggingface/datasets/issues/597
597
Indices incorrect with multiprocessing
closed
2
2020-09-09T19:50:56
2020-09-10T11:03:37
2020-09-10T11:03:37
joeddav
[]
When `num_proc` > 1, the indices argument passed to the map function is incorrect:

```python
d = load_dataset('imdb', split='test[:1%]')

def fn(x, inds):
    print(inds)
    return x

d.select(range(10)).map(fn, with_indices=True, batched=True)
# [0, 1]
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]

d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2)
# [0, 1]
# [0, 1]
# [0, 1, 2, 3, 4]
# [0, 1, 2, 3, 4]
```

As you can see, the subset passed to each process is indexed from 0 to N, which doesn't reflect its position in `d`.
false
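A hedged sketch of what the fix could look like (simplified, with assumed names; the actual fix landed in #602): shard the indices while preserving each example's global position instead of renumbering from 0 in every worker.

```python
def shard_indices(indices, num_proc):
    """Split `indices` into contiguous shards that keep their global positions."""
    shard_size = len(indices) // num_proc
    shards = []
    for rank in range(num_proc):
        start = rank * shard_size
        end = len(indices) if rank == num_proc - 1 else start + shard_size
        shards.append(indices[start:end])  # e.g. the second shard stays [5..9]
    return shards

print(shard_indices(list(range(10)), 2))  # [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
```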
696,928,139
https://api.github.com/repos/huggingface/datasets/issues/596
https://github.com/huggingface/datasets/pull/596
596
[style/quality] Moving to isort 5.0.0 + style/quality on datasets and metrics
closed
1
2020-09-09T15:47:21
2020-09-10T10:05:04
2020-09-10T10:05:03
thomwolf
[]
Move the repo to isort 5.0.0. Also start testing style/quality on datasets and metrics. Specific rule: we allow F401 (unused imports) in metrics so that imports can be added to detect missing dependencies early on. Maybe we could add this in datasets too, but while cleaning this up I've seen many examples of genuinely unused imports in datasets, so maybe it's better to have it as a line-by-line `noqa` instead of a general rule like in metrics.
true
696,892,304
https://api.github.com/repos/huggingface/datasets/issues/595
https://github.com/huggingface/datasets/issues/595
595
`Dataset`/`DatasetDict` has no attribute 'save_to_disk'
closed
2
2020-09-09T15:01:52
2020-09-09T16:20:19
2020-09-09T16:20:18
sudarshan85
[]
Hi,

As the title indicates, neither the `Dataset` nor the `DatasetDict` class seems to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the `arrow_dataset.py` that is installed after `pip install nlp -U` in my `conda` environment DOES NOT contain the `save_to_disk` method. I even tried `pip install git+https://github.com/huggingface/nlp.git` and still no luck. Do I need to install the library in another way?
false
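For reference, a hedged usage sketch assuming an install from current master where the method exists (the split and output path are illustrative):

```python
from nlp import load_dataset

ds = load_dataset("squad", split="validation")
ds.save_to_disk("tmp/squad_validation")  # only present in source installs that ship this method
```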
696,816,893
https://api.github.com/repos/huggingface/datasets/issues/594
https://github.com/huggingface/datasets/pull/594
594
Fix germeval url
closed
0
2020-09-09T13:29:35
2020-09-09T13:34:35
2020-09-09T13:34:34
lhoestq
[]
Continuation of #593 but without the dummy data hack
true
696,679,182
https://api.github.com/repos/huggingface/datasets/issues/593
https://github.com/huggingface/datasets/pull/593
593
GermEval 2014: new download urls
closed
5
2020-09-09T10:07:29
2020-09-09T14:16:54
2020-09-09T13:35:15
stefan-it
[]
Hi, unfortunately, the download links for the GermEval 2014 dataset have changed: they're now located on Google Drive. I changed the URLs and bumped the version from 1.0.0 to 2.0.0.
true
696,619,986
https://api.github.com/repos/huggingface/datasets/issues/592
https://github.com/huggingface/datasets/pull/592
592
Test in memory and on disk
closed
0
2020-09-09T08:59:30
2020-09-09T13:50:04
2020-09-09T13:50:03
lhoestq
[]
I added test parameters to run every test both in memory and on disk. I also found a bug in `concatenate_datasets` thanks to the new tests and fixed it.
true
696,530,413
https://api.github.com/repos/huggingface/datasets/issues/591
https://github.com/huggingface/datasets/pull/591
591
fix #589 (backward compat)
closed
0
2020-09-09T07:33:13
2020-09-09T08:57:56
2020-09-09T08:57:55
thomwolf
[]
Fix #589
true
696,501,827
https://api.github.com/repos/huggingface/datasets/issues/590
https://github.com/huggingface/datasets/issues/590
590
The process cannot access the file because it is being used by another process (windows)
closed
7
2020-09-09T07:01:36
2020-09-25T14:02:28
2020-09-25T14:02:28
saareliad
[]
Hi, I consistently get the following error when developing on my PC (Windows 10):

```
  train_dataset = train_dataset.map(convert_to_features, batched=True)
  File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map
    shutil.move(tmp_file.name, cache_file_name)
  File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\shutil.py", line 803, in move
    os.unlink(src)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\saareliad\\.cache\\huggingface\\datasets\\squad\\plain_text\\1.0.0\\408a8fa46a1e2805445b793f1022e743428ca739a34809fce872f0c7f17b44ab\\tmpsau1bep1'
```
false
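A possible mitigation sketch for the issue above (my assumption, not the merged fix): retry the move, since on Windows antivirus scanners or not-yet-closed handles can hold a transient lock on the temporary file.

```python
import shutil
import time

def robust_move(src, dst, retries=5, delay=0.5):
    """Move src to dst, retrying when Windows reports a transient file lock."""
    for attempt in range(retries):
        try:
            shutil.move(src, dst)
            return
        except PermissionError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
```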
696,488,447
https://api.github.com/repos/huggingface/datasets/issues/589
https://github.com/huggingface/datasets/issues/589
589
Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging'
closed
0
2020-09-09T06:46:53
2020-09-09T08:57:54
2020-09-09T08:57:54
ksjae
[]
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
    builder_cls = import_main_class(module_path, dataset=True)
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 61, in import_main_class
    module = importlib.import_module(module_path)
  File "/root/anaconda3/envs/pytorch/lib/python3.7/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
  File "<frozen importlib._bootstrap>", line 983, in _find_and_load
  File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 728, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/datasets/text/5dc629379536c4037d9c2063e1caa829a1676cf795f8e030cd90a537eba20c08/text.py", line 9, in <module>
    logger = nlp.utils.logging.get_logger(__name__)
AttributeError: module 'nlp.utils' has no attribute 'logging'
```

Occurs on the following code, or any code including `load_dataset('text')`:

```
dataset = load_dataset("text", data_files=file_path, split="train")
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_special_tokens=True, truncation=True, max_length=args.block_size), batched=True)
dataset.set_format(type='torch', columns=['input_ids'])
return dataset
```
false
695,249,809
https://api.github.com/repos/huggingface/datasets/issues/588
https://github.com/huggingface/datasets/pull/588
588
Support pathlike obj in load dataset
closed
0
2020-09-07T16:13:21
2020-09-08T07:45:19
2020-09-08T07:45:18
lhoestq
[]
Fix #582 (I recreated the PR, I got an issue with git)
true
695,246,018
https://api.github.com/repos/huggingface/datasets/issues/587
https://github.com/huggingface/datasets/pull/587
587
Support pathlike obj in load dataset
closed
0
2020-09-07T16:09:16
2020-09-07T16:10:35
2020-09-07T16:10:35
lhoestq
[]
Fix #582
true
695,237,999
https://api.github.com/repos/huggingface/datasets/issues/586
https://github.com/huggingface/datasets/pull/586
586
Better message when data files is empty
closed
0
2020-09-07T15:59:57
2020-09-09T09:00:09
2020-09-09T09:00:08
lhoestq
[]
Fix #581
true
695,191,209
https://api.github.com/repos/huggingface/datasets/issues/585
https://github.com/huggingface/datasets/pull/585
585
Fix select for pyarrow < 1.0.0
closed
0
2020-09-07T15:02:52
2020-09-08T07:43:17
2020-09-08T07:43:15
lhoestq
[]
Fix #583
true
695,186,652
https://api.github.com/repos/huggingface/datasets/issues/584
https://github.com/huggingface/datasets/pull/584
584
Use github versioning
closed
1
2020-09-07T14:58:15
2020-09-09T13:37:35
2020-09-09T13:37:34
lhoestq
[]
Right now dataset scripts and metrics are downloaded from S3, which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version. To fix that I changed the download url from S3 to github, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certain version of the lib, as in #562.
true
695,166,265
https://api.github.com/repos/huggingface/datasets/issues/583
https://github.com/huggingface/datasets/issues/583
583
ArrowIndexError on Dataset.select
closed
0
2020-09-07T14:36:29
2020-09-08T07:43:15
2020-09-08T07:43:15
lhoestq
[]
If the indices table consists of several chunks, then `dataset.select` results in an `ArrowIndexError` for pyarrow < 1.0.0.

Example:

```python
from nlp import load_dataset

mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
shuffled.select(list(range(len(mnli))))
```

raises:

```python
---------------------------------------------------------------------------
ArrowIndexError                           Traceback (most recent call last)
<ipython-input-64-006a5d38d418> in <module>
----> 1 mnli.shuffle(seed=42).select(list(range(len(mnli))))

~/Desktop/hf/nlp/src/nlp/fingerprint.py in wrapper(*args, **kwargs)
    161         # Call actual function
    162
--> 163         out = func(self, *args, **kwargs)
    164
    165         # Update fingerprint of in-place transforms + update in-place history of transforms

~/Desktop/hf/nlp/src/nlp/arrow_dataset.py in select(self, indices, keep_in_memory, indices_cache_file_name, writer_batch_size, new_fingerprint)
   1653         if self._indices is not None:
   1654             if PYARROW_V0:
-> 1655                 indices_array = self._indices.column(0).chunk(0).take(indices_array)
   1656             else:
   1657                 indices_array = self._indices.column(0).take(indices_array)

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.Array.take()

~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowIndexError: take index out of bounds
```

This is because the `take` method is only applied to the first chunk, which only contains 1000 elements by default (mnli has ~400 000 elements).

Shall we change that to use

```python
pa.concat_tables(self._indices._indices.slice(i, 1) for i in indices_array)
```

instead of `take`? @thomwolf
false
695,126,456
https://api.github.com/repos/huggingface/datasets/issues/582
https://github.com/huggingface/datasets/issues/582
582
Allow for PathLike objects
closed
0
2020-09-07T13:54:51
2020-09-08T07:45:17
2020-09-08T07:45:17
BramVanroy
[]
Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error. ```python files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt")) dataset = load_dataset("text", data_files=files) ``` Traceback: ``` Traceback (most recent call last): File "C:/dev/python/dutch-simplification/main.py", line 7, in <module> dataset = load_dataset("text", data_files=files) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare self._save_info() File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 564, in _save_info self.info.write_to_directory(self._cache_dir) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 149, in write_to_directory self._dump_info(f) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\info.py", line 156, in _dump_info file.write(json.dumps(asdict(self)).encode("utf-8")) File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\__init__.py", line 231, in dumps return _default_encoder.encode(obj) File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 199, in encode chunks = self.iterencode(o, _one_shot=True) File "c:\users\bramv\appdata\local\programs\python\python38\lib\json\encoder.py", line 257, in iterencode return _iterencode(o, 0) TypeError: keys must be str, int, float, bool or None, not WindowsPath ``` We have to cast to a string explicitly to make this work. It would be nicer if we could actually use PathLike objects. ```python files = [str(f) for f in Path(r"D:\corpora\wablieft").glob("*.txt")] ```
false
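A hedged sketch of a user-side workaround for the issue above, equivalent to the explicit `str()` cast shown there but using `os.fspath` to coerce any path-like object:

```python
import os
from pathlib import Path
from nlp import load_dataset

# os.fspath() converts any PathLike object to its string representation
files = [os.fspath(p) for p in Path(r"D:\corpora\yourcorpus").glob("*.txt")]
dataset = load_dataset("text", data_files=files)
```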
695,120,517
https://api.github.com/repos/huggingface/datasets/issues/581
https://github.com/huggingface/datasets/issues/581
581
Better error message when input file does not exist
closed
0
2020-09-07T13:47:59
2020-09-09T09:00:07
2020-09-09T09:00:07
BramVanroy
[]
In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y. ```python dataset = load_dataset("text", data_files=[]) ``` Example error trace. ``` Using custom data configuration default Downloading and preparing dataset text/default-d18f9b6611eb8e16 (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to C:\Users\bramv\.cache\huggingface\datasets\text\default-d18f9b6611eb8e16\0.0.0\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b... Traceback (most recent call last): File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 424, in incomplete_dir yield tmp_dir File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare self._download_and_prepare( File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 537, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 813, in _prepare_split num_examples, num_bytes = writer.finalize() File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\arrow_writer.py", line 217, in finalize self.pa_writer.close() AttributeError: 'NoneType' object has no attribute 'close' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:/dev/python/dutch-simplification/main.py", line 7, in <module> dataset = load_dataset("text", data_files=files) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\load.py", line 548, in load_dataset builder_instance.download_and_prepare( File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 470, in download_and_prepare self._save_info() File "c:\users\bramv\appdata\local\programs\python\python38\lib\contextlib.py", line 131, in __exit__ self.gen.throw(type, value, traceback) File "C:\Users\bramv\.virtualenvs\dutch-simplification-nbNdqK9u\lib\site-packages\nlp\builder.py", line 430, in incomplete_dir shutil.rmtree(tmp_dir) File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 737, in rmtree return _rmtree_unsafe(path, onerror) File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 615, in _rmtree_unsafe onerror(os.unlink, fullname, sys.exc_info()) File "c:\users\bramv\appdata\local\programs\python\python38\lib\shutil.py", line 613, in _rmtree_unsafe os.unlink(fullname) PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\bramv\\.cache\\huggingface\\datasets\\text\\default-d18f9b6611eb8e16\\0.0.0\\3a79870d85f1982d6a2af884fde86a71c771747b4b161fd302d28ad22adf985b.incomplete\\text-train.arrow' ```
false
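A minimal sketch of the kind of early validation this issue asks for (assumed function and messages, not the merged implementation): fail fast with a readable error when `data_files` is empty or points at missing files.

```python
import os

def check_data_files(data_files):
    """Raise a clear error instead of failing deep inside the Arrow writer."""
    if not data_files:
        raise ValueError("`data_files` is empty; please pass at least one file.")
    for f in data_files:
        if not os.path.isfile(f):
            raise FileNotFoundError(f"data file not found: {f}")
```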
694,954,551
https://api.github.com/repos/huggingface/datasets/issues/580
https://github.com/huggingface/datasets/issues/580
580
nlp re-creates already-existing caches when run from a script, but not from within a shell
closed
2
2020-09-07T10:23:50
2020-09-07T15:19:09
2020-09-07T14:26:41
TevenLeScao
[]
`nlp` keeps creating new caches for the same file when launching `filter` from a script, but behaves correctly from within the shell.

Example: try running

```python
import nlp

hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 1)
```

twice. If launched from a `file.py` script, the cache will be re-created the second time. If launched as 3 shell/`ipython` commands, `nlp` will correctly re-use the cache.

As observed with @lhoestq.
false
694,947,599
https://api.github.com/repos/huggingface/datasets/issues/579
https://github.com/huggingface/datasets/pull/579
579
Doc metrics
closed
0
2020-09-07T10:15:24
2020-09-10T13:06:11
2020-09-10T13:06:10
thomwolf
[]
Adding documentation on metrics loading/using/sharing
true
694,849,940
https://api.github.com/repos/huggingface/datasets/issues/578
https://github.com/huggingface/datasets/pull/578
578
Add CommonGen Dataset
closed
0
2020-09-07T08:17:17
2020-09-07T11:50:29
2020-09-07T11:49:07
JetRunner
[]
CC Authors: @yuchenlin @MichaelZhouwang
true
694,607,148
https://api.github.com/repos/huggingface/datasets/issues/577
https://github.com/huggingface/datasets/issues/577
577
Some languages in wikipedia dataset are not loading
closed
16
2020-09-07T01:16:29
2023-04-11T22:50:48
2022-10-11T11:16:04
gaguilar
[]
Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, and `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:

```
import nlp

langs = ['ar', 'af', 'an']

for lang in langs:
    data = nlp.load_dataset('wikipedia', f'20200501.{lang}', beam_runner='DirectRunner', split='train')
    print(lang, len(data))
```

Here's what I see for 'ar' (it gets stuck there):

```
Downloading and preparing dataset wikipedia/20200501.ar (download: Unknown size, generated: Unknown size, post-processed: Unknown sizetotal: Unknown size) to /home/gaguilar/.cache/huggingface/datasets/wikipedia/20200501.ar/1.0.0/7be7f4324255faf70687be8692de57cf79197afdc33ff08d6a04ed602df32d50...
```

Note that those languages are indeed in the list of expected languages. Any suggestions on how to work around this? Thanks!
false
694,348,645
https://api.github.com/repos/huggingface/datasets/issues/576
https://github.com/huggingface/datasets/pull/576
576
Fix the code block in doc
closed
1
2020-09-06T11:40:55
2020-09-07T07:37:32
2020-09-07T07:37:18
JetRunner
[]
true
693,691,611
https://api.github.com/repos/huggingface/datasets/issues/575
https://github.com/huggingface/datasets/issues/575
575
Couldn't reach certain URLs, and for the ones that can be reached, the code just blocks after downloading.
closed
6
2020-09-04T21:46:25
2020-09-22T10:41:36
2020-09-22T10:41:36
sudarshan85
[]
Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the last few lines): ``` /net/vaosl01/opt/NFS/su0/miniconda3/envs/hf/lib/python3.7/site-packages/nlp/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only) 354 " to False." 355 ) --> 356 raise ConnectionError("Couldn't reach {}".format(url)) 357 358 # From now on, connected is True. ConnectionError: Couldn't reach https://firebasestorage.googleapis.com/v0/b/mtl-sentence-representations.appspot.com/o/data%2Fmrpc_dev_ids.tsv?alt=media&token=ec5c0836-31d5-48f4-b431-7480817f1adc ``` I tried glue with cola and sst2. I got the same error, just instead of mrpc in the URL, it was replaced with cola and sst2. Since this was not working, I thought I'll try another dataset. So I tried downloading the imdb dataset: ``` ds = load_dataset('imdb', split='train') ``` This downloads the data, but it just blocks after that: ``` Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4.56k/4.56k [00:00<00:00, 1.38MB/s] Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.07k/2.07k [00:00<00:00, 1.15MB/s] Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown sizetotal: 207.28 MiB) to /net/vaosl01/opt/NFS/su0/huggingface/datasets/imdb/plain_text/1.0.0/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743... Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 84.1M/84.1M [00:07<00:00, 11.1MB/s] ``` I checked the folder `$HF_HOME/datasets/downloads/extracted/<id>/aclImdb`. This folder is constantly growing in size. When I navigated to the train folder within, there was no file. However, the test folder seemed to be populating. The last time I checked it was 327M. I thought the Imdb dataset was smaller than that. My questions are: 1. Why is it still blocking? Is it still downloading? 2. I specified split as train, so why is the test folder being populated? 3. I read somewhere that after downloading, `nlp` converts the text files into some sort of `arrow` files, which will also take a while. Is this also happening here? Thanks.
false
693,364,853
https://api.github.com/repos/huggingface/datasets/issues/574
https://github.com/huggingface/datasets/pull/574
574
Add modules cache
closed
2
2020-09-04T16:30:03
2020-09-22T10:27:08
2020-09-07T09:01:35
lhoestq
[]
As discussed in #554, we should use a modules cache directory outside of the python packages directory, since we may not have write permissions there. I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`. In this directory, a module `nlp_modules` is created so that datasets can be added to `nlp_modules.datasets` and metrics to `nlp_modules.metrics`. `nlp_modules` doesn't exist on PyPI. If someone using cloudpickle still wants the downloaded dataset/metric scripts to be inside the nlp directory, it is still possible to change the environment variable HF_MODULES_CACHE to be a path inside the nlp lib.
true
693,091,790
https://api.github.com/repos/huggingface/datasets/issues/573
https://github.com/huggingface/datasets/pull/573
573
Faster caching for text dataset
closed
0
2020-09-04T11:58:34
2020-09-04T12:53:24
2020-09-04T12:53:23
lhoestq
[]
As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time. To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each file to get a hash.
true
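A hedged sketch of the hashing strategy described above (the function name is my assumption): derive the cache key from each file's path and last-modified timestamp rather than from its full contents.

```python
import hashlib
import os

def data_files_fingerprint(paths):
    """Cheap cache key: hash path + mtime per file instead of file contents."""
    h = hashlib.sha256()
    for p in paths:
        h.update(p.encode("utf-8"))
        h.update(str(os.path.getmtime(p)).encode("utf-8"))
    return h.hexdigest()
```

Hashing a handful of path/mtime strings is O(number of files) rather than O(total bytes), which is where the speed-up comes from; the trade-off is that a file rewritten with identical contents still invalidates the cache.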
692,598,231
https://api.github.com/repos/huggingface/datasets/issues/572
https://github.com/huggingface/datasets/pull/572
572
Add CLUE Benchmark (11 datasets)
closed
3
2020-09-04T01:57:40
2020-09-07T09:59:11
2020-09-07T09:59:10
JetRunner
[]
Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE).
true
692,109,287
https://api.github.com/repos/huggingface/datasets/issues/571
https://github.com/huggingface/datasets/pull/571
571
Serialization
closed
4
2020-09-03T16:21:38
2020-09-07T07:46:08
2020-09-07T07:46:07
lhoestq
[]
I added `save` and `load` method to serialize/deserialize a dataset object in a folder. It moves the arrow files there (or write them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info that are in a separate file `dataset_info.json`. Example: ```python import nlp squad = nlp.load_dataset("squad", split="train") squad.save("tmp/squad") squad = nlp.Dataset.load("tmp/squad") ``` `ls tmp/squad` ``` dataset_info.json squad-train.arrow state.json ``` `cat tmp/squad/state.json` ```json { "_data": null, "_data_files": [ { "filename": "squad-train.arrow", "skip": 0, "take": 87599 } ], "_fingerprint": "61f452797a686bc1", "_format_columns": null, "_format_kwargs": {}, "_format_type": null, "_indexes": {}, "_indices": null, "_indices_data_files": [], "_inplace_history": [ { "transforms": [] } ], "_output_all_columns": false, "_split": "train" } ``` `cat tmp/squad/dataset_info.json` ```json { "builder_name": "squad", "citation": "@article{2016arXiv160605250R,\n author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},\n Konstantin and {Liang}, Percy},\n title = \"{SQuAD: 100,000+ Questions for Machine Comprehension of Text}\",\n journal = {arXiv e-prints},\n year = 2016,\n eid = {arXiv:1606.05250},\n pages = {arXiv:1606.05250},\narchivePrefix = {arXiv},\n eprint = {1606.05250},\n}\n", "config_name": "plain_text", "dataset_size": 89789763, "description": "Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.\n", "download_checksums": { "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json": { "checksum": "95aa6a52d5d6a735563366753ca50492a658031da74f301ac5238b03966972c9", "num_bytes": 4854279 }, "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json": { "checksum": "3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955", "num_bytes": 30288272 } }, "download_size": 35142551, "features": { "answers": { "_type": "Sequence", "feature": { "answer_start": { "_type": "Value", "dtype": "int32", "id": null }, "text": { "_type": "Value", "dtype": "string", "id": null } }, "id": null, "length": -1 }, "context": { "_type": "Value", "dtype": "string", "id": null }, "id": { "_type": "Value", "dtype": "string", "id": null }, "question": { "_type": "Value", "dtype": "string", "id": null }, "title": { "_type": "Value", "dtype": "string", "id": null } }, "homepage": "https://rajpurkar.github.io/SQuAD-explorer/", "license": "", "post_processed": { "features": null, "resources_checksums": { "train": {}, "train[:10%]": {} } }, "post_processing_size": 0, "size_in_bytes": 124932314, "splits": { "train": { "dataset_name": "squad", "name": "train", "num_bytes": 79317110, "num_examples": 87599 }, "validation": { "dataset_name": "squad", "name": "validation", "num_bytes": 10472653, "num_examples": 10570 } }, "supervised_keys": null, "version": { "description": "New split API (https://tensorflow.org/datasets/splits)", "major": 1, "minor": 0, "nlp_version_to_prepare": null, "patch": 0, "version_str": "1.0.0" } } ```
true
691,846,397
https://api.github.com/repos/huggingface/datasets/issues/570
https://github.com/huggingface/datasets/pull/570
570
add reuters21578 dataset
closed
0
2020-09-03T10:25:47
2020-09-03T10:46:52
2020-09-03T10:46:51
jplu
[]
Reopening this PR after the revert of the previous merge.
true
691,832,720
https://api.github.com/repos/huggingface/datasets/issues/569
https://github.com/huggingface/datasets/pull/569
569
Revert "add reuters21578 dataset"
closed
0
2020-09-03T10:06:16
2020-09-03T10:07:13
2020-09-03T10:07:12
jplu
[]
Reverts huggingface/nlp#471
true
691,638,656
https://api.github.com/repos/huggingface/datasets/issues/568
https://github.com/huggingface/datasets/issues/568
568
`metric.compute` throws `ArrowInvalid` error
closed
3
2020-09-03T04:56:57
2020-10-05T16:33:53
2020-10-05T16:33:53
ibeltagy
[]
I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly, so I can't easily reproduce it. This is using `nlp==0.4.0`.

```
  File "/home/beltagy/trainer.py", line 92, in validation_step
    rouge_scores = rouge.compute(predictions=generated_str, references=gold_str, rouge_types=['rouge2', 'rouge1', 'rougeL'])
  File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 224, in compute
    self.finalize(timeout=timeout)
  File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 213, in finalize
    self.data = Dataset(**reader.read_files(node_files))
  File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 217, in read_files
    dataset_kwargs = self._read_files(files=files, info=self._info, original_instructions=original_instructions)
  File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 162, in _read_files
    pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
  File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 276, in _get_dataset_from_filename
    f = pa.ipc.open_stream(mmap)
  File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 173, in open_stream
    return RecordBatchStreamReader(source)
  File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 64, in __init__
    self._open(source)
  File "pyarrow/ipc.pxi", line 469, in pyarrow.lib._RecordBatchStreamReader._open
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Tried reading schema message, was null or length 0
```
false
691,430,245
https://api.github.com/repos/huggingface/datasets/issues/567
https://github.com/huggingface/datasets/pull/567
567
Fix BLEURT metrics for backward compatibility
closed
0
2020-09-02T21:22:35
2020-09-03T07:29:52
2020-09-03T07:29:50
thomwolf
[]
Fix #565
true
691,160,208
https://api.github.com/repos/huggingface/datasets/issues/566
https://github.com/huggingface/datasets/pull/566
566
Remove logger pickling to fix gg colab issues
closed
0
2020-09-02T16:16:21
2020-09-03T16:31:53
2020-09-03T16:31:52
lhoestq
[]
`logger` objects are not picklable in Google Colab, contrary to `logger` objects in jupyter notebooks or in python shells. This creates some issues in Google Colab right now. Indeed, by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in an error (full stacktrace [here](http://pastebin.fr/64330)):

```python
/usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__()

TypeError: no default __reduce__ due to non-trivial __cinit__
```

To fix that, I no longer dump the transform (`_map_single`, `select`, etc.), but the full name only (`nlp.arrow_dataset.Dataset._map_single`, `nlp.arrow_dataset.Dataset.select`, etc.)
true
691,039,121
https://api.github.com/repos/huggingface/datasets/issues/565
https://github.com/huggingface/datasets/issues/565
565
No module named 'nlp.logging'
closed
2
2020-09-02T13:49:50
2020-09-03T07:29:50
2020-09-03T07:29:50
melody-ju
[]
Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 >>> bleurt = nlp.load_metric("bleurt") Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 443, in load_metric metric_cls = import_main_class(module_path, dataset=False) File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/load.py", line 61, in import_main_class module = importlib.import_module(module_path) File "/home/melody/anaconda3/envs/transformers/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/home/melody/anaconda3/envs/transformers/lib/python3.6/site-packages/nlp/metrics/bleurt/43448cf2959ea81d3ae0e71c5c8ee31dc15eed9932f197f5f50673cbcecff2b5/bleurt.py", line 20, in <module> from nlp.logging import get_logger ModuleNotFoundError: No module named 'nlp.logging' ``` Just to show once again that I can't import the logging module: ``` >>> import nlp 2020-09-02 13:48:38.190621: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 >>> nlp.__version__ '0.4.0' >>> from nlp.logging import get_logger Traceback (most recent call last): File "<stdin>", line 1, in <module> ModuleNotFoundError: No module named 'nlp.logging' ```
false
691,000,020
https://api.github.com/repos/huggingface/datasets/issues/564
https://github.com/huggingface/datasets/pull/564
564
Wait for writing in distributed metrics
closed
7
2020-09-02T12:58:50
2020-09-09T09:13:23
2020-09-09T09:13:22
lhoestq
[]
There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes hadn't started writing yet. To fix that I added a custom locking mechanism that waits for the file to exist before trying to read it.
true
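A simplified sketch of the waiting mechanism described above (assumed names and timeouts, not the exact implementation): process 0 polls until every worker's cache file exists before reading.

```python
import os
import time

def wait_for_files(filenames, poll_interval=0.05, timeout=100.0):
    """Block until all worker cache files exist, or fail after `timeout` seconds."""
    start = time.time()
    while not all(os.path.exists(f) for f in filenames):
        if time.time() - start > timeout:
            raise TimeoutError(f"metric cache files still missing: {filenames}")
        time.sleep(poll_interval)
```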
690,908,674
https://api.github.com/repos/huggingface/datasets/issues/563
https://github.com/huggingface/datasets/pull/563
563
[Large datasets] Speed up download and processing
closed
2
2020-09-02T10:31:54
2020-09-09T09:03:33
2020-09-09T09:03:32
thomwolf
[]
Various improvements to speed up creation and processing of large-scale datasets.

Currently:
- distributed downloads
- remove etag from datafiles hashes to spare a request when restarting a failed download
true
690,907,604
https://api.github.com/repos/huggingface/datasets/issues/562
https://github.com/huggingface/datasets/pull/562
562
[Reproducibility] Allow pinning versions of datasets/metrics
closed
1
2020-09-02T10:30:13
2023-09-24T09:49:42
2020-09-09T13:04:54
thomwolf
[]
Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of dataset and metric scripts:

```
dataset = nlp.load_dataset('squad', version='1.0.0')
metric = nlp.load_metric('squad', version='1.0.0')
```

Notes:
- version numbers are the release versions of the library
- currently only possible for canonical datasets/metrics, i.e. those integrated in the GitHub repo of the library
true
690,871,415
https://api.github.com/repos/huggingface/datasets/issues/561
https://github.com/huggingface/datasets/pull/561
561
Made `share_dataset` more readable
closed
0
2020-09-02T09:34:48
2020-09-03T09:00:30
2020-09-03T09:00:29
TevenLeScao
[]
true
690,488,764
https://api.github.com/repos/huggingface/datasets/issues/560
https://github.com/huggingface/datasets/issues/560
560
Using custom DownloadConfig results in an error
closed
6
2020-09-01T22:23:02
2022-10-04T17:23:45
2022-10-04T17:23:45
ynouri
[]
## Version / Environment

Ubuntu 18.04
Python 3.6.8
nlp 0.4.0

## Description

Loading the `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error.

## How to reproduce

### Example without DownloadConfig --> works

```python
import os
os.environ["HF_HOME"] = "/data/hf-test-without-dl-config-01/"

import logging
import nlp

logging.basicConfig(level=logging.INFO)

if __name__ == "__main__":
    imdb = nlp.load_dataset(path="imdb")
```

### Example with DownloadConfig --> doesn't work

```python
import os
os.environ["HF_HOME"] = "/data/hf-test-with-dl-config-01/"

import logging
import nlp
from nlp.utils import DownloadConfig

logging.basicConfig(level=logging.INFO)

if __name__ == "__main__":
    download_config = DownloadConfig()
    imdb = nlp.load_dataset(path="imdb", download_config=download_config)
```

Error traceback:

```
Traceback (most recent call last):
  File "/.../example_with_dl_config.py", line 13, in <module>
    imdb = nlp.load_dataset(path="imdb", download_config=download_config)
  File "/.../python3.6/python3.6/site-packages/nlp/load.py", line 549, in load_dataset
    download_config=download_config, download_mode=download_mode, ignore_verifications=ignore_verifications,
  File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 463, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/.../python3.6/python3.6/site-packages/nlp/builder.py", line 518, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/.../python3.6/python3.6/site-packages/nlp/datasets/imdb/76cdbd7249ea3548c928bbf304258dab44d09cd3638d9da8d42480d1d1be3743/imdb.py", line 86, in _split_generators
    arch_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
  File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 220, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 158, in download
    self._record_sizes_checksums(url_or_urls, downloaded_path_or_paths)
  File "/.../python3.6/python3.6/site-packages/nlp/utils/download_manager.py", line 108, in _record_sizes_checksums
    self._recorded_sizes_checksums[url] = get_size_checksum_dict(path)
  File "/.../python3.6/python3.6/site-packages/nlp/utils/info_utils.py", line 79, in get_size_checksum_dict
    with open(path, "rb") as f:
IsADirectoryError: [Errno 21] Is a directory: '/data/hf-test-with-dl-config-01/datasets/extracted/b6802c5b61824b2c1f7dbf7cda6696b5f2e22214e18d171ce1ed3be90c931ce5'
```
false
690,411,263
https://api.github.com/repos/huggingface/datasets/issues/559
https://github.com/huggingface/datasets/pull/559
559
Adding the KILT knowledge source and tasks
closed
1
2020-09-01T20:05:13
2020-09-04T18:05:47
2020-09-04T18:05:47
yjernite
[]
This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:

```
import nlp

kilt_wikipedia = nlp.load_dataset('kilt_wikipedia')
kilt_tasks = nlp.load_dataset('kilt_tasks')
triviaqa = nlp.load_dataset('trivia_qa', 'unfiltered.nocontext')

triviaqa_map = {}
for k in ['train', 'validation', 'test']:
    triviaqa_map = dict([(q_id, i) for i, q_id in enumerate(triviaqa[k]['question_id'])])
    kilt_tasks[k + '_triviaqa'] = kilt_tasks[k + '_triviaqa'].filter(lambda x: x['id'] in triviaqa_map)
    kilt_tasks[k + '_triviaqa'].map(lambda x: {'input': triviaqa[k][triviaqa_map[x['id']]]['question']})
```

It would be great to have the dataset by Monday, which is when the paper should land on arXiv and @fabiopetroni is planning on tweeting about the paper and the `facebookresearch` repository for the dataset.
true
690,318,105
https://api.github.com/repos/huggingface/datasets/issues/558
https://github.com/huggingface/datasets/pull/558
558
Rerun pip install -e
closed
0
2020-09-01T17:24:39
2020-09-01T17:24:51
2020-09-01T17:24:50
lhoestq
[]
Hopefully it fixes the github actions
true
690,220,135
https://api.github.com/repos/huggingface/datasets/issues/557
https://github.com/huggingface/datasets/pull/557
557
Fix a few typos
closed
0
2020-09-01T15:03:24
2020-09-02T07:39:08
2020-09-02T07:39:07
julien-c
[]
true
690,218,423
https://api.github.com/repos/huggingface/datasets/issues/556
https://github.com/huggingface/datasets/pull/556
556
Add DailyDialog
closed
0
2020-09-01T15:01:15
2020-09-03T15:42:03
2020-09-03T15:38:39
julien-c
[]
http://yanran.li/dailydialog.html https://arxiv.org/pdf/1710.03957.pdf
true