Dataset columns (type, min-max length):
html_url: stringlengths 48-51
title: stringlengths 5-155
comments: stringlengths 63-15.7k
body: stringlengths 0-17.7k
comment_length: int64 16-949
text: stringlengths 164-23.7k
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
@lhoestq, we have two Pull Requests to implement: - Dataset.add_item: #1870 - Dataset.add_column: #2145 which add a single row or column, respectively. The request here is to implement the concatenation of *multiple* rows/columns. Am I right? We should agree on the API: - `concatenate_datasets` with `axis`? - another Dataset method name?
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
51
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) @lhoestq, we have two Pull Requests to implement: - Dataset.add_item: #1870 - Dataset.add_column: #2145 which add a single row or column, repectively. The request here is to implement the concatenation of *multiple* rows/columns. Am I right? We should agree on the API: - `concatenate_datasets` with `axis`? - other Dataset method name?
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
For the API, I like `concatenate_datasets` with `axis` personally :) From a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (append columns). Regarding what we need to implement: The axis=0 is already supported and is the current behavior of `concatenate_datasets`. Also `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library). To implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally. I have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column. Maybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ? `axis` could also be an argument of `ConcatenationTable.from_tables`
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
158
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) For the API, I like `concatenate_datasets` with `axis` personally :) From a list of `Dataset` objects, it would concatenate them to a new `Dataset` object backed by a `ConcatenationTable`, that is the concatenation of the tables of each input dataset. The concatenation is either on axis=0 (append rows) or on axis=1 (append columns). Regarding what we need to implement: The axis=0 is already supported and is the current behavior of `concatenate_datasets`. Also `add_item` is not needed to implement axis=1 (though it's an awesome addition to this library). To implement axis=1, we either need `add_column` or a `ConcatenationTable` constructor to concatenate tables horizontally. I have a preference for using a `ConcatenationTable` constructor because this way we can end up with a `ConcatenationTable` with only 1 additional block per table, while `add_column` would add 1 block per new column. Maybe we can simply have an equivalent of `ConcatenationTable.from_tables` but for axis=1 ? `axis` could also be an argument of `ConcatenationTable.from_tables`
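The `axis` argument discussed above was eventually added to `concatenate_datasets`. A minimal sketch of how that API looks, assuming a `datasets` version released after this issue was resolved (the toy columns and values are illustrative only):
```
from datasets import Dataset, concatenate_datasets

# Two small in-memory datasets, purely for illustration
ds_rows_a = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds_rows_b = Dataset.from_dict({"text": ["c", "d"], "label": [1, 0]})

# axis=0 (default): append rows; the schemas must match
ds_vertical = concatenate_datasets([ds_rows_a, ds_rows_b])
print(len(ds_vertical))  # 4

# axis=1: append columns; the datasets must have the same number of rows
# and non-overlapping column names
ds_extra_col = Dataset.from_dict({"score": [0.1, 0.9]})
ds_horizontal = concatenate_datasets([ds_rows_a, ds_extra_col], axis=1)
print(ds_horizontal.column_names)  # ['text', 'label', 'score']
```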
https://github.com/huggingface/datasets/issues/849
Load amazon dataset
Thanks for reporting ! We plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls. Also I think the bullet points formatting has been fixed
Hi, I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage needed to actually load the dataset. E.g., the API usage shown on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews") ``` What I had to do instead (the generated error does point me in the right direction, though) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews", 'Books_v1_00') ``` Also, there is a formatting issue: the bullet list in the description is not rendered on new lines. Can I work on it?
34
Load amazon dataset Hi, I was going through amazon_us_reviews dataset and found that example API usage given on website is different from the API usage while loading dataset. Eg. what API usage is on the [website](https://huggingface.co/datasets/amazon_us_reviews) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews") ``` How it is when I tried (the error generated does point me to the right direction though) ``` from datasets import load_dataset dataset = load_dataset("amazon_us_reviews", 'Books_v1_00') ``` Also, there is some issue with formatting as it's not showing bullet list in description with new line. Can I work on it? Thanks for reporting ! We plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls. Also I think the bullet points formatting has been fixed
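A short sketch of the config-name usage being discussed, assuming a `datasets` version that exposes `get_dataset_config_names`; the printed output is illustrative:
```
from datasets import get_dataset_config_names, load_dataset

# amazon_us_reviews has no default config, so a per-category config name
# such as "Books_v1_00" must be passed as the second argument.
configs = get_dataset_config_names("amazon_us_reviews")
print(configs[:3])  # a few of the available per-category configs

dataset = load_dataset("amazon_us_reviews", "Books_v1_00", split="train")
print(dataset[0])
```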
https://github.com/huggingface/datasets/issues/848
Error when concatenate_datasets
As you can see in the error, the test checks whether `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. The indices mapping corresponds to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example, if you do `dataset.train_test_split`, then the resulting train and test datasets will each have an indices mapping to tell which examples are in train and which ones are in test. Before saving your datasets to disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example, taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.
Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks!
172
Error when concatenate_datasets Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks! As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. The indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test. Before saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets.
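A minimal sketch of the suggested fix, assuming the indices mapping was introduced by `train_test_split` and a hypothetical source file `data/all.csv`:
```
from datasets import load_dataset, load_from_disk, concatenate_datasets

# Splitting creates an indices mapping on top of the shared data table
dataset = load_dataset("csv", data_files="data/all.csv", split="train")
splits = dataset.train_test_split(test_size=0.2)

# flatten_indices() materializes each split into its own data table,
# removing the indices mapping before saving to disk
splits["train"].flatten_indices().save_to_disk("data/train_dataset")
splits["test"].flatten_indices().save_to_disk("data/test_dataset")

# The reloaded datasets can now be concatenated without the ValueError
trn_dataset = load_from_disk("data/train_dataset")
test_dataset = load_from_disk("data/test_dataset")
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```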
https://github.com/huggingface/datasets/issues/848
Error when concatenate_datasets
> As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. > > The indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test. > > Before saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets. `dataset.flatten_indices()` solved my problem, thanks so much!
Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks!
184
Error when concatenate_datasets Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks! > As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory. > > The indices mapping correspond to a mapping on top of the data table that is used to re-order/select a sample of the original data table. For example if you do `dataset.train_test_split`, then the resulting train and test datasets will have both an indices mapping to tell which examples are in train and which ones in test. > > Before saving your datasets on disk, you should call `dataset.flatten_indices()` to remove the indices mapping. It should fix your issue. Under the hood it will create a new data table using the indices mapping. The new data table is going to be a subset of the old one (for example taking only the test set examples), and since the indices mapping will be gone you'll be able to concatenate your datasets. `dataset.flatten_indices()` solved my problem, thanks so much!
https://github.com/huggingface/datasets/issues/848
Error when concatenate_datasets
@lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list, or I can do it when I get to it)
Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks!
31
Error when concatenate_datasets Hello, when I concatenate two dataset loading from disk, I encountered a problem: ``` test_dataset = load_from_disk('data/test_dataset') trn_dataset = load_from_disk('data/train_dataset') train_dataset = concatenate_datasets([trn_dataset, test_dataset]) ``` And it reported ValueError blow: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-38-74fa525512ca> in <module> ----> 1 train_dataset = concatenate_datasets([trn_dataset, test_dataset]) /opt/miniconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py in concatenate_datasets(dsets, info, split) 2547 "However datasets' indices {} come from memory and datasets' indices {} come from disk.".format( 2548 [i for i in range(len(dsets)) if indices_mappings_in_memory[i]], -> 2549 [i for i in range(len(dsets)) if not indices_mappings_in_memory[i]], 2550 ) 2551 ) ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk. However datasets' indices [1] come from memory and datasets' indices [0] come from disk. ``` But it's curious both of my datasets loading from disk, so I check the source code in `arrow_dataset.py` about the Error: ``` trn_dataset._data_files # output [{'filename': 'data/train_dataset/csv-train.arrow', 'skip': 0, 'take': 593264}] test_dataset._data_files # output [{'filename': 'data/test_dataset/csv-test.arrow', 'skip': 0, 'take': 424383}] print([not dset._data_files for dset in [trn_dataset, test_dataset]]) # [False, False] # And I tested the code the same as arrow_dataset, but nothing happened dsets = [trn_dataset, test_dataset] dsets_in_memory = [not dset._data_files for dset in dsets] if any(dset_in_memory != dsets_in_memory[0] for dset_in_memory in dsets_in_memory): raise ValueError( "Datasets should ALL come from memory, or should ALL come from disk.\n" "However datasets {} come from memory and datasets {} come from disk.".format( [i for i in range(len(dsets)) if dsets_in_memory[i]], [i for i in range(len(dsets)) if not dsets_in_memory[i]], ) ) ``` Any suggestions would be greatly appreciated! Thanks! @lhoestq we can add a mention of `dataset.flatten_indices()` in the error message (no rush, just put it on your TODO list or I can do it when I come at it)
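A sketch of what the suggested hint could look like; this is illustrative only, not the actual check in `arrow_dataset.py`:
```
from typing import List

def check_indices_location(indices_mappings_in_memory: List[bool]) -> None:
    # Hypothetical version of the existing check, extended with a
    # pointer to dataset.flatten_indices()
    if any(m != indices_mappings_in_memory[0] for m in indices_mappings_in_memory):
        in_memory = [i for i, m in enumerate(indices_mappings_in_memory) if m]
        on_disk = [i for i, m in enumerate(indices_mappings_in_memory) if not m]
        raise ValueError(
            "Datasets' indices should ALL come from memory, or should ALL come from disk.\n"
            f"However datasets' indices {in_memory} come from memory and "
            f"datasets' indices {on_disk} come from disk.\n"
            "You can call `dataset.flatten_indices()` on each dataset before "
            "concatenating to remove its indices mapping."
        )

check_indices_location([True, False])  # raises with the extended hint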
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
It looks like an issue with wandb/tqdm here. We're using the `multiprocess` library instead of the built-in Python `multiprocessing` package to support various types of mapping functions. Maybe there's some sort of incompatibility. Could you make a minimal script to reproduce, or a Google Colab?
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
46
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` It looks like an issue with wandb/tqdm here. We're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility. Could you make a minimal script to reproduce or a google colab ?
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
hi facing the same issue here - `AssertionError: Caught AssertionError in DataLoader worker process 0. Original Traceback (most recent call last): File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit stream.write(msg) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", line 100, in new_write cb(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 723, in _console_callback self._backend.interface.publish_output(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 153, in publish_output self._publish_output(o) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 158, in _publish_output self._publish(rec) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 456, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "<ipython-input-8-a4d9a08d114e>", line 20, in __getitem__ return_token_type_ids=True File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2405, in encode_plus **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2125, in _get_padding_truncation_strategies "Truncation was not explicitly activated but `max_length` is provided a specific value, " File "/usr/lib/python3.6/logging/__init__.py", line 1320, in warning self._log(WARNING, msg, args, **kwargs) File "/usr/lib/python3.6/logging/__init__.py", line 1444, in _log self.handle(record) File "/usr/lib/python3.6/logging/__init__.py", line 1454, in handle self.callHandlers(record) File "/usr/lib/python3.6/logging/__init__.py", line 1516, in callHandlers hdlr.handle(record) File "/usr/lib/python3.6/logging/__init__.py", line 865, in handle self.emit(record) File "/usr/lib/python3.6/logging/__init__.py", line 1000, in emit self.handleError(record) File "/usr/lib/python3.6/logging/__init__.py", line 917, in handleError sys.stderr.write('--- Logging error ---\n') File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", line 100, in new_write cb(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 723, in _console_callback self._backend.interface.publish_output(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 153, in publish_output self._publish_output(o) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 158, in _publish_output self._publish(rec) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 456, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == 
os.getpid(), 'can only test a child process' AssertionError: can only test a child process`
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
293
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` hi facing the same issue here - `AssertionError: Caught AssertionError in DataLoader worker process 0. 
Original Traceback (most recent call last): File "/usr/lib/python3.6/logging/__init__.py", line 996, in emit stream.write(msg) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", line 100, in new_write cb(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 723, in _console_callback self._backend.interface.publish_output(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 153, in publish_output self._publish_output(o) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 158, in _publish_output self._publish(rec) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 456, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop data = fetcher.fetch(index) File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/usr/local/lib/python3.6/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "<ipython-input-8-a4d9a08d114e>", line 20, in __getitem__ return_token_type_ids=True File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2405, in encode_plus **kwargs, File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 2125, in _get_padding_truncation_strategies "Truncation was not explicitly activated but `max_length` is provided a specific value, " File "/usr/lib/python3.6/logging/__init__.py", line 1320, in warning self._log(WARNING, msg, args, **kwargs) File "/usr/lib/python3.6/logging/__init__.py", line 1444, in _log self.handle(record) File "/usr/lib/python3.6/logging/__init__.py", line 1454, in handle self.callHandlers(record) File "/usr/lib/python3.6/logging/__init__.py", line 1516, in callHandlers hdlr.handle(record) File "/usr/lib/python3.6/logging/__init__.py", line 865, in handle self.emit(record) File "/usr/lib/python3.6/logging/__init__.py", line 1000, in emit self.handleError(record) File "/usr/lib/python3.6/logging/__init__.py", line 917, in handleError sys.stderr.write('--- Logging error ---\n') File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", line 100, in new_write cb(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 723, in _console_callback self._backend.interface.publish_output(name, data) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 153, in publish_output self._publish_output(o) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 158, in _publish_output self._publish(rec) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/interface/interface.py", line 456, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process`
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
It looks like this warning: "Truncation was not explicitly activated but `max_length` is provided a specific value" is not handled well by wandb. The error occurs when calling the tokenizer. Maybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning? Otherwise I don't know why wandb would fail on a warning. Maybe one of its logging handlers has some issues with the logging of tokenizers. Maybe @n1t0 knows more about this?
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
80
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` It looks like this warning : "Truncation was not explicitly activated but max_length is provided a specific value, " is not handled well by wandb. The error occurs when calling the tokenizer. Maybe you can try to specify `truncation=True` when calling the tokenizer to remove the warning ? Otherwise I don't know why wandb would fail on a warning. Maybe one of its logging handlers have some issues with the logging of tokenizers. Maybe @n1t0 knows more about this ?
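A sketch of the `truncation=True` suggestion applied to the original `tokenizer_fn`; the model name and `max_length` are placeholders, not taken from the issue:
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # example model

def tokenizer_fn(example):
    # Passing truncation (and max_length) explicitly avoids the
    # "Truncation was not explicitly activated..." warning that wandb's
    # console redirection appears to choke on
    return tokenizer.batch_encode_plus(
        example["text"], truncation=True, max_length=512
    )

# ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=["text"])
```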
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
I'm having a similar issue but when I try to do multiprocessing with the `DataLoader` Code to reproduce: ``` from datasets import load_dataset book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]') book_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=5000) book_corpus.set_format(type='torch', columns=['text', "input_ids", "attention_mask", "token_type_ids"]) from transformers import DataCollatorForWholeWordMask from transformers import Trainer, TrainingArguments data_collator = DataCollatorForWholeWordMask( tokenizer=tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments( output_dir="./mobile_linear_att_8L_128_128_03layerdrop_shared", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=64, save_steps=50, save_total_limit=2, logging_first_step=True, warmup_steps=100, logging_steps=50, gradient_accumulation_steps=1, fp16=True, **dataloader_num_workers=10**, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=book_corpus, tokenizer=tokenizer) trainer.train() ``` ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <timed eval> in <module> ~/anaconda3/envs/tfm/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial) 869 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control) 870 --> 871 for step, inputs in enumerate(epoch_iterator): 872 873 # Skip past any already trained steps if resuming training ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self) 433 if self._sampler_iter is None: 434 self._reset() --> 435 data = self._next_data() 436 self._num_yielded += 1 437 if self._dataset_kind == _DatasetKind.Iterable and \ ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self) 1083 else: 1084 del self._task_info[idx] -> 1085 return self._process_data(data) 1086 1087 def _try_put_index(self): ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_data(self, data) 1109 self._try_put_index() 1110 if isinstance(data, ExceptionWrapper): -> 1111 data.reraise() 1112 return data 1113 ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/_utils.py in reraise(self) 426 # have message field 427 raise self.exc_type(message=msg) --> 428 raise self.exc_type(msg) 429 430 AssertionError: Caught AssertionError in DataLoader worker process 0. 
Original Traceback (most recent call last): File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop data = fetcher.fetch(index) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1087, in __getitem__ format_kwargs=self._format_kwargs, File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1074, in _getitem format_kwargs=format_kwargs, File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs v = map_nested(command, v, **map_nested_kwargs) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 851, in command return torch.tensor(x, **format_kwargs) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py", line 101, in _showwarnmsg _showwarnmsg_impl(msg) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py", line 30, in _showwarnmsg_impl file.write(text) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 100, in new_write cb(name, data) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 723, in _console_callback self._backend.interface.publish_output(name, data) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 153, in publish_output self._publish_output(o) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 158, in _publish_output self._publish(rec) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 456, in _publish if self._process and not self._process.is_alive(): File "/home/ad/anaconda3/envs/tfm/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process ``` As a workaround I have commented line 456 and 457 in `/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py`
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
383
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` I'm having a similar issue but when I try to do multiprocessing with the `DataLoader` Code to reproduce: ``` from datasets import load_dataset book_corpus = load_dataset('bookcorpus', 'plain_text', cache_dir='/home/ad/Desktop/bookcorpus', split='train[:1%]') book_corpus = book_corpus.map(encode, batched=True, num_proc=20, load_from_cache_file=True, batch_size=5000) book_corpus.set_format(type='torch', columns=['text', "input_ids", "attention_mask", "token_type_ids"]) from transformers import 
DataCollatorForWholeWordMask from transformers import Trainer, TrainingArguments data_collator = DataCollatorForWholeWordMask( tokenizer=tokenizer, mlm=True, mlm_probability=0.15) training_args = TrainingArguments( output_dir="./mobile_linear_att_8L_128_128_03layerdrop_shared", overwrite_output_dir=True, num_train_epochs=1, per_device_train_batch_size=64, save_steps=50, save_total_limit=2, logging_first_step=True, warmup_steps=100, logging_steps=50, gradient_accumulation_steps=1, fp16=True, **dataloader_num_workers=10**, ) trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=book_corpus, tokenizer=tokenizer) trainer.train() ``` ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <timed eval> in <module> ~/anaconda3/envs/tfm/lib/python3.6/site-packages/transformers/trainer.py in train(self, model_path, trial) 869 self.control = self.callback_handler.on_epoch_begin(self.args, self.state, self.control) 870 --> 871 for step, inputs in enumerate(epoch_iterator): 872 873 # Skip past any already trained steps if resuming training ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self) 433 if self._sampler_iter is None: 434 self._reset() --> 435 data = self._next_data() 436 self._num_yielded += 1 437 if self._dataset_kind == _DatasetKind.Iterable and \ ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _next_data(self) 1083 else: 1084 del self._task_info[idx] -> 1085 return self._process_data(data) 1086 1087 def _try_put_index(self): ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_data(self, data) 1109 self._try_put_index() 1110 if isinstance(data, ExceptionWrapper): -> 1111 data.reraise() 1112 return data 1113 ~/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/_utils.py in reraise(self) 426 # have message field 427 raise self.exc_type(message=msg) --> 428 raise self.exc_type(msg) 429 430 AssertionError: Caught AssertionError in DataLoader worker process 0. 
Original Traceback (most recent call last): File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 198, in _worker_loop data = fetcher.fetch(index) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp> data = [self.dataset[idx] for idx in possibly_batched_index] File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1087, in __getitem__ format_kwargs=self._format_kwargs, File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1074, in _getitem format_kwargs=format_kwargs, File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs v = map_nested(command, v, **map_nested_kwargs) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 851, in command return torch.tensor(x, **format_kwargs) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py", line 101, in _showwarnmsg _showwarnmsg_impl(msg) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/warnings.py", line 30, in _showwarnmsg_impl file.write(text) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 100, in new_write cb(name, data) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 723, in _console_callback self._backend.interface.publish_output(name, data) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 153, in publish_output self._publish_output(o) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 158, in _publish_output self._publish(rec) File "/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 456, in _publish if self._process and not self._process.is_alive(): File "/home/ad/anaconda3/envs/tfm/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process ``` As a workaround I have commented line 456 and 457 in `/home/ad/anaconda3/envs/tfm/lib/python3.6/site-packages/wandb/sdk/interface/interface.py`
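Editing wandb's installed sources, as in the workaround above, is easy to lose on a reinstall. Below is a minimal sketch of a less invasive alternative, assuming the crash really comes from wandb's console redirection being hit inside the DataLoader worker processes: turn off W&B console capture via an environment variable before the Trainer is created, or fall back to keeping data loading in the main process. The env var name is taken from wandb's documentation at the time and should be treated as an assumption for your version.

```python
import os

# Assumption: the AssertionError is raised from wandb's stdout/stderr redirection when a
# warning is printed inside a DataLoader worker process. Disabling console capture avoids
# that code path without patching site-packages. Must be set before wandb.init() runs.
os.environ["WANDB_CONSOLE"] = "off"

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./out",
    per_device_train_batch_size=64,
    dataloader_num_workers=0,  # last-resort fallback: keep data loading in the main process
)
```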
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
Isn't it rather the pytorch warning on the use of non-writable memory for tensors that triggers this here @lhoestq? (It seems to be a warning triggered in `torch.tensor()`.)
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
29
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` Isn't it more the pytorch warning on the use of non-writable memory for tensor that trigger this here @lhoestq? (since it seems to be a warning triggered in `torch.tensor()`
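If the non-writable-tensor warning really is the trigger, a quick check is to silence that specific `UserWarning` before the `DataLoader` workers start, so nothing gets written through wandb's redirected stderr. This is only a sketch: the exact warning text varies across torch versions, so the message pattern is an assumption, and the filter is inherited by the workers only when they are forked (the Linux default).

```python
import warnings

# Assumption: torch.tensor() on a non-writeable NumPy array emits a UserWarning similar to
# "The given NumPy array is not writeable ...". Filtering it before the DataLoader workers
# are forked keeps them from printing anything to wandb's redirected console.
warnings.filterwarnings(
    "ignore",
    message=r".*not writ.?able.*",
    category=UserWarning,
)
```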
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
Yep, this time it is a warning from pytorch that causes wandb to not work properly. Could this be a wandb issue?
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
23
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` Yep this time this is a warning from pytorch that causes wandb to not work properly. Could this by a wandb issue ?
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
Hi @timothyjlaurent @gaceladri If you're running `transformers` from `master` you can try setting the env var `WAND_DISABLE=true` (from https://github.com/huggingface/transformers/pull/9896) and try again ? This issue might be related to https://github.com/huggingface/transformers/issues/9623
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
30
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` Hi @timothyjlaurent @gaceladri If you're running `transformers` from `master` you can try setting the env var `WAND_DISABLE=true` (from https://github.com/huggingface/transformers/pull/9896) and try again ? This issue might be related to https://github.com/huggingface/transformers/issues/9623
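For anyone trying the suggestion above from a notebook, here is a minimal sketch. Note that the variable documented in transformers' W&B integration is spelled `WANDB_DISABLED`; whether the shorter spelling in the comment is also honored by the version in the linked PR is not assumed here.

```python
import os

# Must be set before the Trainer is created (it is read when the W&B callback is set up).
os.environ["WANDB_DISABLED"] = "true"

# ... then build the model, datasets, TrainingArguments and Trainer as usual:
# trainer = Trainer(model=model, args=training_args, train_dataset=book_corpus, tokenizer=tokenizer)
# trainer.train()
```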
https://github.com/huggingface/datasets/issues/847
multiprocessing in dataset map "can only test a child process"
I have commented out the lines that caused my code to break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will probably check in 6 hours. I suppose that disabling wandb will work as well.
Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ```
45
multiprocessing in dataset map "can only test a child process" Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook. ``` def tokenizer_fn(example): return tokenizer.batch_encode_plus(example['text']) ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text']) ``` ``` --------------------------------------------------------------------------- RemoteTraceback Traceback (most recent call last) RemoteTraceback: """ Traceback (most recent call last): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker result = (True, func(*args, **kwds)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper out = func(self, *args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single for i in pbar: File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__ for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs): File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__ self.close() File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close super(tqdm_notebook, self).close(*args, **kwargs) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close fp_write('') File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write self.fp.write(_unicode(s)) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write cb(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback self._backend.interface.publish_output(name, data) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output self._publish_output(o) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output self._publish(rec) File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish if self._process and not self._process.is_alive(): File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive assert self._parent_pid == os.getpid(), 'can only test a child process' AssertionError: can only test a child process """ ``` I have commented the lines that cause my code break. I'm now seeing my reports on Wandb and my code does not break. I am training now, so I will check probably in 6 hours. I suppose that setting wandb disable will work as well.
https://github.com/huggingface/datasets/issues/846
Add HoVer multi-hop fact verification dataset
Hi @yjernite, I'm new but wanted to contribute. Has anyone already taken on this issue, and do you think it is suitable for newbies?
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, on which this dataset was based, notwithstanding) Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
23
Add HoVer multi-hop fact verification dataset ## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, on which this dataset was based, notwithstanding) Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). Hi @yjernite, I'm new but wanted to contribute. Has anyone already taken on this issue, and do you think it is suitable for newbies?
https://github.com/huggingface/datasets/issues/846
Add HoVer multi-hop fact verification dataset
Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md
## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, on which this dataset was based, notwithstanding) Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
39
Add HoVer multi-hop fact verification dataset ## Adding a Dataset - **Name:** HoVer - **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples - **Paper:** https://arxiv.org/abs/2011.03088 - **Data:** https://hover-nlp.github.io/ - **Motivation:** There are still few multi-hop information extraction benchmarks (HotpotQA, on which this dataset was based, notwithstanding) Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), then follow the steps here: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
Thanks for reporting! That's a bug indeed. If you want to contribute, feel free to fix this issue and open a PR :)
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
24
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2 Thanks for reporting ! That's a bug indeed If you want to contribute, feel free to fix this issue and open a PR :)
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem.
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
27
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2 This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem.
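A minimal end-to-end check once the versions line up (this assumes `datasets==1.1.2` and `bert_score>=0.3.6`, as stated above; the first call downloads a model, so it needs network access):

```python
from datasets import load_metric

metric = load_metric("bertscore")
results = metric.compute(
    predictions=["random sentences"],
    references=["random sentences"],
    lang="en",
)
# bertscore returns per-pair precision/recall/f1 plus the hashcode of the configuration used
print(results["f1"], results["hashcode"])
```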
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
Hello everyone, I think the problem is not solved: ``` from datasets import load_metric metric=load_metric('bertscore') metric.compute( predictions=predictions, references=references, lang='fr', rescale_with_baseline=True ) TypeError: get_hash() missing 2 required positional arguments: 'use_custom_baseline' and 'use_fast_tokenizer' ``` This code is produced using `Python 3.6.9 datasets==1.1.2 and bert_score==0.3.10`
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
42
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2 Hello everyone, I think the problem is not solved: ``` from datasets import load_metric metric=load_metric('bertscore') metric.compute( predictions=predictions, references=references, lang='fr', rescale_with_baseline=True ) TypeError: get_hash() missing 2 required positional arguments: 'use_custom_baseline' and 'use_fast_tokenizer' ``` This code is produced using `Python 3.6.9 datasets==1.1.2 and bert_score==0.3.10`
https://github.com/huggingface/datasets/issues/843
use_custom_baseline still produces errors for bertscore
Hi ! This has been fixed by https://github.com/huggingface/datasets/pull/2770, we'll do a new release soon to make the fix available :) In the meantime please use an older version of `bert_score`
`metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2
30
use_custom_baseline still produces errors for bertscore `metric = load_metric('bertscore')` `a1 = "random sentences"` `b1 = "random sentences"` `metric.compute(predictions = [a1], references = [b1], lang = 'en')` `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/home/stephen_chan/.cache/huggingface/modules/datasets_modules/metrics/bertscore/361e597a01a41d6cf95d94bbfb01dea16261687abc0c6c74cc9930f80488f363/bertscore.py", line 108, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() missing 1 required positional argument: 'use_custom_baseline'` Adding 'use_custom_baseline = False' as an argument produces this error `Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py", line 393, in compute output = self._compute(predictions=predictions, references=references, **kwargs) TypeError: _compute() got an unexpected keyword argument 'use_custom_baseline'` This is on Ubuntu 18.04, Python 3.6.9, datasets version 1.1.2 Hi ! This has been fixed by https://github.com/huggingface/datasets/pull/2770, we'll do a new release soon to make the fix available :) In the meantime please use an older version of `bert_score`
https://github.com/huggingface/datasets/issues/842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
Right now multiprocessing only runs on single node. However it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on the pathos repo](https://github.com/uqfoundation/pathos). If you're familiar with pathos or if you want to give it a try, it could be a nice addition to the library :)
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other node is waiting for it to finish? Thanks!
76
How to enable `.map()` pre-processing pipelines to support multi-node parallelism? Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other node is waiting for it to finish? Thanks! Right now multiprocessing only runs on single node. However it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about pathos [on the pathos repo](https://github.com/uqfoundation/pathos). If you're familiar with pathos or if you want to give it a try, it could be a nice addition to the library :)
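Until something like that lands in the library itself, a common way to get multi-node preprocessing today is to shard the dataset manually and let each node run `.map()` on its own shard, then concatenate the results afterwards. The snippet below is only a sketch of that pattern: `RANK`/`WORLD_SIZE` are assumed to be exported by whatever launcher starts one process per node, and the preprocessing function is a placeholder.

```python
import os
from datasets import load_dataset

# One process per node; the launcher is assumed to export these.
rank = int(os.environ.get("RANK", 0))
world_size = int(os.environ.get("WORLD_SIZE", 1))

dataset = load_dataset("bookcorpus", split="train")

# Each node works on its own contiguous shard...
shard = dataset.shard(num_shards=world_size, index=rank, contiguous=True)

# ...and runs the (placeholder) preprocessing with node-local multiprocessing.
def preprocess(batch):
    batch["n_chars"] = [len(t) for t in batch["text"]]
    return batch

processed = shard.map(preprocess, batched=True, num_proc=os.cpu_count())
processed.save_to_disk(f"processed_shard_{rank:05d}")

# Afterwards the shards can be reloaded and merged on a single node with
# datasets.load_from_disk + datasets.concatenate_datasets.
```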
https://github.com/huggingface/datasets/issues/842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
Curious to hear if anything on that side changed or if your suggestions on how to do it changed @lhoestq :) For our use case, we are entering the regime where trading a few more instances to save a few days would be nice :)
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other node is waiting for it to finish? Thanks!
42
How to enable `.map()` pre-processing pipelines to support multi-node parallelism? Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other node is waiting for it to finish? Thanks! Curious to hear if anything on that side changed or if your suggestions on how to do it changed @lhoestq :) For our use case, we are entering the regime where trading a few more instances to save a few days would be nice :)
https://github.com/huggingface/datasets/issues/842
How to enable `.map()` pre-processing pipelines to support multi-node parallelism?
Currently for multi-node setups we're mostly going towards a nice integration with Dask. But I wouldn't exclude exploring `pathos` more at one point
Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other node is waiting for it to finish? Thanks!
23
How to enable `.map()` pre-processing pipelines to support multi-node parallelism? Hi, Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other node is waiting for it to finish? Thanks! Currently for multi-node setups we're mostly going towards a nice integration with Dask. But I wouldn't exclude exploring `pathos` more at one point
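For reference, a rough illustration of what doing this with Dask looks like today, with no special `datasets` integration assumed: hand the data to `dask.dataframe`, partition it across a `dask.distributed` cluster, and map the preprocessing over partitions. The scheduler address and the preprocessing function below are placeholders.

```python
import pandas as pd
import dask.dataframe as dd
from dask.distributed import Client

client = Client("tcp://scheduler-host:8786")  # placeholder: address of an existing Dask cluster

# Any datasets.Dataset can be handed to Dask via .to_pandas(); a tiny frame stands in here.
pdf = pd.DataFrame({"text": ["usually , he would be tearing around the living room .", "he looked up ."]})
ddf = dd.from_pandas(pdf, npartitions=2)

def preprocess(partition):
    partition["n_chars"] = partition["text"].str.len()
    return partition

# Partitions are processed in parallel on the cluster's workers.
result = ddf.map_partitions(preprocess).compute()
print(result.head())
```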
https://github.com/huggingface/datasets/issues/841
Can not reuse datasets already downloaded
It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'. Where should I put this ```wikipedia.py```, and how do I point `datasets` to it, after I manually download it?
Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on). I successfully downloaded and reuse the wikipedia datasets in a frontal node. When I connect to the gpu node, I supposed to use the downloaded datasets from cache, but failed and end with time out error. On frontal node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Reusing dataset wikipedia (/linkhome/rech/genini01/uua34ms/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd) /linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0 ``` On gpu node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection raise err File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) TimeoutError: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn conn.connect() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect conn = self._new_conn() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] File 
"/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/retry.py", line 446, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 590, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 264, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) ``` Any advice?Thanks!
19
Can not reuse datasets already downloaded Hello, I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (but no http proxy, so can not use wget so on). I successfully downloaded and reuse the wikipedia datasets in a frontal node. When I connect to the gpu node, I supposed to use the downloaded datasets from cache, but failed and end with time out error. On frontal node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Reusing dataset wikipedia (/linkhome/rech/genini01/uua34ms/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/f92599dfccab29832c442b82870fa8f6983e5b4ebbf5e6e2dcbe894e325339cd) /linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at /pytorch/c10/cuda/CUDAFunctions.cpp:100.) return torch._C._cuda_getDeviceCount() > 0 ``` On gpu node: ``` >>> from datasets import load_dataset >>> dataset = load_dataset('wikipedia', '20200501.en') Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 160, in _new_conn (self._dns_host, self.port), self.timeout, **extra_kw File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 84, in create_connection raise err File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/connection.py", line 74, in create_connection sock.connect(sa) TimeoutError: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 677, in urlopen chunked=chunked, File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 381, in _make_request self._validate_conn(conn) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 978, in _validate_conn conn.connect() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 309, in connect conn = self._new_conn() File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connection.py", line 172, in _new_conn self, "Failed to establish a new connection: %s" % e urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 449, in send timeout=timeout File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/connectionpool.py", line 727, in urlopen method, url, error=e, _pool=self, 
_stacktrace=sys.exc_info()[2] File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/urllib3/util/retry.py", line 446, in increment raise MaxRetryError(_pool, url, error or ResponseError(cause)) urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 590, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/load.py", line 264, in prepare_module head_hf_s3(path, filename=name, dataset=dataset) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 200, in head_hf_s3 return requests.head(hf_bucket_url(identifier=identifier, filename=filename, use_cdn=use_cdn, dataset=dataset)) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 104, in head return request('head', url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/linkhome/rech/genini01/uua34ms/work/anaconda3/envs/pytorch_pip170_cuda102/lib/python3.6/site-packages/requests/adapters.py", line 516, in send raise ConnectionError(e, request=request) requests.exceptions.ConnectionError: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x14b7b73e4908>: Failed to establish a new connection: [Errno 110] Connection timed out',)) ``` Any advice?Thanks! It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py' Where and how to assign this ```wikipedia.py``` after I manually download it ?
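A minimal sketch of the local-script workaround for the offline GPU node above, assuming the manually downloaded `wikipedia.py` has been copied somewhere on that node (the path below is a placeholder; the already-prepared cache under `~/.cache/huggingface/datasets` should be reused as-is):

```python
from datasets import load_dataset

# Placeholder location of the manually downloaded dataset script on the offline node.
# Passing a local .py path instead of the name "wikipedia" avoids the HEAD request
# to s3.amazonaws.com that times out when there is no outbound connectivity.
local_script = "/path/to/local/wikipedia.py"

dataset = load_dataset(local_script, "20200501.en")
```

This mirrors the `load_dataset("MY_PATH/csv.py", ...)` workaround described in the offline-mode discussion further down.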
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Which version of pyarrow do you have? Could you try to update pyarrow and try again?
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
18
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?
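For reference, a trivial way to check which pyarrow version is installed before updating, as asked above (nothing environment-specific is assumed here):

```python
import pyarrow

# Print the installed version; if it is behind the latest release,
# `pip install -U pyarrow` (or the conda equivalent) updates it.
print(pyarrow.__version__)
```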
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Thanks for the fast response. I have the latest version, '2.0.0' (I tried to update). I am working with Python 3.8.5.
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
21
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? Thanks for the fast response. I have the latest version '2.0.0' (I tried to update) I am working with Python 3.8.5
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
I think that the issue is similar to this one: https://issues.apache.org/jira/browse/ARROW-9612. The problem is in Arrow when the column data contains long strings. Any ideas on how to bypass this?
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
29
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612 The problem is in arrow when the column data contains long strings. Any ideas on how to bypass this?
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py). In the meantime you can specify yourself the `ReadOptions` config like this: ```python import pyarrow.csv as pac # PyArrow is installed with `datasets` read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case dataset = load_dataset('csv', data_files=files, read_options=read_options) ```
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
56
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py). In the meantime you can specify yourself the `ReadOptions` config like this: ```python import pyarrow.csv as pac # PyArrow is installed with `datasets` read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case dataset = load_dataset('csv', data_files=files, read_options=read_options) ```
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
This did help to load the data. But the problem now is that I get: ArrowInvalid: CSV parse error: Expected 5 columns, got 187. It seems that this changes the parsing, so I changed the table to tab-separated and tried to load it directly from pyarrow, but I got a similar error. Again, it loaded fine in pandas, so I am not sure what to do.
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
66
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? This did help to load the data. But the problem now is that I get: ArrowInvalid: CSV parse error: Expected 5 columns, got 187 It seems that this change the parsing so I changed the table to tab-separated and tried to load it directly from pyarrow But I got a similar error, again it loaded fine in pandas so I am not sure what to do.
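One possible cause of the "Expected 5 columns" error when switching to a tab-separated file is that the delimiter was never passed to the reader, so it keeps splitting on commas. A rough sketch of reading a TSV directly with PyArrow, with a placeholder path and tuning values that are assumptions rather than the exact setup above:

```python
import pyarrow.csv as pac

read_options = pac.ReadOptions(block_size=2**30)  # 1 GiB blocks to dodge the "straddling object" error
parse_options = pac.ParseOptions(
    delimiter="\t",            # the file is tab-separated, not comma-separated
    newlines_in_values=True,   # tolerate embedded newlines inside long text fields
)

table = pac.read_csv("/path/to/data.tsv", read_options=read_options, parse_options=parse_options)
print(table.num_rows, table.num_columns)
```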
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Got almost the same error loading a ~5 GB TSV file: first I got the same error as the OP, then I tried giving it my own ReadOptions and also got the same CSV parse error.
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
32
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? Got almost the same error loading a ~5GB TSV file, first got the same error as OP, then tried giving it my own ReadOptions and also got the same CSV parse error.
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
> We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py). > > In the meantime you can specify yourself the `ReadOptions` config like this: > > ```python > import pyarrow.csv as pac # PyArrow is installed with `datasets` > > read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case > dataset = load_dataset('csv', data_files=files, read_options=read_options) > ``` This did not work for me, I got `TypeError: __init__() got an unexpected keyword argument 'read_options'`
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
82
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? > We should expose the [`block_size` argument](https://arrow.apache.org/docs/python/generated/pyarrow.csv.ReadOptions.html#pyarrow.csv.ReadOptions) of Apache Arrow csv `ReadOptions` in the [script](https://github.com/huggingface/datasets/blob/master/datasets/csv/csv.py). > > In the meantime you can specify yourself the `ReadOptions` config like this: > > ```python > import pyarrow.csv as pac # PyArrow is installed with `datasets` > > read_options = pac.ReadOptions(block_size=1e9) # try to find the right value for your use-case > dataset = load_dataset('csv', data_files=files, read_options=read_options) > ``` This did not work for me, I got `TypeError: __init__() got an unexpected keyword argument 'read_options'`
https://github.com/huggingface/datasets/issues/836
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas
Hi! Yes, because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular, the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list in the [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html)).
Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas?
44
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas Hi All I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly: dataset = load_dataset('csv', data_files=files) When I run it I get: Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) tocache/huggingface/datasets/csv/default-35575a1051604c88/0.0.0/49187751790fa4d820300fd4d0707896e5b941f1a9c644652645b866716a4ac4... I am getting this error: 6a4ac4/csv.py in _generate_tables(self, files) 78 def _generate_tables(self, files): 79 for i, file in enumerate(files): ---> 80 pa_table = pac.read_csv( 81 file, 82 read_options=self.config.pa_read_options, ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/_csv.pyx in pyarrow._csv.read_csv() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status() ~/anaconda2/envs/nlp/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status() **ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)** The size of the file is 3.5 GB. When I try smaller files I do not have an issue. When I load it with 'text' parser I can see all data but it is not what I need. There is no issue reading the file with pandas. any idea what could be the issue? When I am running a different CSV I do not get this line: (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) Any ideas? Hi ! Yes because of issues with PyArrow's CSV reader we switched to using the Pandas CSV reader. In particular the `read_options` argument is not supported anymore, but you can pass any parameter of Pandas' `read_csv` function (see the list here in [Pandas documentation](https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html))
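Building on the comment above, a short sketch of forwarding Pandas `read_csv` parameters through `load_dataset` for a large tab-separated file; the path and parameter values are placeholders chosen for illustration:

```python
from datasets import load_dataset

dataset = load_dataset(
    "csv",
    data_files={"train": "/path/to/large_file.tsv"},  # placeholder path
    sep="\t",     # forwarded to pandas.read_csv: the file is tab-separated
    quoting=3,    # csv.QUOTE_NONE, often useful when text columns contain stray quote characters
)
```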
https://github.com/huggingface/datasets/issues/835
Wikipedia postprocessing
Hi @bminixhofer! Parsing MediaWiki markup is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell), which is pretty good but not perfect. As an alternative, you can also use the Wiki40b dataset, which was pre-processed with an unreleased Google-internal tool.
Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930. Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World. Politische Biografie Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde. mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917 [...] ``` so some Markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model or is this a known imperfection of parsing Wiki markup? Apologies if this has been asked before.
38
Wikipedia postprocessing Hi, thanks for this library! Running this code: ```py import datasets wikipedia = datasets.load_dataset("wikipedia", "20200501.de") print(wikipedia['train']['text'][0]) ``` I get: ``` mini|Ricardo Flores Magón mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfirio Diaz, Ausschnitt des Gemälde „Tierra y Libertad“ von Idelfonso Carrara (?) von 1930. Ricardo Flores Magón (* 16. September 1874 in San Antonio Eloxochitlán im mexikanischen Bundesstaat Oaxaca; † 22. November 1922 im Bundesgefängnis Leavenworth im US-amerikanischen Bundesstaat Kansas) war als Journalist, Gewerkschafter und Literat ein führender anarchistischer Theoretiker und Aktivist, der die revolutionäre mexikanische Bewegung radikal beeinflusste. Magón war Gründer der Partido Liberal Mexicano und Mitglied der Industrial Workers of the World. Politische Biografie Journalistisch und politisch kämpfte er und sein Bruder sehr kompromisslos gegen die Diktatur Porfirio Diaz. Philosophisch und politisch orientiert an radikal anarchistischen Idealen und den Erfahrungen seiner indigenen Vorfahren bei der gemeinschaftlichen Bewirtschaftung des Gemeindelandes, machte er die Forderung „Land und Freiheit“ (Tierra y Libertad) populär. Besonders Francisco Villa und Emiliano Zapata griffen die Forderung Land und Freiheit auf. Seine Philosophie hatte großen Einfluss auf die Landarbeiter. 1904 floh er in die USA und gründete 1906 die Partido Liberal Mexicano. Im Exil lernte er u. a. Emma Goldman kennen. Er verbrachte die meiste Zeit seines Lebens in Gefängnissen und im Exil und wurde 1918 in den USA wegen „Behinderung der Kriegsanstrengungen“ zu zwanzig Jahren Gefängnis verurteilt. Zu seinem Tod gibt es drei verschiedene Theorien. Offiziell starb er an Herzversagen. Librado Rivera, der die Leiche mit eigenen Augen gesehen hat, geht davon aus, dass Magón von einem Mitgefangenen erdrosselt wurde. Die staatstreue Gewerkschaftszeitung CROM veröffentlichte 1923 einen Beitrag, nachdem Magón von einem Gefängniswärter erschlagen wurde. mini|Die Brüder Ricardo (links) und Enrique Flores Magón (rechts) vor dem Los Angeles County Jail, 1917 [...] ``` so some Markup like `mini|` is still left. Should I run another parser on this text before feeding it to an ML model or is this a known imperfection of parsing Wiki markup? Apologies if this has been asked before. Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect. As an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool
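A minimal sketch of the Wiki40b alternative mentioned above, under the assumption that the German split is what is needed and that the usual two-letter language code is the config name:

```python
from datasets import load_dataset

# Wiki40b was cleaned with Google's internal tooling, so the text should be free of
# leftover markup such as the "mini|" thumbnail prefixes shown in the issue.
wiki40b_de = load_dataset("wiki40b", "de")
print(wiki40b_de["train"][0]["text"][:500])
```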
https://github.com/huggingface/datasets/issues/834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
Hey @yjernite. This is a very interesting dataset. I would love to work on adding it, but I see that the link to the data is to a Google Drive folder. Can I just confirm whether `dl_manager` can handle Google Drive URLs, or would this have to be a manual download?
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** https://arxiv.org/pdf/2010.03093.pdf - **Data:** https://github.com/esdurmus/Wikilingua - **Motivation:** Included in the GEM shared task. Multilingual. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
48
[GEM] add WikiLingua cross-lingual abstractive summarization dataset ## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** https://arxiv.org/pdf/2010.03093.pdf - **Data:** https://github.com/esdurmus/Wikilingua - **Motivation:** Included in the GEM shared task. Multilingual. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?
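Not an official answer, but one pattern other dataset scripts use for Google Drive hosting is to rewrite the share link into a direct-download URL and hand it to the download manager; a sketch with a placeholder file id, noting that very large files may still hit Drive's confirmation page and require a manual download:

```python
from datasets import DownloadManager

file_id = "GOOGLE_DRIVE_FILE_ID"  # placeholder -- the id from a drive.google.com/file/d/<id>/view link
url = f"https://drive.google.com/uc?export=download&id={file_id}"

# DownloadManager is the same class a dataset script receives as `dl_manager`.
dl_manager = DownloadManager()
local_path = dl_manager.download(url)
print(local_path)
```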
https://github.com/huggingface/datasets/issues/834
[GEM] add WikiLingua cross-lingual abstractive summarization dataset
Hi @KMFODA! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem). You can use it, for example, to load the French-to-English summarization pairs with: ```python from datasets import load_dataset wikilingua = load_dataset("gem", "wiki_lingua_french_fr") ``` Closed by https://github.com/huggingface/datasets/pull/1807
## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** https://arxiv.org/pdf/2010.03093.pdf - **Data:** https://github.com/esdurmus/Wikilingua - **Motivation:** Included in the GEM shared task. Multilingual. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
42
[GEM] add WikiLingua cross-lingual abstractive summarization dataset ## Adding a Dataset - **Name:** WikiLingua - **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article. - **Paper:** https://arxiv.org/pdf/2010.03093.pdf - **Data:** https://github.com/esdurmus/Wikilingua - **Motivation:** Included in the GEM shared task. Multilingual. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). Hi @KMFODA ! A version of WikiLingua is actually already accessible in the [GEM dataset](https://huggingface.co/datasets/gem) You can use it for example to load the French to English translation with: ```python from datasets import load_dataset wikilingua = load_dataset("gem", "wiki_lingua_french_fr") ``` Closed by https://github.com/huggingface/datasets/pull/1807
https://github.com/huggingface/datasets/issues/827
[GEM] MultiWOZ dialogue dataset
Hi @yjernite, can I help in adding this dataset? I am excited about this because it will be my first contribution to the datasets library as well as to Hugging Face.
## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user side. - **Paper:** https://arxiv.org/pdf/2007.12720.pdf - **Data:** https://github.com/budzianowski/multiwoz - **Motivation:** Will likely be part of the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
30
[GEM] MultiWOZ dialogue dataset ## Adding a Dataset - **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz) - **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user side. - **Paper:** https://arxiv.org/pdf/2007.12720.pdf - **Data:** https://github.com/budzianowski/multiwoz - **Motivation:** Will likely be part of the GEM shared task Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). Hi @yjernite can I help in adding this dataset? I am excited about this because this will be my first contribution to the datasets library as well as to hugginface.
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
I think it would be very cool. I'm currently working on a Compute Canada cluster, and I have internet access only when I'm not on the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset, until I realized I needed an internet connection even though I had already downloaded the data. I'm going to try option 2 you mention for now though! Thanks ;)
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks)
72
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks) I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the data already. I'm going to try option 2 you mention for now though! Thanks ;)
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
Requiring an online connection is a deal breaker in some cases, unfortunately, so it'd be great if an offline mode were added, similar to how `transformers` loads models offline just fine. @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on what that should look like?
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks)
57
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks) Requiring online connection is a deal breaker in some cases unfortunately so it'd be great if offline mode is added similar to how `transformers` loads models offline fine. @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like?
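A minimal sketch of the workaround mentioned in the body above (copying the processing script onto the offline machine and pointing `load_dataset` at the local file); the paths and file names are placeholders, not values from the issue:

```python
# Workaround sketch: on the offline machine, use a local copy of the
# processing script instead of the canonical dataset name.
from datasets import load_dataset

# Normal (online) usage:
# dataset = load_dataset("csv", data_files="my_data.csv")

# Offline usage, with csv.py copied next to your code beforehand:
dataset = load_dataset("./local_scripts/csv.py", data_files="my_data.csv")
print(dataset)
```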
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
here is my way to load a dataset offline, but it **requires** an online machine 1. (online machine) ``` import datasets data = datasets.load_dataset(...) data.save_to_disk(/YOUR/DATASET/DIR) ``` 2. copy the dir from online to the offline machine 3. (offline machine) ``` import datasets data = datasets.load_from_disk(/SAVED/DATA/DIR) ``` HTH.
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks)
47
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks) here is my way to load a dataset offline, but it **requires** an online machine 1. (online machine) ``` import datasets data = datasets.load_dataset(...) data.save_to_disk(/YOUR/DATASET/DIR) ``` 2. copy the dir from online to the offline machine 3. (offline machine) ``` import datasets data = datasets.load_from_disk(/SAVED/DATA/DIR) ``` HTH.
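A slightly more explicit, runnable version of the two-machine workflow described in the comment above; the dataset name, config, and directory are assumptions for illustration:

```python
import datasets

# 1) On the online machine: download and materialize the dataset on disk.
data = datasets.load_dataset("wmt14", "de-en", split="train")
data.save_to_disk("/path/to/wmt14_train")

# 2) Copy /path/to/wmt14_train to the offline machine (scp, rsync, ...).

# 3) On the offline machine: reload it without any network access.
data = datasets.load_from_disk("/path/to/wmt14_train")
print(data[0])
```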
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
> here is my way to load a dataset offline, but it **requires** an online machine > > 1. (online machine) > > ``` > > import datasets > > data = datasets.load_dataset(...) > > data.save_to_disk(/YOUR/DATASET/DIR) > > ``` > > 2. copy the dir from online to the offline machine > > 3. (offline machine) > > ``` > > import datasets > > data = datasets.load_from_disk(/SAVED/DATA/DIR) > > ``` > > > > HTH.
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks)
76
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks) > here is my way to load a dataset offline, but it **requires** an online machine > > 1. (online machine) > > ``` > > import datasets > > data = datasets.load_dataset(...) > > data.save_to_disk(/YOUR/DATASET/DIR) > > ``` > > 2. copy the dir from online to the offline machine > > 3. (offline machine) > > ``` > > import datasets > > data = datasets.load_from_disk(/SAVED/DATA/DIR) > > ``` > > > > HTH.
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
I opened a PR that allows to reload modules that have already been loaded once even if there's no internet. Let me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :) I already note the "freeze" modules option, to prevent local modules updates. It would be a cool feature. ---------- > @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like? Indeed `load_dataset` allows to load remote dataset script (squad, glue, etc.) but also you own local ones. For example if you have a dataset script at `./my_dataset/my_dataset.py` then you can do ```python load_dataset("./my_dataset") ``` and the dataset script will generate your dataset once and for all. ---------- About I'm looking into having `csv`, `json`, `text`, `pandas` dataset builders already included in the `datasets` package, so that they are available offline by default, as opposed to the other datasets that require the script to be downloaded. cf #1724
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks)
179
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks) I opened a PR that allows to reload modules that have already been loaded once even if there's no internet. Let me know if you know other ways that can make the offline mode experience better. I'd be happy to add them :) I already note the "freeze" modules option, to prevent local modules updates. It would be a cool feature. ---------- > @mandubian's second bullet point suggests that there's a workaround allowing you to use your offline (custom?) dataset with `datasets`. Could you please elaborate on how that should look like? Indeed `load_dataset` allows to load remote dataset script (squad, glue, etc.) but also you own local ones. For example if you have a dataset script at `./my_dataset/my_dataset.py` then you can do ```python load_dataset("./my_dataset") ``` and the dataset script will generate your dataset once and for all. ---------- About I'm looking into having `csv`, `json`, `text`, `pandas` dataset builders already included in the `datasets` package, so that they are available offline by default, as opposed to the other datasets that require the script to be downloaded. cf #1724
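To make the `load_dataset("./my_dataset")` pattern above concrete, here is a sketch of what a minimal `./my_dataset/my_dataset.py` script could look like; the class name, features, and `data.csv` file are illustrative assumptions, not taken from the issue:

```python
import csv

import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    """Toy local dataset script: no download step, so it works offline."""

    def _info(self):
        return datasets.DatasetInfo(
            description="Toy local dataset read from a CSV file.",
            features=datasets.Features(
                {"text": datasets.Value("string"), "label": datasets.Value("int32")}
            ),
        )

    def _split_generators(self, dl_manager):
        # Read a file shipped next to the script instead of downloading anything.
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": "data.csv"},
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, row in enumerate(csv.DictReader(f)):
                yield idx, {"text": row["text"], "label": int(row["label"])}
```

After the first run the generated Arrow files are cached, so later runs can reuse them without regenerating the dataset.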
https://github.com/huggingface/datasets/issues/824
Discussion using datasets in offline mode
The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :) You can now use them offline ```python datasets = load_dataset('text', data_files=data_files) ``` We'll do a new release soon
`datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks)
38
Discussion using datasets in offline mode `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too. I create this ticket to discuss a bit and gather what you have in mind or other propositions. Here are some points to open discussion: - if you want to prepare your code/datasets on your machine (having internet connexion) but run it on another offline machine (not having internet connexion), it won't work as is, even if you have all files locally on this machine. - AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run ths same code without modification if files are available locally. - I've also been considering the requirement of downloading Python code and execute on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable, downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once so that you can review it if you want and then be sure you use this one everywhere and not a version dowloaded from internet. WDYT? (thks) The local dataset builders (csv, text , json and pandas) are now part of the `datasets` package since #1726 :) You can now use them offline ```python datasets = load_dataset('text', data_files=data_files) ``` We'll do a new release soon
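Since the comment above notes that the csv/text/json/pandas builders now ship with the package, loading local files no longer requires downloading any script; a short sketch (the file names are placeholders):

```python
from datasets import load_dataset

# These builders are packaged with the library, so they work offline.
csv_ds = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
json_ds = load_dataset("json", data_files="records.jsonl")
text_ds = load_dataset("text", data_files=["part1.txt", "part2.txt"])

print(csv_ds["train"].column_names)
```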
https://github.com/huggingface/datasets/issues/823
how processing in batch works in datasets
Hi I don’t think this is a request for a dataset like you labeled it. I also think this would be better suited for the forum at https://discuss.huggingface.co. we try to keep the issue for the repo for bug reports and new features/dataset requests and have usage questions discussed on the forum. Thanks.
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers
53
how processing in batch works in datasets Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers Hi I don’t think this is a request for a dataset like you labeled it. I also think this would be better suited for the forum at https://discuss.huggingface.co. 
we try to keep the issue for the repo for bug reports and new features/dataset requests and have usage questions discussed on the forum. Thanks.
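The `TypeError` in the body above comes from treating the batch as a list of row dicts: with `batched=True`, `map` passes a dict mapping column names to lists of values. A sketch of how the `_encode` method from the body could be adapted (the tokenizer settings are the issue author's, not a library requirement):

```python
def _encode(self, batch):
    # `batch` is a dict of lists, e.g. {"src_texts": [...], "tgt_texts": [...]}.
    # Dropping return_tensors="pt" lets map() store plain lists; tensors can be
    # requested later with dataset.set_format(type="torch", columns=[...]).
    batch_encoding = self.tokenizer.prepare_seq2seq_batch(
        batch["src_texts"],            # list of source strings
        tgt_texts=batch["tgt_texts"],  # list of target strings
        max_length=self.max_source_length,
        max_target_length=self.max_target_length,
        padding="max_length" if self.tpu_num_cores is not None else "longest",
    )
    return batch_encoding.data
```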
https://github.com/huggingface/datasets/issues/823
how processing in batch works in datasets
Hi Thomas, what I do not get from documentation is that why when you set batched=True, this is processed in batch, while data is not divided to batched beforehand, basically this is a question on the documentation and I do not get the batched=True, but sure, if you think this is more appropriate in forum I will post it there. thanks Best Rabeeh On Tue, Nov 10, 2020 at 12:21 PM Thomas Wolf <notifications@github.com> wrote: > Hi I don’t think this is a request for a dataset like you labeled it. > > I also think this would be better suited for the forum at > https://discuss.huggingface.co. we try to keep the issue for the repo for > bug reports and new features/dataset requests and have usage questions > discussed on the forum. Thanks. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/823#issuecomment-724639476>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ARPXHH4FIPFHVVUHANAE4F3SPEO2JANCNFSM4TQQVEXQ> > . >
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers
167
how processing in batch works in datasets Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? 
thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers Hi Thomas, what I do not get from documentation is that why when you set batched=True, this is processed in batch, while data is not divided to batched beforehand, basically this is a question on the documentation and I do not get the batched=True, but sure, if you think this is more appropriate in forum I will post it there. thanks Best Rabeeh On Tue, Nov 10, 2020 at 12:21 PM Thomas Wolf <notifications@github.com> wrote: > Hi I don’t think this is a request for a dataset like you labeled it. > > I also think this would be better suited for the forum at > https://discuss.huggingface.co. we try to keep the issue for the repo for > bug reports and new features/dataset requests and have usage questions > discussed on the forum. Thanks. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/823#issuecomment-724639476>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ARPXHH4FIPFHVVUHANAE4F3SPEO2JANCNFSM4TQQVEXQ> > . >
https://github.com/huggingface/datasets/issues/823
how processing in batch works in datasets
Yes the forum is perfect for that. You can post in the `datasets` section. Thanks a lot!
Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers
17
how processing in batch works in datasets Hi, I need to process my datasets before it is passed to dataloader in batch, here is my codes ``` class AbstractTask(ABC): task_name: str = NotImplemented preprocessor: Callable = NotImplemented split_to_data_split: Mapping[str, str] = NotImplemented tokenizer: Callable = NotImplemented max_source_length: str = NotImplemented max_target_length: str = NotImplemented # TODO: should not be a task item, but cannot see other ways. tpu_num_cores: int = None # The arguments set are for all tasks and needs to be kept common. def __init__(self, config): self.max_source_length = config['max_source_length'] self.max_target_length = config['max_target_length'] self.tokenizer = config['tokenizer'] self.tpu_num_cores = config['tpu_num_cores'] def _encode(self, batch) -> Dict[str, torch.Tensor]: batch_encoding = self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, padding="max_length" if self.tpu_num_cores is not None else "longest", # TPU hack return_tensors="pt" ) return batch_encoding.data def data_split(self, split): return self.split_to_data_split[split] def get_dataset(self, split, n_obs=None): split = self.data_split(split) if n_obs is not None: split = split+"[:{}]".format(n_obs) dataset = load_dataset(self.task_name, split=split) dataset = dataset.map(self.preprocessor, remove_columns=dataset.column_names) dataset = dataset.map(lambda batch: self._encode(batch), batched=True) dataset.set_format(type="torch", columns=['input_ids', 'token_type_ids', 'attention_mask', 'label']) return dataset ``` I call it like `AutoTask.get(task, train_dataset_config).get_dataset(split="train", n_obs=data_args.n_train) ` This gives the following error, to me because the data inside the dataset = dataset.map(lambda batch: self._encode(batch), batched=True) is not processed in batch, could you tell me how I can process dataset in batch inside my function? thanks File "finetune_multitask_trainer.py", line 192, in main if training_args.do_train else None File "finetune_multitask_trainer.py", line 191, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in get_dataset dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1236, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1207, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 56, in <lambda> dataset = dataset.map(lambda batch: self._encode(batch), batched=True) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in _encode [x["src_texts"] for x in batch], File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks.py", line 37, in <listcomp> [x["src_texts"] for x in batch], TypeError: string indices must be integers Yes the forum is perfect for that. You can post in the `datasets` section. Thanks a lot!
https://github.com/huggingface/datasets/issues/822
datasets freezes
PyTorch is unable to convert strings to tensors, unfortunately. You can use `set_format(type="torch")` on columns that can be converted to tensors, such as token ids. This makes me think that we should probably raise an error, or at least a warning, when one tries to create PyTorch tensors out of text columns.
Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.set_format(type="torch", columns=["text", "label"]) print(len(dataset1))
52
datasets freezes Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.set_format(type="torch", columns=["text", "label"]) print(len(dataset1)) PyTorch is unable to convert strings to tensors, unfortunately. You can use `set_format(type="torch")` on columns that can be converted to tensors, such as token ids. This makes me think that we should probably raise an error, or at least a warning, when one tries to create PyTorch tensors out of text columns.
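A sketch of the pattern the comment suggests: tokenize first, then restrict the torch format to columns that convert cleanly to tensors; the model name is an assumption for illustration:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("imdb", split="train[:10]")

# Turn the text into token ids before asking for torch tensors.
ds = ds.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)
ds.set_format(type="torch", columns=["input_ids", "attention_mask", "label"])

print(ds[0]["input_ids"].shape)  # a torch.Tensor, not a string column
```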
https://github.com/huggingface/datasets/issues/822
datasets freezes
Ultimately, we decided to return a list instead of an error when formatting a string column with the format type `"torch"`. If you think an error would be more appropriate, please open a new issue.
Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.set_format(type="torch", columns=["text", "label"]) print(len(dataset1))
35
datasets freezes Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question']) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.set_format(type="torch", columns=["text", "label"]) print(len(dataset1)) Ultimately, we decided to return a list instead of an error when formatting a string column with the format type `"torch"`. If you think an error would be more appropriate, please open a new issue.
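A quick check of the behavior described in the resolution above, assuming a version of the library that includes that change (string columns are passed through rather than raising):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train[:4]")
ds.set_format(type="torch", columns=["text", "label"])

print(type(ds[0]["label"]))  # torch.Tensor
print(type(ds[0]["text"]))   # str: left as-is instead of raising an error
print(type(ds[:2]["text"]))  # list of str for a slice
```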
https://github.com/huggingface/datasets/issues/816
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues.
To show the issue: ``` python -c "from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))" ``` doesn't always return the same output since `globs` is a dictionary with "a" and "len" as keys, but sometimes not in the same order.
Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues. To fix that one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function.
43
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues. Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues. To fix that one could register an implementation of dill's `save_function` in the `datasets` pickler that sorts the globals keys before dumping a function. To show the issue: ``` python -c "from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))" ``` doesn't always return the same output since `globs` is a dictionary with "a" and "len" as keys, but sometimes not in the same order.
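An illustration of the idea only, not the library's actual fix: make a function hash insensitive to the order in which dill reports its globals by sorting the global names before hashing:

```python
import hashlib

import dill

a = []
func = lambda: len(a)

def order_insensitive_hash(fn):
    # dill.detect.globalvars returns a dict whose key order is not guaranteed.
    globs = dill.detect.globalvars(fn)
    payload = dill.dumps(fn.__code__) + b"".join(
        name.encode("utf-8") + dill.dumps(globs[name]) for name in sorted(globs)
    )
    return hashlib.sha256(payload).hexdigest()

print(order_insensitive_hash(func))  # the globals order no longer affects the result
```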
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hello! Could you give more details? If you mean iterating through one dataset, then yes, the `Dataset` object does implement the `__iter__` method, so you can use ```python for example in dataset: # do something ``` If you want to iterate through several datasets, you can first concatenate them: ```python from datasets import concatenate_datasets new_dataset = concatenate_datasets([dataset1, dataset2]) ``` Let me know if this helps!
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
67
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hello! Could you give more details? If you mean iterating through one dataset, then yes, the `Dataset` object does implement the `__iter__` method, so you can use ```python for example in dataset: # do something ``` If you want to iterate through several datasets, you can first concatenate them: ```python from datasets import concatenate_datasets new_dataset = concatenate_datasets([dataset1, dataset2]) ``` Let me know if this helps!
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hi Huggingface/Datasets team, I want to use the datasets inside Seq2SeqDataset here https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py and there I need to return back each line from the datasets and I am not sure how to access each line and implement this? It seems it also has get_item attribute? so I was not sure if this is iterative dataset? or if this is non-iterable datasets? thanks. On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com> wrote: > Hello ! > Could you give more details ? > > If you mean iter through one dataset then yes, Dataset object does > implement the __iter__ method so you can use > > for example in dataset: > # do something > > If you want to iter through several datasets you can first concatenate them > > from datasets import concatenate_datasets > new_dataset = concatenate_datasets([dataset1, dataset2]) > > Let me know if this helps ! > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA> > . >
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
185
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hi Huggingface/Datasets team, I want to use the datasets inside Seq2SeqDataset here https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py and there I need to return back each line from the datasets and I am not sure how to access each line and implement this? It seems it also has get_item attribute? so I was not sure if this is iterative dataset? or if this is non-iterable datasets? thanks. On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com> wrote: > Hello ! > Could you give more details ? > > If you mean iter through one dataset then yes, Dataset object does > implement the __iter__ method so you can use > > for example in dataset: > # do something > > If you want to iter through several datasets you can first concatenate them > > from datasets import concatenate_datasets > new_dataset = concatenate_datasets([dataset1, dataset2]) > > Let me know if this helps ! > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA> > . >
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
could you tell me please if datasets also has __getitem__ any idea on how to integrate it with Seq2SeqDataset is appreciated thanks On Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com> wrote: > Hi Huggingface/Datasets team, > I want to use the datasets inside Seq2SeqDataset here > https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py > and there I need to return back each line from the datasets and I am not > sure how to access each line and implement this? > It seems it also has get_item attribute? so I was not sure if this is > iterative dataset? or if this is non-iterable datasets? > thanks. > > > > On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com> > wrote: > >> Hello ! >> Could you give more details ? >> >> If you mean iter through one dataset then yes, Dataset object does >> implement the __iter__ method so you can use >> >> for example in dataset: >> # do something >> >> If you want to iter through several datasets you can first concatenate >> them >> >> from datasets import concatenate_datasets >> new_dataset = concatenate_datasets([dataset1, dataset2]) >> >> Let me know if this helps ! >> >> — >> You are receiving this because you authored the thread. >> Reply to this email directly, view it on GitHub >> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>, >> or unsubscribe >> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA> >> . >> >
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
236
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks could you tell me please if datasets also has __getitem__ any idea on how to integrate it with Seq2SeqDataset is appreciated thanks On Mon, Nov 9, 2020 at 10:22 AM Rabeeh Karimi Mahabadi <rabeeh@google.com> wrote: > Hi Huggingface/Datasets team, > I want to use the datasets inside Seq2SeqDataset here > https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py > and there I need to return back each line from the datasets and I am not > sure how to access each line and implement this? > It seems it also has get_item attribute? so I was not sure if this is > iterative dataset? or if this is non-iterable datasets? > thanks. > > > > On Mon, Nov 9, 2020 at 10:18 AM Quentin Lhoest <notifications@github.com> > wrote: > >> Hello ! >> Could you give more details ? >> >> If you mean iter through one dataset then yes, Dataset object does >> implement the __iter__ method so you can use >> >> for example in dataset: >> # do something >> >> If you want to iter through several datasets you can first concatenate >> them >> >> from datasets import concatenate_datasets >> new_dataset = concatenate_datasets([dataset1, dataset2]) >> >> Let me know if this helps ! >> >> — >> You are receiving this because you authored the thread. >> Reply to this email directly, view it on GitHub >> <https://github.com/huggingface/datasets/issues/815#issuecomment-723881199>, >> or unsubscribe >> <https://github.com/notifications/unsubscribe-auth/ARPXHHYRLSSYW6NZN2HYDBTSO6XV5ANCNFSM4TPB7OWA> >> . >> >
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
`datasets.Dataset` objects do indeed implement `__getitem__`. It returns a dictionary with one field per column. We've not yet added integration of the datasets library into the seq2seq utilities. The current seq2seq utilities are based on text files. However, as soon as you have a `datasets.Dataset` with columns "tgt_texts" (str), "src_texts" (str), and "id" (int), you should be able to implement your own Seq2SeqDataset class that wraps your dataset object. Does that make sense to you?
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
76
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks `datasets.Dataset` objects do indeed implement `__getitem__`. It returns a dictionary with one field per column. We've not yet added integration of the datasets library into the seq2seq utilities. The current seq2seq utilities are based on text files. However, as soon as you have a `datasets.Dataset` with columns "tgt_texts" (str), "src_texts" (str), and "id" (int), you should be able to implement your own Seq2SeqDataset class that wraps your dataset object. Does that make sense to you?
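A possible wrapper along the lines of the comment above; this is a sketch only, not code from either library, and the column names follow the ones mentioned ("src_texts", "tgt_texts", "id"):

```python
import torch


class HFSeq2SeqDataset(torch.utils.data.Dataset):
    """Map-style wrapper around a datasets.Dataset with seq2seq columns."""

    def __init__(self, hf_dataset):
        self.dataset = hf_dataset

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, index):
        row = self.dataset[index]  # dict with one field per column
        return {
            "src_texts": row["src_texts"],
            "tgt_texts": row["tgt_texts"],
            "id": row.get("id", index),
        }
```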
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hi I am sorry for asking it multiple times but I am not getting the dataloader type, could you confirm if the dataset library returns back an iterable type dataloader or a mapping type one where one has access to __getitem__, in the former case, one can iterate with __iter__, and how I can configure it to return the data back as the iterative type? I am dealing with large-scale datasets and I do not want to bring all in memory thanks for your help Best regards Rabeeh On Mon, Nov 9, 2020 at 11:17 AM Quentin Lhoest <notifications@github.com> wrote: > datasets.Dataset objects implement indeed __getitem__. It returns a > dictionary with one field per column. > > We've not added the integration of the datasets library for the seq2seq > utilities yet. The current seq2seq utilities are based on text files. > > However as soon as you have a datasets.Dataset with columns "tgt_texts" > (str), "src_texts" (str), and "id" (int) you should be able to implement > your own Seq2SeqDataset class that wraps your dataset object. Does that > make sense ? > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/815#issuecomment-723915556>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ARPXHHYOC22EM7F666BZSOTSO66R3ANCNFSM4TPB7OWA> > . >
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
217
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hi I am sorry for asking it multiple times but I am not getting the dataloader type, could you confirm if the dataset library returns back an iterable type dataloader or a mapping type one where one has access to __getitem__, in the former case, one can iterate with __iter__, and how I can configure it to return the data back as the iterative type? I am dealing with large-scale datasets and I do not want to bring all in memory thanks for your help Best regards Rabeeh On Mon, Nov 9, 2020 at 11:17 AM Quentin Lhoest <notifications@github.com> wrote: > datasets.Dataset objects implement indeed __getitem__. It returns a > dictionary with one field per column. > > We've not added the integration of the datasets library for the seq2seq > utilities yet. The current seq2seq utilities are based on text files. > > However as soon as you have a datasets.Dataset with columns "tgt_texts" > (str), "src_texts" (str), and "id" (int) you should be able to implement > your own Seq2SeqDataset class that wraps your dataset object. Does that > make sense ? > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/815#issuecomment-723915556>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ARPXHHYOC22EM7F666BZSOTSO66R3ANCNFSM4TPB7OWA> > . >
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
`datasets.Dataset` objects are both iterable and mapping types: they implement both `__iter__` and `__getitem__`. For example you can do ```python for example in dataset: # do something ``` or ```python for i in range(len(dataset)): example = dataset[i] # do something ``` When you do that, one and only one example is loaded into memory at a time.
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
57
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks `datasets.Dataset` objects are both iterable and mapping types: they implement both `__iter__` and `__getitem__`. For example you can do ```python for example in dataset: # do something ``` or ```python for i in range(len(dataset)): example = dataset[i] # do something ``` When you do that, one and only one example is loaded into memory at a time.
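As a small illustration of the two access patterns described in that answer (the `squad` split is only used as an example here), plus batched access via slicing, which returns a dict of lists:
```python
# Sketch of the access patterns of a datasets.Dataset; only one example
# (or one slice) is loaded into memory at a time.
from datasets import load_dataset

dataset = load_dataset("squad", split="train[:100]")

# Iteration: each item is a dict with one field per column.
for example in dataset:
    _ = example["question"]
    break

# Random access through __getitem__.
example = dataset[42]

# Slicing returns a dict of lists, convenient for manual batching.
batch = dataset[0:8]
print(len(batch["question"]))  # 8
```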
https://github.com/huggingface/datasets/issues/815
Is dataset iterative or not?
Hi there, Here is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library. thanks ``` import datasets dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.map(lambda example: {"src_texts": "question: {0} context: {1} ".format( example["question"], example["context"]), "tgt_texts": example["answers"]["text"][0]}, remove_columns=dataset1.column_names) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.map(lambda example: {"src_texts": "imdb: " + example["text"], "tgt_texts": str(example["label"])}, remove_columns=dataset2.column_names) train_dataset = datasets.concatenate_datasets([dataset1, dataset2]) train_dataset.set_format(type='torch', columns=['src_texts', 'tgt_texts']) dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32) for id, batch in enumerate(dataloader): print(batch) ```
Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks
113
Is dataset iterative or not? Hi I want to use your library for large-scale training, I am not sure if this is implemented as iterative datasets or not? could you provide me with example how I can use datasets as iterative datasets? thanks Hi there, Here is what I am trying, this is not working for me in map-style datasets, could you please tell me how to use datasets with being able to access ___getitem__ ? could you assist me please correcting this example? I need map-style datasets which is formed from concatenation of two datasets from your library. thanks ``` import datasets dataset1 = load_dataset("squad", split="train[:10]") dataset1 = dataset1.map(lambda example: {"src_texts": "question: {0} context: {1} ".format( example["question"], example["context"]), "tgt_texts": example["answers"]["text"][0]}, remove_columns=dataset1.column_names) dataset2 = load_dataset("imdb", split="train[:10]") dataset2 = dataset2.map(lambda example: {"src_texts": "imdb: " + example["text"], "tgt_texts": str(example["label"])}, remove_columns=dataset2.column_names) train_dataset = datasets.concatenate_datasets([dataset1, dataset2]) train_dataset.set_format(type='torch', columns=['src_texts', 'tgt_texts']) dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=32) for id, batch in enumerate(dataloader): print(batch) ```
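It is hard to tell from the snippet alone what fails, but two things stand out: it is missing `import torch` and an import for `load_dataset`, and `set_format(type='torch')` on string columns may not work in some versions, since strings cannot be converted to tensors. The sketch below keeps the texts as plain Python strings and batches them with a simple `collate_fn`; the splits and batch size are arbitrary and this is only one possible way to make the example run.
```python
# Sketch: concatenate two text-to-text datasets and iterate over them with a
# PyTorch DataLoader, without calling set_format on the string columns.
import torch
from datasets import load_dataset, concatenate_datasets

dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.map(
    lambda ex: {
        "src_texts": "question: {0} context: {1}".format(ex["question"], ex["context"]),
        "tgt_texts": ex["answers"]["text"][0],
    },
    remove_columns=dataset1.column_names,
)
dataset2 = load_dataset("imdb", split="train[:10]")
dataset2 = dataset2.map(
    lambda ex: {"src_texts": "imdb: " + ex["text"], "tgt_texts": str(ex["label"])},
    remove_columns=dataset2.column_names,
)
train_dataset = concatenate_datasets([dataset1, dataset2])


def collate_fn(examples):
    # Keep the texts as lists of strings; tokenization can happen later.
    return {
        "src_texts": [ex["src_texts"] for ex in examples],
        "tgt_texts": [ex["tgt_texts"] for ex in examples],
    }


dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=4, collate_fn=collate_fn)
for batch in dataloader:
    print(batch["src_texts"][0])
    break
```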
https://github.com/huggingface/datasets/issues/813
How to implement DistributedSampler with datasets
Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks.
Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them. I need a DistributedSampler to be able to train the models on TPUs, being able to distribute the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using datasets in which datasets are iterative? To give you more context, I have multiple datasets and I need to write a sampler for this case. thanks.
40
How to implement DistributedSampler with datasets Hi, I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them. I need a DistributedSampler to be able to train the models on TPUs, being able to distribute the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using datasets in which datasets are iterative? To give you more context, I have multiple datasets and I need to write a sampler for this case. thanks. Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks.
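Since a `datasets.Dataset` is map-style (it has `__len__` and `__getitem__`), the stock `torch.utils.data.distributed.DistributedSampler` can be used with it, and `Dataset.shard` can give each host its own chunk. The sketch below is a rough illustration only: `rank` and `world_size` are placeholders that would normally come from the launcher (e.g. `torch_xla` on TPU), and this is not the setup used by finetune_trainer.py itself.
```python
# Rough sketch of per-process sharding; rank/world_size are placeholders.
import torch
from datasets import load_dataset

rank, world_size = 0, 8  # normally read from the distributed environment

dataset = load_dataset("imdb", split="train[:1000]")

# Option 1: let a DistributedSampler pick this process's indices.
sampler = torch.utils.data.distributed.DistributedSampler(
    dataset, num_replicas=world_size, rank=rank, shuffle=True
)
loader = torch.utils.data.DataLoader(dataset, batch_size=8, sampler=sampler)

# Option 2: materialize this process's shard of the dataset up front.
shard = dataset.shard(num_shards=world_size, index=rank)
```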
https://github.com/huggingface/datasets/issues/812
Too much logging
Hi ! Thanks for reporting :) I agree these ones should be hidden when the logging level is warning; we'll fix that
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2
22
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2 Hi ! Thanks for reporting :) I agree these one should be hidden when the logging level is warning, we'll fix that
https://github.com/huggingface/datasets/issues/812
Too much logging
+1, the amount of logging is excessive. Most of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`) ``` I1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on /home/kitaev/.cache/huggingface/datasets/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock I1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock I1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock I1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock I1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock I1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock I1109 21:07:17.069736 140301730502464 metric.py:400] Removing /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow I1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 140296072521488 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock ```
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2
145
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2 +1, the amount of logging is excessive. Most of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these messages were logged after I already called `datasets.logging.set_verbosity_error()`) ``` I1109 21:26:01.742688 139785006901056 filelock.py:318] Lock 139778216292192 released on /home/kitaev/.cache/huggingface/datasets/9ed4f2e133395826175a892c70611f68522c7bc61a35476e8b51a31afb76e4bf.e6f3e3f3e3875a07469d1cfd32e16e1d06b149616b11eef2d081c43d515b492d.py.lock I1109 21:26:01.747898 139785006901056 filelock.py:274] Lock 139778216290176 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:26:01.748258 139785006901056 filelock.py:318] Lock 139778216290176 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:26:01.748412 139785006901056 filelock.py:274] Lock 139778215853024 acquired on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:26:01.748497 139785006901056 filelock.py:318] Lock 139778215853024 released on /home/kitaev/.cache/huggingface/datasets/_home_kitaev_.cache_huggingface_datasets_glue_mnli_1.0.0_7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4.lock I1109 21:07:17.029001 140301730502464 filelock.py:274] Lock 140289479304360 acquired on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock I1109 21:07:17.029341 140301730502464 filelock.py:318] Lock 140289479304360 released on /home/kitaev/.cache/huggingface/datasets/b16d3a04bf2cad1346896852bf120ba846ea1bebb1cd60255bb3a1a2bbcc3a67.ec871b06a00118091ec63eff0a641fddcb8d3c7cd52e855bbb2be28944df4b82.py.lock I1109 21:07:17.058964 140301730502464 filelock.py:274] Lock 140251889388120 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock I1109 21:07:17.060933 140301730502464 filelock.py:318] Lock 140251889388120 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock I1109 21:07:17.061067 140301730502464 filelock.py:274] Lock 140296072521488 acquired on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock I1109 21:07:17.069736 140301730502464 metric.py:400] Removing /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow I1109 21:07:17.069949 140301730502464 filelock.py:318] Lock 
140296072521488 released on /home/kitaev/.cache/huggingface/metrics/glue/mnli/default_experiment-1-0.arrow.lock ```
https://github.com/huggingface/datasets/issues/812
Too much logging
In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default. Also `set_verbosity_warning` does take into account these logs now. Can you try to update the lib ? ``` pip install --upgrade datasets ```
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2
46
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2 In the latest version of the lib the logs about locks are at the DEBUG level so you won't see them by default. Also `set_verbosity_warning` does take into account these logs now. Can you try to update the lib ? ``` pip install --upgrade datasets ```
https://github.com/huggingface/datasets/issues/812
Too much logging
Thanks. For some reason I have to use the older version. Is it possible to fix this with some surface-level trick? I'm still using the 1.13 version of datasets.
I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2
28
Too much logging I'm doing this in the beginning of my script: from datasets.utils import logging as datasets_logging datasets_logging.set_verbosity_warning() but I'm still getting these logs: [2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock [2020-11-07 15:45:41,909][filelock][INFO] - Lock 139958278886176 released on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1c145a0a210d5b9cdce2b60002831e6ed0edc7ab9275d6f0d48.1bd4ccbce9de3dad0698d84674a19d6cc66a84db736a6398110bd196795dde7e.py.lock using datasets version = 1.1.2 Thanks. For some reason I have to use the older version. Is that possible I can fix this by some surface-level trick? I'm still using 1.13 version datasets.
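For older versions where upgrading is not an option, one possible surface-level workaround (based on the logger name visible in the log lines above; not an official fix) is to lower the level of the offending loggers directly through the standard `logging` module:
```python
# Silence the INFO-level lock messages by raising the level of the loggers
# that emit them. The "filelock" name is taken from the log output above.
import logging

logging.getLogger("filelock").setLevel(logging.WARNING)
logging.getLogger("datasets").setLevel(logging.WARNING)
```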
https://github.com/huggingface/datasets/issues/809
Add Google Taskmaster dataset
Hey @yjernite. I was going to start working on this, but found Taskmaster 1, 2 & 3 in the datasets library already, so I think this can be closed now?
## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
27
Add Google Taskmaster dataset ## Adding a Dataset - **Name:** Taskmaster - **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations) - **Paper:** https://arxiv.org/abs/1909.05358 - **Data:** https://github.com/google-research-datasets/Taskmaster - **Motivation:** One of few annotated datasets of this size for goal-oriented dialogue Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
Hi ! The url works on my side. Is the url working in your navigator ? Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
30
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` Hi ! The url works on my side. Is the url working in your navigator ? Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> Hi ! > The url works on my side. > > Is the url working in your navigator ? > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? I tried another server, it's working now. Thanks a lot. And I'm curious about why datasets downloads things from "github" when I load a dataset from local files? Does datasets work if my network crashed?
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
69
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` > Hi ! > The url works on my side. > > Is the url working in your navigator ? > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? I tried another server, it's working now. Thanks a lot. And I'm curious about why download things from "github" when I load dataset from local files ? Dose datasets work if my network crashed?
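Regarding loading local CSV files without network access: the `ConnectionError` comes from fetching the generic `csv.py` processing script from GitHub, not from reading the CSV itself. One possible workaround (behavior may vary across versions, and the path below is hypothetical) is to point `load_dataset` at a local copy of that script so no call to `raw.githubusercontent.com` is needed:
```python
# Sketch: use a local copy of the csv processing script downloaded beforehand.
from datasets import load_dataset

dataset = load_dataset(
    "/path/to/local/csv.py",  # local copy of datasets/csv/csv.py
    data_files="./test.csv",
    delimiter=",",
)
print(dataset)
```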
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> > Hi ! > > The url works on my side. > > Is the url working in your navigator ? > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > I tried another server, it's working now. Thanks a lot. > > And I'm curious about why download things from "github" when I load dataset from local files ? Dose datasets work if my network crashed? I downloaded the script `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and moved it to the package dir `*/datasets/`, which solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` into `datasets/src/datasets/`? Thanks :D
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
103
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` > > Hi ! > > The url works on my side. > > Is the url working in your navigator ? > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > I tried another server, it's working now. Thanks a lot. > > And I'm curious about why it downloads things from "github" when I load a dataset from local files. Does datasets work if my network is down? I downloaded the script `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and moved it into the package dir `*/datasets/`, which solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` into `datasets/src/datasets/`? Thanks :D
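The workaround described in the comment above (downloading `csv.py` once and pointing `load_dataset` at the local copy) can be sketched roughly as follows. This is only an illustration of the idea, not an official offline mode of `datasets` 1.1.2; the local file names are assumptions, and the download step has to run on a machine that can actually reach `raw.githubusercontent.com`.

```python
# Sketch of the workaround from the comment above (local paths are assumptions).
import requests
from datasets import load_dataset

SCRIPT_URL = "https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py"

# Step 1: fetch the csv loading script once, on a machine that can reach GitHub,
# and keep the resulting csv.py next to your own code.
with open("csv.py", "wb") as f:
    f.write(requests.get(SCRIPT_URL).content)

# Step 2: pass the local script path instead of the short name "csv",
# so load_dataset no longer needs to contact raw.githubusercontent.com.
dataset = load_dataset("./csv.py", data_files="./test.csv", delimiter=",")
print(dataset)
```

Newer releases of `datasets` bundle the csv loader with the package itself, so this manual step should only be needed on older versions such as 1.1.2.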
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
hello, how did you solve this problem? > > > Hi ! > > > The url works on my side. > > > Is the url working in your navigator ? > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > I tried another server, it's working now. Thanks a lot. > > And I'm curious about why it downloads things from "github" when I load a dataset from local files. Does datasets work if my network is down? > > I downloaded the script `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and moved it into the package dir `*/datasets/`, which solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` into `datasets/src/datasets/`? > > Thanks :D hello, I tried this, but it still failed. How did you fix this error?
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
136
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` hello, how did you solve this problem? > > > Hi ! > > > The url works on my side. > > > Is the url working in your navigator ? > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > I tried another server, it's working now. Thanks a lot. > > And I'm curious about why it downloads things from "github" when I load a dataset from local files. Does datasets work if my network is down? > > I downloaded the script `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and moved it into the package dir `*/datasets/`, which solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` into `datasets/src/datasets/`? > > Thanks :D hello, I tried this, but it still failed. How did you fix this error?
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> hello, how did you solve this problem? > > > > > Hi ! > > > > The url works on my side. > > > > Is the url working in your navigator ? > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > I tried another server, it's working now. Thanks a lot. > > > And I'm curious about why it downloads things from "github" when I load a dataset from local files. Does datasets work if my network is down? > > > > > > I downloaded the script `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and moved it into the package dir `*/datasets/`, which solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` into `datasets/src/datasets/`? > > Thanks :D > > hello, I tried this, but it still failed. How did you fix this error? Download that script into your local datasets install directory, then call `load_dataset(csv_script_path, data_files)`
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
155
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` > hello, how did you solve this problem? > > > > > Hi ! > > > > The url works on my side. > > > > Is the url working in your navigator ? > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > I tried another server, it's working now. Thanks a lot. > > > And I'm curious about why it downloads things from "github" when I load a dataset from local files. Does datasets work if my network is down? > > > > > > I downloaded the script `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and moved it into the package dir `*/datasets/`, which solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` into `datasets/src/datasets/`? > > Thanks :D > > hello, I tried this, but it still failed. How did you fix this error? Download that script into your local datasets install directory, then call `load_dataset(csv_script_path, data_files)`
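A minimal sketch of the call pattern suggested in the comment above, assuming `csv.py` has already been copied somewhere on the offline machine (for example into the installed `datasets` directory); the exact path below is hypothetical and depends on your environment.

```python
# Call pattern from the comment above. csv_script_path is a hypothetical
# location; use wherever you actually saved csv.py.
from datasets import load_dataset

csv_script_path = "/path/to/site-packages/datasets/csv.py"  # hypothetical path

dataset = load_dataset(
    csv_script_path,           # local loading script, so nothing is fetched from GitHub
    data_files="./test.csv",   # the demo file created at the top of this issue
    delimiter=",",
)
print(dataset["train"][0])
```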
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> > hello, how did you solve this problem? > > > > > Hi ! > > > > > The url works on my side. > > > > > Is the url working in your navigator ? > > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > > > > I tried another server, it's working now. Thanks a lot. > > > > And I'm curious about why it downloads things from "github" when I load a dataset from local files. Does datasets work if my network is down? > > > > > > > > > I downloaded the script `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and moved it into the package dir `*/datasets/`, which solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` into `datasets/src/datasets/`? > > > Thanks :D > > > > > > hello, I tried this, but it still failed. How did you fix this error? > > Download that script into your local datasets install directory, then call `load_dataset(csv_script_path, data_files)` Great, that solved it. Thanks a lot!!!
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
174
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` > > hello, how did you solve this problems? > > > > > Hi ! > > > > > The url works on my side. > > > > > Is the url working in your navigator ? > > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > > > > I tried another server, it's working now. Thanks a lot. > > > > And I'm curious about why download things from "github" when I load dataset from local files ? Dose datasets work if my network crashed? > > > > > > > > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`? > > > Thanks :D > > > > > > hello, I tried this. but it still failed. how do you fix this error? > > Download that script into your local install directory, then call `load_dataset(csv_script_path, data_fiels)` Great, got it! That solved it, thanks a lot!!!
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
> > > > hello, how did you solve this problems? > > > > > Hi ! > > > > > The url works on my side. > > > > > Is the url working in your navigator ? > > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > > > > I tried another server, it's working now. Thanks a lot. > > > > And I'm curious about why download things from "github" when I load dataset from local files ? Dose datasets work if my network crashed? > > > > > > > > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`? > > > Thanks :D > > > > > > hello, I tried this. but it still failed. how do you fix this error? > > Download that script into your local install directory, then call `load_dataset(csv_script_path, data_fiels)` I did exactly that, and then got an error. ValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\dataset_infos.json as a URL or as a local path `--------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-5-fd2106a3f053> in <module> ----> 1 dataset = load_dataset('C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets/csv.py', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) C:\Software\Anaconda\envs\ptk_gpu2\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 C:\Software\Anaconda\envs\ptk_gpu2\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 296 local_dataset_infos_path = cached_path( 297 dataset_infos, --> 298 download_config=download_config, 299 ) 300 except (FileNotFoundError, ConnectionError): C:\Software\Anaconda\envs\ptk_gpu2\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 316 else: 317 # Something unknown --> 318 raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename)) 319 320 if download_config.extract_compressed_file and output_path is not None: ValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\dataset_infos.json as a URL or as a local path `
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
316
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` > > > > hello, how did you solve this problems? > > > > > Hi ! > > > > > The url works on my side. > > > > > Is the url working in your navigator ? > > > > > Are you connected to internet ? Does your network block access to `raw.githubusercontent.com` ? > > > > > > > > > > > > I tried another server, it's working now. Thanks a lot. > > > > And I'm curious about why download things from "github" when I load dataset from local files ? Dose datasets work if my network crashed? > > > > > > > > > I download the scripts `https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py` and move it to the package dir `*/datasets/` solved the problem. Could you please put the file `datasets/datasets/csv/csv.py` to `datasets/src/datasets/`? > > > Thanks :D > > > > > > hello, I tried this. but it still failed. how do you fix this error? 
> > Download that script into your local install directory, then call `load_dataset(csv_script_path, data_fiels)` I did exactly that, and then got an error. ValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\dataset_infos.json as a URL or as a local path `--------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-5-fd2106a3f053> in <module> ----> 1 dataset = load_dataset('C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets/csv.py', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) C:\Software\Anaconda\envs\ptk_gpu2\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 C:\Software\Anaconda\envs\ptk_gpu2\lib\site-packages\datasets\load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 296 local_dataset_infos_path = cached_path( 297 dataset_infos, --> 298 download_config=download_config, 299 ) 300 except (FileNotFoundError, ConnectionError): C:\Software\Anaconda\envs\ptk_gpu2\lib\site-packages\datasets\utils\file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 316 else: 317 # Something unknown --> 318 raise ValueError("unable to parse {} as a URL or as a local path".format(url_or_filename)) 319 320 if download_config.extract_compressed_file and output_path is not None: ValueError: unable to parse C:/Software/Anaconda/envs/ptk_gpu2/Lib/site-packages/datasets\dataset_infos.json as a URL or as a local path `
https://github.com/huggingface/datasets/issues/807
load_dataset for LOCAL CSV files report CONNECTION ERROR
I also experienced this issue this morning. Looks like something specific to windows. I'm working on a fix
## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 73 
sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ```
18
load_dataset for LOCAL CSV files report CONNECTION ERROR ## load_dataset for LOCAL CSV files report CONNECTION ERROR - **Description:** A local demo csv file: ``` import pandas as pd import numpy as np from datasets import load_dataset import torch import transformers df = pd.DataFrame(np.arange(1200).reshape(300,4)) df.to_csv('test.csv', header=False, index=False) print('datasets version: ', datasets.__version__) print('pytorch version: ', torch.__version__) print('transformers version: ', transformers.__version__) # output: datasets version: 1.1.2 pytorch version: 1.5.0 transformers version: 3.2.0 ``` when I load data through `dataset`: ``` dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ``` Error infos: ``` ConnectionError Traceback (most recent call last) <ipython-input-17-bbdadb9a0c78> in <module> ----> 1 dataset = load_dataset('csv', data_files='./test.csv', delimiter=',', autogenerate_column_names=False) ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 588 # Download/copy dataset processing script 589 module_path, hash = prepare_module( --> 590 path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True 591 ) 592 ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs) 266 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version) 267 try: --> 268 local_path = cached_path(file_path, download_config=download_config) 269 except FileNotFoundError: 270 if script_version is not None: ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 306 user_agent=download_config.user_agent, 307 local_files_only=download_config.local_files_only, --> 308 use_etag=download_config.use_etag, 309 ) 310 elif os.path.exists(url_or_filename): ~/.conda/envs/py36/lib/python3.6/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag) 473 elif response is not None and response.status_code == 404: 474 raise FileNotFoundError("Couldn't find file at {}".format(url)) --> 475 raise ConnectionError("Couldn't reach {}".format(url)) 476 477 # Try a second time ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py ``` And I try to connect to the site with requests: ``` import requests requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ``` Similarly Error occurs: ``` --------------------------------------------------------------------------- ConnectionRefusedError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 159 conn = connection.create_connection( --> 160 (self._dns_host, self.port), self.timeout, **extra_kw 161 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in create_connection(address, timeout, source_address, socket_options) 83 if err is not None: ---> 84 raise err 85 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/connection.py in 
create_connection(address, timeout, source_address, socket_options) 73 sock.bind(source_address) ---> 74 sock.connect(sa) 75 return sock ConnectionRefusedError: [Errno 111] Connection refused During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 676 headers=headers, --> 677 chunked=chunked, 678 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _make_request(self, conn, method, url, timeout, chunked, **httplib_request_kw) 380 try: --> 381 self._validate_conn(conn) 382 except (SocketTimeout, BaseSSLError) as e: ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in _validate_conn(self, conn) 975 if not getattr(conn, "sock", None): # AppEngine might not have `.sock` --> 976 conn.connect() 977 ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in connect(self) 307 # Add certificate verification --> 308 conn = self._new_conn() 309 hostname = self.host ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connection.py in _new_conn(self) 171 raise NewConnectionError( --> 172 self, "Failed to establish a new connection: %s" % e 173 ) NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 448 retries=self.max_retries, --> 449 timeout=timeout 450 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/connectionpool.py in urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, **response_kw) 724 retries = retries.increment( --> 725 method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2] 726 ) ~/.conda/envs/py36/lib/python3.6/site-packages/urllib3/util/retry.py in increment(self, method, url, response, error, _pool, _stacktrace) 438 if new_retry.is_exhausted(): --> 439 raise MaxRetryError(_pool, url, error or ResponseError(cause)) 440 MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) <ipython-input-20-18cc3eb4a049> in <module> 1 import requests 2 ----> 3 requests.head("https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py") ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in head(url, **kwargs) 102 103 kwargs.setdefault('allow_redirects', False) --> 104 return request('head', url, **kwargs) 105 106 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/api.py in request(method, url, **kwargs) 59 # cases, and look like a memory leak in others. 
60 with sessions.Session() as session: ---> 61 return session.request(method=method, url=url, **kwargs) 62 63 ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json) 528 } 529 send_kwargs.update(settings) --> 530 resp = self.send(prep, **send_kwargs) 531 532 return resp ~/.conda/envs/py36/lib/python3.6/site-packages/requests/sessions.py in send(self, request, **kwargs) 641 642 # Send the request --> 643 r = adapter.send(request, **kwargs) 644 645 # Total elapsed time of the request (approximately) ~/.conda/envs/py36/lib/python3.6/site-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /huggingface/datasets/1.1.2/datasets/csv/csv.py (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f3cceda5e48>: Failed to establish a new connection: [Errno 111] Connection refused',)) ``` I also experienced this issue this morning. Looks like something specific to windows. I'm working on a fix
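For readers hitting the same `ConnectionError` with datasets 1.1.2 behind a firewall, the workaround discussed in the comments above amounts to downloading the `csv.py` processing script once and pointing `load_dataset` at the local copy instead of the `'csv'` shortcut. A minimal sketch, assuming the script was saved next to your code as `./csv.py` (that path is an assumption for illustration):

```python
from datasets import load_dataset

# Workaround sketch for machines that cannot reach raw.githubusercontent.com:
# 1. Download https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/csv/csv.py manually.
# 2. Pass the local script path to load_dataset instead of the 'csv' shortcut.
dataset = load_dataset(
    "./csv.py",               # local copy of the CSV processing script (assumed location)
    data_files="./test.csv",  # the local CSV file from the report above
    delimiter=",",
)
```

As noted in the comment just above, the remaining `ValueError: unable to parse ... as a URL or as a local path` looked Windows-specific at the time, and a fix was being worked on by the maintainers.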
https://github.com/huggingface/datasets/issues/806
Quail dataset urls are out of date
Hi ! Thanks for reporting. We should fix the urls and use quail 1.3. If you want to contribute feel free to fix the urls and open a PR :)
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.
30
Quail dataset urls are out of date <h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore. Hi ! Thanks for reporting. We should fix the urls and use quail 1.3. If you want to contribute feel free to fix the urls and open a PR :)
https://github.com/huggingface/datasets/issues/806
Quail dataset urls are out of date
Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820) Updated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset](https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset).
<h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore.
24
Quail dataset urls are out of date <h3>Code</h3> ``` from datasets import load_dataset quail = load_dataset('quail') ``` <h3>Error</h3> ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml ``` As per [quail v1.3 commit](https://github.com/text-machine-lab/quail/commit/506501cfa34d9ec6c042d31026ba6fea6bcec8ff) it looks like the location and suggested ordering has changed. In [https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58](https://github.com/huggingface/datasets/blob/master/datasets/quail/quail.py#L52-L58) the quail v1.2 datasets are being pointed to, which don't exist anymore. Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820) Updated links and also regenerated the metadata and dummy data for v1.3 in order to pass verifications as described here: [https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset](https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset).
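As a sanity check on the report above, the old v1.2 raw URL can be probed directly; the 404 is what surfaces as the `FileNotFoundError` in the loader. A small sketch:

```python
import requests

# The v1.2 path referenced by the old quail.py no longer exists on the repo's master branch.
old_url = "https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml"
print(requests.head(old_url).status_code)  # 404 at the time of the report, hence the FileNotFoundError
```

Once the URLs point at the v1.3 layout (as done in PR #820), `load_dataset('quail')` should download normally again.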
https://github.com/huggingface/datasets/issues/805
On loading a metric from datasets, I get the following error
Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object. Could you update pyarrow and try again ? ``` pip install --upgrade pyarrow ```
`from datasets import load_metric` `metric = load_metric('bleurt')` Traceback: 210 class _ArrayXDExtensionType(pa.PyExtensionType): 211 212 ndims: int = None AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' Any help will be appreciated. Thank you.
31
On loading a metric from datasets, I get the following error `from datasets import load_metric` `metric = load_metric('bleurt')` Traceback: 210 class _ArrayXDExtensionType(pa.PyExtensionType): 211 212 ndims: int = None AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' Any help will be appreciated. Thank you. Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object. Could you update pyarrow and try again ? ``` pip install --upgrade pyarrow ```
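A quick way to confirm that an old pyarrow is the culprit before upgrading is to check the installed version and whether `PyExtensionType` is exposed; a small sketch:

```python
import pyarrow

# datasets relies on pyarrow.PyExtensionType, which the comment above says requires pyarrow > 0.17.1.
print(pyarrow.__version__)
print(hasattr(pyarrow, "PyExtensionType"))  # False on old versions, matching the AttributeError above
```

If this prints `False`, run `pip install --upgrade pyarrow` as suggested and restart the Python process so the upgraded version is actually imported.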
https://github.com/huggingface/datasets/issues/804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208) For the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here: https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tasks = load_dataset("kilt_tasks") trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # both in "kilt_tasks" In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) Out[18]: False # and "trivia_qa" In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) Out[13]: True # appears to be fine on the train and validation sets. In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) Out[14]: False In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) Out[15]: False In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) Out[16]: True In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) Out[17]: True ```
32
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') # The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tasks = load_dataset("kilt_tasks") trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # both in "kilt_tasks" In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) Out[18]: False # and "trivia_qa" In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) Out[13]: True # appears to be fine on the train and validation sets. In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) Out[14]: False In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) Out[15]: False In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) Out[16]: True In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) Out[17]: True ``` Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208) For the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here: https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md
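The linking step mentioned in the comment can be sketched as below. It follows the general recipe from the kilt_tasks README (index TriviaQA by `question_id`, then map the question text onto the matching KILT examples); the exact field names should be double-checked against the README, and the test split stays answer-less either way because it is held out for the leaderboard:

```python
from datasets import load_dataset

kilt_tasks = load_dataset("kilt_tasks")
trivia_qa = load_dataset("trivia_qa", "unfiltered.nocontext")

split = "train"  # same idea for "validation"
# Map TriviaQA question_id -> row index, then pull the question text into the KILT split.
qid_to_row = {qid: i for i, qid in enumerate(trivia_qa[split]["question_id"])}

kilt_split = kilt_tasks[f"{split}_triviaqa"].filter(lambda ex: ex["id"] in qid_to_row)
kilt_split = kilt_split.map(
    lambda ex: {"input": trivia_qa[split][qid_to_row[ex["id"]]]["question"]}
)
```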
https://github.com/huggingface/datasets/issues/804
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa')
Oh ok, I guess I read the paper too fast 😅, thank you for your answer!
# The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tasks = load_dataset("kilt_tasks") trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # both in "kilt_tasks" In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) Out[18]: False # and "trivia_qa" In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) Out[13]: True # appears to be fine on the train and validation sets. In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) Out[14]: False In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) Out[15]: False In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) Out[16]: True In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) Out[17]: True ```
16
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') # The issue It's all in the title, it appears to be fine on the train and validation sets. Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ? # How to reproduce ```py from datasets import load_dataset kilt_tasks = load_dataset("kilt_tasks") trivia_qa = load_dataset('trivia_qa', 'unfiltered.nocontext') # both in "kilt_tasks" In [18]: any([output['answer'] for output in kilt_tasks['test_triviaqa']['output']]) Out[18]: False # and "trivia_qa" In [13]: all([answer['value'] == '<unk>' for answer in trivia_qa['test']['answer']]) Out[13]: True # appears to be fine on the train and validation sets. In [14]: all([answer['value'] == '<unk>' for answer in trivia_qa['train']['answer']]) Out[14]: False In [15]: all([answer['value'] == '<unk>' for answer in trivia_qa['validation']['answer']]) Out[15]: False In [16]: any([output['answer'] for output in kilt_tasks['train_triviaqa']['output']]) Out[16]: True In [17]: any([output['answer'] for output in kilt_tasks['validation_triviaqa']['output']]) Out[17]: True ``` Oh ok, I guess I read the paper too fast 😅, thank you for your answer!
https://github.com/huggingface/datasets/issues/801
How to join two datasets?
Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset
Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence. Thanks!
24
How to join two datasets? Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence. Thanks! Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset
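Spelling out the ".map and pick items from the other dataset" approach from the comment, here is a minimal sketch; the tiny in-memory datasets and the column names are made up for illustration:

```python
from datasets import Dataset

ds_a = Dataset.from_dict({"sentence_a": ["first article sentence", "second article sentence"]})
ds_b = Dataset.from_dict({"sentence_b": ["a sentence from another article", "yet another one"]})

# with_indices=True passes the row index to the function,
# which is used here to pick the row at the same position in ds_b
paired = ds_a.map(lambda example, i: {"sentence_b": ds_b[i]["sentence_b"]}, with_indices=True)
print(paired[0])
```

For the paired-sentence use case above, the pairing rule is just how the index into `ds_b` is computed, so shuffling `ds_b` beforehand (e.g. `ds_b.shuffle(seed=42)`) is one simple way to pair sentences drawn from different positions.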
https://github.com/huggingface/datasets/issues/801
How to join two datasets?
Closing this one. Feel free to re-open if you have other questions about this issue. Also linking another discussion about joining datasets: #853
Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence. Thanks!
23
How to join two datasets? Hi, I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels? I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence. Thanks! Closing this one. Feel free to re-open if you have other questions about this issue. Also linking another discussion about joining datasets: #853
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi ! Indeed there's an issue with those links. We should probably use the target urls of the redirections instead
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
20
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. Hi ! 
Indeed there's an issue with those links. We should probably use the target urls of the redirections instead
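One way to see where the 302 actually points, without following the redirect chain that was reported to loop, is to read the Location header directly; a small requests-based sketch, not part of the original discussion:

```python
import requests

resp = requests.head("http://cogcomp.org/Data/QA/QC/train_5500.label", allow_redirects=False)
# the Location header is the redirect target that could be used as the download URL instead
print(resp.status_code, resp.headers.get("Location"))
```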
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi, the same issue here. Could you tell me how to download it through datasets? Thanks.
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
16
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. 
Hi, the same issue here. Could you tell me how to download it through datasets? Thanks.
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Actually it's already fixed on the master branch since #740. I'll do the 1.1.3 release soon.
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
16
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. 
Actually it's already fixed on the master branch since #740. I'll do the 1.1.3 release soon.
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi, thanks, but I did try to install from `pip install git+...` and it does not work for me. Thanks for the help. I have the same issue with wmt16, "ro-en". Thanks. Best, Rabeeh
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
98
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. 
Hi, thanks, but I did try to install from `pip install git+...` and it does not work for me. Thanks for the help. I have the same issue with wmt16, "ro-en". Thanks. Best, Rabeeh
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
I just tested on google colab using ```python !pip install git+https://github.com/huggingface/datasets.git from datasets import load_dataset load_dataset("trec") ``` and it works. Can you detail how you got the issue even when using the latest version on master ? Also about wmt we'll look into it, thanks for reporting !
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
48
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. 
I just tested on google colab using ```python !pip install git+https://github.com/huggingface/datasets.git from datasets import load_dataset load_dataset("trec") ``` and it works. Can you detail how you got the issue even when using the latest version on master ? Also about wmt we'll look into it, thanks for reporting !
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
I think the new URL with .edu is also broken: ``` ConnectionError: Couldn't reach https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label ``` Can't download the dataset anymore.
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
21
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. 
I think the new URL with .edu is also broken: ``` ConnectionError: Couldn't reach https://cogcomp.seas.upenn.edu/Data/QA/QC/train_5500.label ``` Can't download the dataset anymore.
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Hi ! The URL seems to work fine on my side, can you try again ?
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
16
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. Hi ! 
The URL seems to work fine on my side, can you try again ?
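For anyone hitting the same redirect loop, here is a minimal diagnostic sketch (not code from the thread; it only assumes the `requests` library and an arbitrary bound of five hops) that walks the 302 chain by hand to see where the server keeps pointing, instead of letting `requests` raise `TooManyRedirects`:

```python
from urllib.parse import urljoin
import requests

# Follow the redirect chain manually, one hop at a time, and print each
# status code and Location header so the loop becomes visible.
url = "http://cogcomp.org/Data/QA/QC/train_5500.label"
for _ in range(5):  # only look at the first few hops
    resp = requests.head(url, allow_redirects=False, timeout=10)
    print(resp.status_code, resp.headers.get("Location"))
    if resp.status_code not in (301, 302, 303, 307, 308):
        break
    url = urljoin(url, resp.headers["Location"])
```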
https://github.com/huggingface/datasets/issues/798
Cannot load TREC dataset: ConnectionError
Forgot to update, I wrote an email to the webmaster of seas.upenn.edu because I couldn't reach the URL on any machine. This was the answer: ``` Thank you for your report. The server was offline for maintenance and is now available again. ``` Guess it's all back to normal now 🙂
## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here.
50
Cannot load TREC dataset: ConnectionError ## Problem I cannot load "trec" dataset, it results with ConnectionError as shown below. I've tried on both Google Colab and locally. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>. * `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True)` raises `requests.exceptions.TooManyRedirects: Exceeded 30 redirects.` * Opening `http://cogcomp.org/Data/QA/QC/train_5500.label' in a browser works, but opens a different address * Increasing max_redirects to 100 doesn't help Also, while debugging I've seen that requesting 'https://storage.googleapis.com/huggingface-nlp/cache/datasets/trec/default/1.1.0/dataset_info.json' returns <Response [404]> before, but it doesn't raise any errors. Not sure if that's relevant. * datasets.__version__ == '1.1.2' * requests.__version__ == '2.24.0' ## Error trace ``` >>> import datasets >>> datasets.__version__ '1.1.2' >>> dataset = load_dataset("trec", split="train") Using custom data configuration default Downloading and preparing dataset trec/default (download: 350.79 KiB, generated: 403.39 KiB, post-processed: Unknown size, total: 754.18 KiB) to /home/przemyslaw/.cache/huggingface/datasets/trec/default/1.1.0/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/przemyslaw/.cache/huggingface/modules/datasets_modules/datasets/trec/ca4248481ad244f235f4cf277186cad2ee8769f975119a2bbfc41b8932b88bd7/trec.py", line 140, in _split_generators dl_files = dl_manager.download_and_extract(_URLs) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/przemyslaw/.local/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://cogcomp.org/Data/QA/QC/train_5500.label ``` I would appreciate some suggestions here. 
Forgot to update, I wrote an email to the webmaster of seas.upenn.edu because I couldn't reach the URL on any machine. This was the answer: ``` Thank you for your report. The server was offline for maintenance and is now available again. ``` Guess it's all back to normal now 🙂
https://github.com/huggingface/datasets/issues/792
KILT dataset: empty string in triviaqa input field
Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md. (It's not very clear: https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md, which is dead. Closing the issue though :))
# What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` version is `1.1.2` [more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1) # How to reproduce ```py In [1]: from datasets import load_dataset In [4]: dataset = load_dataset("kilt_tasks") # everything works fine, removed output for a better readibility Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data. # empty string in triviaqa input field In [36]: dataset['train_triviaqa'][0] Out[36]: {'id': 'dpql_5197', 'input': '', 'meta': {'left_context': '', 'mention': '', 'obj_surface': {'text': []}, 'partial_evidence': {'end_paragraph_id': [], 'meta': [], 'section': [], 'start_paragraph_id': [], 'title': [], 'wikipedia_id': []}, 'right_context': '', 'sub_surface': {'text': []}, 'subj_aliases': {'text': []}, 'template_questions': {'text': []}}, 'output': {'answer': ['five £', '5 £', '£5', 'five £'], 'meta': [], 'provenance': [{'bleu_score': [1.0], 'end_character': [248], 'end_paragraph_id': [30], 'meta': [], 'section': ['Section::::Question of legal tender.\n'], 'start_character': [246], 'start_paragraph_id': [30], 'title': ['Banknotes of the pound sterling'], 'wikipedia_id': ['270680']}]}} In [35]: dataset['train_triviaqa']['input'][:10] Out[35]: ['', '', '', '', '', '', '', '', '', ''] # same with test set In [37]: dataset['test_triviaqa']['input'][:10] Out[37]: ['', '', '', '', '', '', '', '', '', ''] # works fine with natural questions In [34]: dataset['train_nq']['input'][:10] Out[34]: ['how i.met your mother who is the mother', 'who had the most wins in the nfl', 'who played mantis guardians of the galaxy 2', 'what channel is the premier league on in france', "god's not dead a light in the darkness release date", 'who is the current president of un general assembly', 'when do the eclipse supposed to take place', 'what is the name of the sea surrounding dubai', 'who holds the nba record for most points in a career', 'when did the new maze runner movie come out'] ``` Stay safe :)
21
KILT dataset: empty string in triviaqa input field # What happened Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark) # Versions KILT version is `1.0.0` `datasets` version is `1.1.2` [more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1) # How to reproduce ```py In [1]: from datasets import load_dataset In [4]: dataset = load_dataset("kilt_tasks") # everything works fine, removed output for a better readibility Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data. # empty string in triviaqa input field In [36]: dataset['train_triviaqa'][0] Out[36]: {'id': 'dpql_5197', 'input': '', 'meta': {'left_context': '', 'mention': '', 'obj_surface': {'text': []}, 'partial_evidence': {'end_paragraph_id': [], 'meta': [], 'section': [], 'start_paragraph_id': [], 'title': [], 'wikipedia_id': []}, 'right_context': '', 'sub_surface': {'text': []}, 'subj_aliases': {'text': []}, 'template_questions': {'text': []}}, 'output': {'answer': ['five £', '5 £', '£5', 'five £'], 'meta': [], 'provenance': [{'bleu_score': [1.0], 'end_character': [248], 'end_paragraph_id': [30], 'meta': [], 'section': ['Section::::Question of legal tender.\n'], 'start_character': [246], 'start_paragraph_id': [30], 'title': ['Banknotes of the pound sterling'], 'wikipedia_id': ['270680']}]}} In [35]: dataset['train_triviaqa']['input'][:10] Out[35]: ['', '', '', '', '', '', '', '', '', ''] # same with test set In [37]: dataset['test_triviaqa']['input'][:10] Out[37]: ['', '', '', '', '', '', '', '', '', ''] # works fine with natural questions In [34]: dataset['train_nq']['input'][:10] Out[34]: ['how i.met your mother who is the mother', 'who had the most wins in the nfl', 'who played mantis guardians of the galaxy 2', 'what channel is the premier league on in france', "god's not dead a light in the darkness release date", 'who is the current president of un general assembly', 'when do the eclipse supposed to take place', 'what is the name of the sea surrounding dubai', 'who holds the nba record for most points in a career', 'when did the new maze runner movie come out'] ``` Stay safe :) Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md (Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))
https://github.com/huggingface/datasets/issues/790
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist
I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macOS. It should work now.
I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtualenv venv -p python3 --system-site-packages source venv/bin/activate pip install -e ".[dev]" ``` ![image](https://user-images.githubusercontent.com/59632/97868518-72871800-1cd5-11eb-9cd2-37d4e9d20b39.png) ![image](https://user-images.githubusercontent.com/59632/97868592-977b8b00-1cd5-11eb-8f3c-0c409616149c.png) Python 3.7.7
18
Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error. ```sh git clone https://github.com/huggingface/datasets cd datasets virtualenv venv -p python3 --system-site-packages source venv/bin/activate pip install -e ".[dev]" ``` ![image](https://user-images.githubusercontent.com/59632/97868518-72871800-1cd5-11eb-9cd2-37d4e9d20b39.png) ![image](https://user-images.githubusercontent.com/59632/97868592-977b8b00-1cd5-11eb-8f3c-0c409616149c.png) Python 3.7.7 I saw that `faiss-cpu` 1.6.4.post2 was released recently to fix the installation on macos. It should work now
https://github.com/huggingface/datasets/issues/786
feat(dataset): multiprocessing _generate_examples
I agree that would be cool :) Right now the only distributed dataset builder is based on Apache Beam, so you can use distributed processing frameworks like Dataflow, Spark, Flink, etc. to build your dataset, but it's not really well suited for single-worker parallel processing AFAIK.
forking this out of #741, this issue is only regarding multiprocessing I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case, I would instead of: ```python for datum in data: yield self.load_datum(datum) ``` do: ```python return pool.map(self.load_datum, data) ``` As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset. If this was a larger dataset (and many such datasets exist), it would take multiple days to complete. Using multiprocessing, for example, 40 cores, could speed it up dramatically. For this dataset, hopefully to fully load in under an hour.
46
feat(dataset): multiprocessing _generate_examples forking this out of #741, this issue is only regarding multiprocessing I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case, I would instead of: ```python for datum in data: yield self.load_datum(datum) ``` do: ```python return pool.map(self.load_datum, data) ``` As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset. If this was a larger dataset (and many such datasets exist), it would take multiple days to complete. Using multiprocessing, for example, 40 cores, could speed it up dramatically. For this dataset, hopefully to fully load in under an hour. I agree that would be cool :) Right now the only distributed dataset builder is based on Apache Beam so you can use distributed processing frameworks like Dataflow, Spark, Flink etc. to build your dataset but it's not really well suited for single-worker parallel processing afaik
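As a rough illustration of the pattern proposed in this issue, outside the `datasets` builder API: `load_datum` below is a hypothetical stand-in for the slow (~10 s) per-row work, and the pool size is arbitrary. `Pool.imap` keeps the input order, which is what a generator of keyed examples needs.

```python
from multiprocessing import Pool

def load_datum(datum):
    # stand-in for the expensive per-row loading step
    return {"id": datum, "value": datum * 2}

def generate_examples(data, workers=4):
    # fan the per-row work out to a pool, yielding results in input order
    with Pool(workers) as pool:
        for idx, example in enumerate(pool.imap(load_datum, data)):
            yield idx, example

if __name__ == "__main__":
    for key, example in generate_examples(range(8)):
        print(key, example)
```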
https://github.com/huggingface/datasets/issues/786
feat(dataset): multiprocessing _generate_examples
`_generate_examples` can now be run in parallel thanks to https://github.com/huggingface/datasets/pull/5107. You can find more info [here](https://huggingface.co/docs/datasets/dataset_script#sharding).
forking this out of #741, this issue is only regarding multiprocessing I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case, I would instead of: ```python for datum in data: yield self.load_datum(datum) ``` do: ```python return pool.map(self.load_datum, data) ``` As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset. If this was a larger dataset (and many such datasets exist), it would take multiple days to complete. Using multiprocessing, for example, 40 cores, could speed it up dramatically. For this dataset, hopefully to fully load in under an hour.
16
feat(dataset): multiprocessing _generate_examples forking this out of #741, this issue is only regarding multiprocessing I'd love if there was a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when its `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool. In my use case, I would instead of: ```python for datum in data: yield self.load_datum(datum) ``` do: ```python return pool.map(self.load_datum, data) ``` As the dataset in question, as an example, has **only** 7000 rows, and takes 10 seconds to load each row on average, it takes almost 20 hours to load the entire dataset. If this was a larger dataset (and many such datasets exist), it would take multiple days to complete. Using multiprocessing, for example, 40 cores, could speed it up dramatically. For this dataset, hopefully to fully load in under an hour. `_generate_examples` can now be run in parallel thanks to https://github.com/huggingface/datasets/pull/5107. You can find more info [here](https://huggingface.co/docs/datasets/dataset_script#sharding).
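Since the linked change, the parallelism can be driven from `load_dataset` directly. A hedged sketch, assuming datasets >= 2.7 and hypothetical file paths; the builder shards by input file, so each worker process runs `_generate_examples` on its own subset of files:

```python
from datasets import load_dataset

# Hypothetical local JSON Lines shards; multiple files give multiple shards,
# so each of the num_proc worker processes generates examples from its own files.
files = [f"data/part-{i:02d}.jsonl" for i in range(8)]
ds = load_dataset("json", data_files=files, num_proc=4)
```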
https://github.com/huggingface/datasets/issues/784
Issue with downloading Wikipedia data for low resource language
Hello, maybe you could try to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) for `jv`)?
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks!
21
Issue with downloading Wikipedia data for low resource language Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks! Hello, maybe you could try to use another date for the wikipedia dump (see the available [dates](https://dumps.wikimedia.org/jvwiki) for `jv`)?
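A hedged sketch of the suggestion above: pick a dump date that actually exists at https://dumps.wikimedia.org/jvwiki/ and pass it explicitly. The date `20211101` is only an example borrowed from a later comment in this thread, and this call form assumes a version of the `wikipedia` script that accepts `language` and `date` keyword arguments:

```python
import datasets

# Load the Javanese dump for an explicitly chosen, existing date.
jv_wiki = datasets.load_dataset(
    "wikipedia", language="jv", date="20211101", beam_runner="DirectRunner"
)
```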
https://github.com/huggingface/datasets/issues/784
Issue with downloading Wikipedia data for low resource language
@lhoestq I've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya. Also, using another date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message. ``` ValueError: BuilderConfig 20201120.zh not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', 
'20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` I am pretty sure that `https://dumps.wikimedia.org/enwiki/20201120/dumpstatus.json` exists.
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks!
342
Issue with downloading Wikipedia data for low resource language Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks! @lhoestq I've tried `load_dataset('wikipedia', '20200501.zh', beam_runner='DirectRunner')` and got the same `FileNotFoundError` as @SamuelCahyawijaya. Also, using another date (e.g. `load_dataset('wikipedia', '20201120.zh', beam_runner='DirectRunner')`) will give the following error message. ``` ValueError: BuilderConfig 20201120.zh not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', 
'20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', '20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` I am pretty sure that `https://dumps.wikimedia.org/enwiki/20201120/dumpstatus.json` exists.
https://github.com/huggingface/datasets/issues/784
Issue with downloading Wikipedia data for low resource language
For posterity, here's how I got the data I needed: I needed Bengali, so I had to check which dumps are available here: https://dumps.wikimedia.org/bnwiki/ , then I ran: ``` load_dataset("wikipedia", language="bn", date="20211101", beam_runner="DirectRunner") ```
Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks!
34
Issue with downloading Wikipedia data for low resource language Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet ``` jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner') su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner') ``` And I get the following error for these two languages: Javanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json ``` Sundanese ``` FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json ``` I found from https://github.com/huggingface/datasets/issues/577#issuecomment-688435085 that for small languages, they are directly downloaded and parsed from the Wikipedia dump site, but both of `https://dumps.wikimedia.org/jvwiki/20200501/dumpstatus.json` and `https://dumps.wikimedia.org/suwiki/20200501/dumpstatus.json` are no longer valid. Any suggestions on how to handle this issue? Thanks! For posterity, here's how I got the data I needed: I needed Bengali, so I had to check which dumps are available here: https://dumps.wikimedia.org/bnwiki/ , then I ran: ``` load_dataset("wikipedia", language="bn", date="20211101", beam_runner="DirectRunner") ```
https://github.com/huggingface/datasets/issues/778
Unexpected behavior when loading cached csv file?
Hi ! Thanks for reporting. The same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 . The fix will be available in the next release :)
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again, specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be nice if the information about which `delimiter` or which `column_names` were used influenced the identifier of the cached dataset. Small snippet to reproduce the behavior: ```python import datasets with open("dummy_data.csv", "w") as file: file.write("test,this;text\n") print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names) # ["test", "this;text"] print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names) # still ["test", "this;text"] ``` By the way, thanks a lot for this amazing library! :)
36
Unexpected behavior when loading cached csv file? I read a csv file from disk and forgot so specify the right delimiter. When i read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since i can always specify `download_mode="force_redownload"`. But i think it would be nice if the information what `delimiter` or what `column_names` were used would influence the identifier of the cached dataset. Small snippet to reproduce the behavior: ```python import datasets with open("dummy_data.csv", "w") as file: file.write("test,this;text\n") print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names) # ["test", "this;text"] print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names) # still ["test", "this;text"] ``` By the way, thanks a lot for this amazing library! :) Hi ! Thanks for reporting. The same issue was reported in #730 (but with the encodings instead of the delimiter). It was fixed by #770 . The fix will be available in the next release :)
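Before the fix shipped, a hedged workaround consistent with the thread was to bypass the cache explicitly so the new delimiter takes effect; with `;` as the delimiter, the header `test,this;text` should split into two columns:

```python
import datasets

ds = datasets.load_dataset(
    "csv",
    data_files="dummy_data.csv",
    split="train",
    delimiter=";",
    download_mode="force_redownload",  # ignore the previously cached Arrow files
)
print(ds.column_names)  # ["test,this", "text"] once the delimiter is honored
```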
https://github.com/huggingface/datasets/issues/778
Unexpected behavior when loading cached csv file?
Thanks for the prompt reply and terribly sorry for the spam! Looking forward to the new release!
I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again, specifying the right delimiter, it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be nice if the information about which `delimiter` or which `column_names` were used influenced the identifier of the cached dataset. Small snippet to reproduce the behavior: ```python import datasets with open("dummy_data.csv", "w") as file: file.write("test,this;text\n") print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names) # ["test", "this;text"] print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names) # still ["test", "this;text"] ``` By the way, thanks a lot for this amazing library! :)
17
Unexpected behavior when loading cached csv file? I read a csv file from disk and forgot so specify the right delimiter. When i read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since i can always specify `download_mode="force_redownload"`. But i think it would be nice if the information what `delimiter` or what `column_names` were used would influence the identifier of the cached dataset. Small snippet to reproduce the behavior: ```python import datasets with open("dummy_data.csv", "w") as file: file.write("test,this;text\n") print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train").column_names) # ["test", "this;text"] print(datasets.load_dataset("csv", data_files="dummy_data.csv", split="train", delimiter=";").column_names) # still ["test", "this;text"] ``` By the way, thanks a lot for this amazing library! :) Thanks for the prompt reply and terribly sorry for the spam! Looking forward to the new release!
https://github.com/huggingface/datasets/issues/773
Adding CC-100: Monolingual Datasets from Web Crawl Data
These dataset files are no longer available: the files provided at https://data.statmt.org/cc-100/ can no longer be downloaded. Can anybody fix that issue? @abhishekkrthakur @yjernite
## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
24
Adding CC-100: Monolingual Datasets from Web Crawl Data ## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). These dataset files are no longer available. https://data.statmt.org/cc-100/ files provided in this link are no longer available. Can anybody fix that issue? @abhishekkrthakur @yjernite
https://github.com/huggingface/datasets/issues/773
Adding CC-100: Monolingual Datasets from Web Crawl Data
Hi ! Can you open an issue to report this problem ? This will help keep track of the fix :)
## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
21
Adding CC-100: Monolingual Datasets from Web Crawl Data ## Adding a Dataset - **Name:** CC-100: Monolingual Datasets from Web Crawl Data - **Description:** https://twitter.com/alex_conneau/status/1321507120848625665 - **Paper:** https://arxiv.org/abs/1911.02116 - **Data:** http://data.statmt.org/cc-100/ - **Motivation:** A large scale multi-lingual language modeling dataset. Text is de-duplicated and filtered by how "Wikipedia-like" it is, hopefully helping avoid some of the worst parts of the common crawl. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). Hi ! Can you open an issue to report this problem ? This will help keep track of the fix :)
https://github.com/huggingface/datasets/issues/771
Using `Dataset.map` with `n_proc>1` print multiple progress bars
Yes, it allows you to monitor the speed of each process. Currently each process takes care of one shard of the dataset. At some point we can consider streaming batches to a pool of processes instead of sharding the dataset into `num_proc` parts. At that point it will be easy to use only one progress bar.
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
56
Using `Dataset.map` with `n_proc>1` print multiple progress bars When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. Yes it allows to monitor the speed of each process. Currently each process takes care of one shard of the dataset. At one point we can consider using streaming batches to a pool of processes instead of sharding the dataset in `num_proc` parts. At that point it will be easy to use only one progress bar
https://github.com/huggingface/datasets/issues/771
Using `Dataset.map` with `n_proc>1` print multiple progress bars
Hi @lhoestq, I am facing a similar issue; it is annoying when lots of progress bars are printed. Is there a way to turn off this behavior?
When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed.
27
Using `Dataset.map` with `n_proc>1` print multiple progress bars When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. Hi @lhoestq, I am facing a similar issue, it is annoying when lots of progress bars are printed. Is there a way to turn off this behavior?
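One way to turn the bars off entirely, sketched under the assumption of a datasets version that exposes the top-level progress-bar toggles (recent 2.x releases do):

```python
import datasets

datasets.disable_progress_bar()  # suppress tqdm bars, including the per-worker ones

ds = datasets.Dataset.from_dict({"text": ["a", "bb", "ccc", "dddd"]})
ds = ds.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=2)

datasets.enable_progress_bar()   # restore the default behaviour
print(ds["n_chars"])
```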
https://github.com/huggingface/datasets/issues/769
How to choose proper download_mode in function load_dataset?
`download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work. This makes me think we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing.
Hi, I am a beginner to datasets and I try to use datasets to load my csv file. my csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5 ``` First I try to use this command to load my csv file . ``` python dataset=load_dataset('csv', data_files=['sst_test.csv']) ``` It seems good, but when i try to overwrite the convert_options to convert 'label' columns from int64 to float32 like this. ``` python import pyarrow as pa from pyarrow import csv read_options = csv.ReadOptions(block_size=1024*1024) parse_options = csv.ParseOptions() convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()}) dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options) ``` It keeps the same: ```shell Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210) ``` I think this issue is caused by the parameter "download_mode" Default to REUSE_DATASET_IF_EXISTS because after I delete the cache_dir, it seems right. Is it a bug? How to choose proper download_mode to avoid this issue?
17
How to choose proper download_mode in function load_dataset? Hi, I am a beginner to datasets and I try to use datasets to load my csv file. my csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5 ``` First I try to use this command to load my csv file . ``` python dataset=load_dataset('csv', data_files=['sst_test.csv']) ``` It seems good, but when i try to overwrite the convert_options to convert 'label' columns from int64 to float32 like this. ``` python import pyarrow as pa from pyarrow import csv read_options = csv.ReadOptions(block_size=1024*1024) parse_options = csv.ParseOptions() convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()}) dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options) ``` It keeps the same: ```shell Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210) ``` I think this issue is caused by the parameter "download_mode" Default to REUSE_DATASET_IF_EXISTS because after I delete the cache_dir, it seems right. Is it a bug? How to choose proper download_mode to avoid this issue? `download_mode=datasets.GenerateMode.FORCE_REDOWNLOAD` should work. This makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing
https://github.com/huggingface/datasets/issues/769
How to choose proper download_mode in function load_dataset?
Indeed you should use `features` in this case. ```python features = Features({'text': Value('string'), 'label': Value('float32')}) dataset = load_dataset('csv', data_files=['sst_test.csv'], features=features) ``` Note that because of an issue with the caching when you change the features (see #750 ) you still need to specify the `FORCE_REDOWNLOAD ` flag. I'm working on a fix for this one
Hi, I am a beginner to datasets and I try to use datasets to load my csv file. my csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5 ``` First I try to use this command to load my csv file . ``` python dataset=load_dataset('csv', data_files=['sst_test.csv']) ``` It seems good, but when i try to overwrite the convert_options to convert 'label' columns from int64 to float32 like this. ``` python import pyarrow as pa from pyarrow import csv read_options = csv.ReadOptions(block_size=1024*1024) parse_options = csv.ParseOptions() convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()}) dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options) ``` It keeps the same: ```shell Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210) ``` I think this issue is caused by the parameter "download_mode" Default to REUSE_DATASET_IF_EXISTS because after I delete the cache_dir, it seems right. Is it a bug? How to choose proper download_mode to avoid this issue?
55
How to choose proper download_mode in function load_dataset? Hi, I am a beginner to datasets and I try to use datasets to load my csv file. my csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5 ``` First I try to use this command to load my csv file . ``` python dataset=load_dataset('csv', data_files=['sst_test.csv']) ``` It seems good, but when i try to overwrite the convert_options to convert 'label' columns from int64 to float32 like this. ``` python import pyarrow as pa from pyarrow import csv read_options = csv.ReadOptions(block_size=1024*1024) parse_options = csv.ParseOptions() convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()}) dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options) ``` It keeps the same: ```shell Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210) ``` I think this issue is caused by the parameter "download_mode" Default to REUSE_DATASET_IF_EXISTS because after I delete the cache_dir, it seems right. Is it a bug? How to choose proper download_mode to avoid this issue? Indeed you should use `features` in this case. ```python features = Features({'text': Value('string'), 'label': Value('float32')}) dataset = load_dataset('csv', data_files=['sst_test.csv'], features=features) ``` Note that because of an issue with the caching when you change the features (see #750 ) you still need to specify the `FORCE_REDOWNLOAD ` flag. I'm working on a fix for this one
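Putting the two remarks together, a hedged sketch that passes the target schema and also forces regeneration so the cached features do not win (needed on the pre-fix versions discussed here; `sst_test.csv` is the file from the thread):

```python
import datasets
from datasets import Features, Value

# Cast the label column to float32 at load time and bypass the cached copy.
features = Features({"text": Value("string"), "label": Value("float32")})
dataset = datasets.load_dataset(
    "csv",
    data_files=["sst_test.csv"],
    features=features,
    download_mode="force_redownload",  # ignore the cached copy with the old schema
)
print(dataset["train"].features)
```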
https://github.com/huggingface/datasets/issues/769
How to choose proper download_mode in function load_dataset?
https://github.com/huggingface/datasets/issues/769#issuecomment-717837832 > This makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing @lhoestq do you still think we should rename it?
Hi, I am a beginner to datasets and I try to use datasets to load my csv file. my csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5 ``` First I try to use this command to load my csv file . ``` python dataset=load_dataset('csv', data_files=['sst_test.csv']) ``` It seems good, but when i try to overwrite the convert_options to convert 'label' columns from int64 to float32 like this. ``` python import pyarrow as pa from pyarrow import csv read_options = csv.ReadOptions(block_size=1024*1024) parse_options = csv.ParseOptions() convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()}) dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options) ``` It keeps the same: ```shell Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210) ``` I think this issue is caused by the parameter "download_mode" Default to REUSE_DATASET_IF_EXISTS because after I delete the cache_dir, it seems right. Is it a bug? How to choose proper download_mode to avoid this issue?
25
How to choose proper download_mode in function load_dataset? Hi, I am a beginner to datasets and I try to use datasets to load my csv file. my csv file looks like this ``` text,label "Effective but too-tepid biopic",3 "If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4 "Emerges as something rare , an issue movie that 's so honest and keenly observed that it does n't feel like one .",5 ``` First I try to use this command to load my csv file . ``` python dataset=load_dataset('csv', data_files=['sst_test.csv']) ``` It seems good, but when i try to overwrite the convert_options to convert 'label' columns from int64 to float32 like this. ``` python import pyarrow as pa from pyarrow import csv read_options = csv.ReadOptions(block_size=1024*1024) parse_options = csv.ParseOptions() convert_options = csv.ConvertOptions(column_types={'text': pa.string(), 'label': pa.float32()}) dataset = load_dataset('csv', data_files=['sst_test.csv'], read_options=read_options, parse_options=parse_options, convert_options=convert_options) ``` It keeps the same: ```shell Dataset(features: {'text': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None)}, num_rows: 2210) ``` I think this issue is caused by the parameter "download_mode" Default to REUSE_DATASET_IF_EXISTS because after I delete the cache_dir, it seems right. Is it a bug? How to choose proper download_mode to avoid this issue? https://github.com/huggingface/datasets/issues/769#issuecomment-717837832 > This makes me think we we should rename this to DownloadMode.FORCE_REDOWNLOAD. Currently that's confusing @lhoestq do you still think we should rename it?